[gfx1201] Enable RMSNorm support for gfx1201#4
Closed
big-yellow-duck wants to merge 1460 commits into main from
Conversation
Aiter fails its import test with ModuleNotFoundError: No module named 'packaging'. The aiter package imports and uses the 'packaging' module at runtime in multiple files, but only declares it in setup_requires (build-time) rather than also in install_requires (runtime). This causes "ModuleNotFoundError: No module named 'packaging'" when importing aiter in environments where 'packaging' is not already installed. This PR fixes the issue by patching setup.py to declare 'packaging' as a runtime dependency. Signed-off-by: Anu Oguntayo <aoguntay@redhat.com>
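The fix can be sketched as follows (a hedged illustration, not the exact aiter setup.py; the metadata values shown are illustrative):

```python
from setuptools import setup

setup(
    name="aiter",
    # 'packaging' was previously listed only here, which makes it
    # available while building the wheel...
    setup_requires=["packaging"],
    # ...but aiter also imports 'packaging' at runtime, so it must be
    # declared as an install-time (runtime) dependency as well:
    install_requires=["packaging"],
)
```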
A GitHub Actions CI pipeline is aborted if a process exits with a non-zero code. This commit fixes a bug in the Triton test selection script: regardless of the test selection outcome, the CI pipeline should not be aborted.
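The fix amounts to decoupling the selection outcome from the script's exit status. A minimal sketch (the selection rule and function names here are hypothetical, not the actual script):

```python
def select_triton_tests(changed_files):
    # Hypothetical selection rule: run Triton tests only when files
    # mentioning Triton were touched. Selecting nothing is a valid outcome.
    return [f for f in changed_files if "triton" in f]

def main(argv):
    selected = select_triton_tests(argv)
    print("\n".join(selected) if selected else "no tests selected")
    # Always report success: an empty selection must not abort the
    # CI pipeline, which treats any non-zero exit code as a failure.
    return 0

print(main(["op_tests/triton_tests/test_gemm.py", "README.md"]))
```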
* enable gptoss_sink
* Update csrc/py_itfs_ck/mha_batch_prefill_kernels.cu
* Update mha_batch_prefill_kernels.cu
* update mha_bwd parameter
* Update mha.py
* Fix formatting for bias argument in rocm_ops.hpp
* fix some format error
* update args
* Update mha_fwd.cpp
* update ck commit
* use aiter main branch ck commit
* Update mha_batch_prefill_kernels.cu

Signed-off-by: Linjun-AMD <Jun.Lin@amd.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: solin <bingzhou@amd.com> Co-authored-by: Xin Huang <Xin.Huang@amd.com>
* fix(paps): fix support for multi kheads
* fix(paps): fix reset work_indptr and use empty init in ut

Signed-off-by: Double Young <yang.yang2@amd.com>
…ROCm#1762)

* [Docs] Add README for Triton Ops detailing general maintenance points
* initial commit
* fix
* test ck tile tuning
* temp save
* refactor
* fix tile
* support ck tile abquant
* fix error
* test tuning
* fix tile compile error
* add more tile instance
* test tile instance tuning
* add more valid instances
* fix test bug
* fix default tile instance
* fix actions error
* format code style
* Apply Black 25.12.0 formatting to match CI
* fix CI
* rename legacy
* add profile result
* update ck
* code format
* fix mismatch ck kernel
* delete tune flag
* merge aiter main branch
* Testing fake_tensor fix
* Same logic for var len attn
* Fix

Co-authored-by: Lingpeng Jin <103567126+valarLip@users.noreply.github.com>
…ernally (ROCm#1821)

* Implement a new API that switches between asm and hip pa. Inference engines should now call paged_attention_common with the shuffled KV cache layout, and aiter internally decides between the asm or hip kernel. HIP is more performant for lower concurrencies (< 128). A unit test has also been updated to cover the new interface. Note that shuffled scales are not supported in HIP, so requests are always redirected to asm when the KV cache is in int8 or fp8 format.
* Delete op_tests/README_pa_merged_tests.md
* Delete op_tests/test_pa_merged.py
* Fix formatting according to Black requirements
* Fix one last place with broken formatting
* Remove modification to pa_v1, we already have pa for 5D kv cache
* Fix another formatting issue
* Add proper quant support for the common API
* Apply formatting
* Remove redundant parameters

Co-authored-by: Sergey Solo <ssolovye@amd.com>
Co-authored-by: Mikko Tukiainen <mikko.tukiainen@amd.com>
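The internal dispatch described above can be sketched as follows (the function name and return values are hypothetical; the thresholds and restrictions are taken from the commit description):

```python
def choose_pa_backend(concurrency: int, kv_cache_dtype: str) -> str:
    """Sketch of the asm-vs-HIP choice made inside paged_attention_common."""
    # Shuffled scales are not supported by the HIP kernel, so quantized
    # (int8/fp8) KV caches are always redirected to the asm path.
    if kv_cache_dtype in ("int8", "fp8"):
        return "asm"
    # The HIP kernel is more performant at lower concurrencies (< 128).
    return "hip" if concurrency < 128 else "asm"

print(choose_pa_backend(64, "bf16"))   # hip
print(choose_pa_backend(256, "bf16"))  # asm
print(choose_pa_backend(64, "fp8"))    # asm
```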
* add_tune_dsfp4_gemm
* update
* Update aiter/jit/core.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fused rope_kv and bmm
* Apply suggestions from @github-actions[bot]
* Update fused_bmm_rope_kv_cache.py
* add test
* update
* parse bmm config
* fp8 API and kernel change
* fp8 UT
* Formatting with black
* pytest skip if fp4/8 is not avail on device
* code format with black

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: ShaoChunLee <Shao-Chun.Lee@amd.com>
TODO: improve moe tuner.
* fix accuracy issue on triton paged_pa_mqa
* add int64 annotation for input stride

Signed-off-by: ganyi <ygan@amd.com>
* Fix code style after updating Black to 26.1.0
* Update aiter/ops/mha.py

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
…x950 (ROCm#1857)

* first commit for mla prefill
* add test for mla prefill
* support ut and mla python reduce version
* support reduce in torch and triton
* push 350 .co file
* add triton reduce op and reconstruct mla dispatch
* fix gqa bug
* rebase and force
* use hip reduce for test
* change ut
* support triton reduce kernel without head info
* feat(ps): add host v1_2 generate_ps_metadata & ut for mla_prefill
* fix(ps): fix ps metadata allocation
* fix triton reduce fallback error
* fix(ps): fix OOM in mla prefill
* fix(ps): reduce pre-allocation
* test(mla_prefill): enhance mla_prefill_ps ut & generate_ps_metadata
* solve asm_mla.cu conflict and add conflicting co in csv
* format code
* test(mla_prefill): fix reduce perf measure and format
* refactor(mla_prefill): fix ruff format
* fix(mla_prefill): fix nan in sp3
* fix new pr: rename .co and update cu and csv

Signed-off-by: Double Young <yang.yang2@amd.com>
Co-authored-by: ZhangLirong-amd <lirzhang@amd.com>
Co-authored-by: ZhangLirong-amd <Lirong.Zhang@amd.com>
… improvement 18% across all configs (ROCm#1869)
* opt_unit_test
* remove test_gemm_a8w8_blockscale_mi350.py
* skip mla_prefill_ps when gfx942
* use chip info
* Causal conv1d triton sglang
…rage speedup on MI300X (ROCm#1879)
* Upload wheel to S3 in CI test workflow
* Reuse wheel from image build instead of rebuilding
* Restrict wheel upload to main branch only
* Simplify wheel step conditions
A8W8 was missing split-k support. This adds the support along with a unit test.
If the `git` binary is missing, `FileNotFoundError` is raised in place of `subprocess.CalledProcessError`, causing a crash on import.
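A minimal sketch of the kind of guard this implies (the helper name and fallback value are hypothetical, not aiter's actual code):

```python
import subprocess

def git_commit_hash():
    # Catch FileNotFoundError as well as CalledProcessError: when the
    # `git` binary is absent, subprocess raises FileNotFoundError, which
    # previously escaped the handler and crashed on import.
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"],
            text=True,
            stderr=subprocess.DEVNULL,
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"

print(git_commit_hash())
```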
* opt prefill
* fix atomic mode bugs
* Apply suggestions from code format review
* update moe tuner stride
* format
* update flydsl tuner
* update flydsl tuner config

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* revert to unfused quant kernels for perf
* int64 offsets to avoid bhsd overflow of int32
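The int64-offset point can be made concrete with a small sketch (the shape values are hypothetical): a flattened batch/head/seq/dim (bhsd) offset can exceed the int32 range even for plausible tensor sizes, so stride arithmetic must use 64-bit integers.

```python
import ctypes

def bhsd_offset(b, h, s, d, H, S, D):
    # Flattened offset into a [B, H, S, D] tensor; in Python this is
    # arbitrary-precision, but in int32 kernel arithmetic it can overflow.
    return ((b * H + h) * S + s) * D + d

# Hypothetical shape: 16 heads, seqlen 65536, head dim 256.
idx = bhsd_offset(15, 15, 65535, 255, H=16, S=65536, D=256)
print(idx)                        # 4294967295, fits comfortably in int64
print(ctypes.c_int32(idx).value)  # -1: the same value wraps around in int32
```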
* support activation input in mxfp4 format.
* add hipblaslt error log
* Update gradlib/csrc/hipbsolgemm.cu

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…Cm#2259)

Add 4 missing fields in the fmha_fwd_args aggregate initialization in mha_fwd.cu to match the new CK struct layout:
- seqstart_v_scale_ptr (nullptr)
- stride_q_descale, stride_k_descale, stride_v_descale (0)
ROCm#2167)

* Prepare repository for size optimization

  Introduces safeguards and documentation to prepare for a major repository cleanup that will reduce the repo size from 547 MB to ~130 MB (76% reduction): an enhanced .gitignore to prevent large files (test data, build artifacts), a test data download script framework, and documentation of the cleanup plan and migration process. The actual history cleanup will be performed separately during a scheduled maintenance window, requiring all contributors to re-clone. See REPO_CLEANUP_PLAN.md for full details. No immediate changes to functionality; protective measures only.

* Address Copilot code review feedback

  - Fix migration steps to preserve local changes using patch files instead of git stash
  - Update size reduction numbers to match actual test results (105 MB vs the aspirational 50 MB)
  - Clarify that a pre-commit hook for size checks is not included (to avoid conflict with the existing hook)
  - Update hook installation instructions to align with the existing CONTRIBUTE.md workflow
  - Fix the test data download script to exit with an error code when unconfigured
  - Remove references to non-existent files (paths_to_remove.txt, aiter_cleanup_results.md)

* docs: add documentation website

  Adds a comprehensive Sphinx-based documentation website for AITER: an installation guide with 3 installation methods, a quick start tutorial with runnable examples, an API reference for attention, GEMM, and operators, a basic usage tutorial with performance comparisons, and configuration for doc.aiter.amd.com hosting. Structure:
  - docs/conf.py: Sphinx configuration with AMD branding
  - docs/index.rst: main documentation landing page
  - docs/installation.rst: detailed installation instructions
  - docs/quickstart.rst: 5-minute getting-started guide
  - docs/api/: complete API reference documentation
  - docs/tutorials/: hands-on tutorials with code examples

  The documentation can be built locally with: cd docs && pip install -r requirements.txt && make html. This brings AITER documentation quality on par with FlashInfer.

* ci: add GitHub Actions workflow for documentation

  Automated build and deployment for the Sphinx documentation: builds on push to the docs-website and main branches, deploys to GitHub Pages via the gh-pages branch, uploads build artifacts for PR previews, and caches pip dependencies for faster builds. Documentation will be available at: https://sunway513.github.io/aiter/

* ci: trigger documentation workflow
* fix: trigger workflow on docs-website branch
* fix: add missing _static and _templates directories for Sphinx
* fix: simplify Sphinx build to avoid treating warnings as errors
* docs: add comprehensive 'How to Add a New Operator' tutorial

  Detailed step-by-step guide for adding custom operators to AITER, covering the complete workflow from Python interface to ROCm kernel: define the operator interface (Python), implement the ROCm/HIP kernel, create PyBind11 bindings, update the build configuration, add comprehensive tests, build and install, and register in the main module. Advanced topics cover CK (Composable Kernel) integration, Triton kernel development, the fused-operations pattern, in-place operations, and autograd support for training. Includes a complete RMSNorm example as reference. This addresses team feedback: "it would be perfect to have a 'how to add a new op' guide". Also updates docs/index.rst (Quick Links section) and docs/tutorials/index.rst (Advanced Topics section).

* docs: fix critical factual errors in documentation

  Fixes the highest-priority errors discovered in a factual accuracy audit (22 factual errors found in total; see DOCUMENTATION_AUDIT_REPORT.md for complete findings):
  - Fix incorrect package name (aiter → amd-aiter) in the installation instructions
  - Replace non-working verification code with functional examples
  - Fix the MOE quickstart example to use the actual fmoe() API instead of the non-existent grouped_gemm()

* ci: add path filters to docs workflow

  Only trigger the docs build/deploy when docs/** or the workflow file itself is modified, avoiding unnecessary CI runs on non-doc changes. Addresses review feedback from @gyohuangxin.

* fix: apply black formatting to docs/conf.py

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Xin Huang <Xin.Huang@amd.com>
Co-authored-by: valarLip <340077269@qq.com>
* opt kernel when batch>=32 and add more decode tests
* Apply suggestions from code review
* add max_tokens_per_batch interface to support more scenarios
* Update test_fused_qk_norm_rope_cache_quant.py

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
🏷️ CI Guide

Runs automatically on every PR:
Extended tests (opt-in via labels):
tjtanaa reviewed on Mar 16, 2026 and Mar 19, 2026
Motivation
The RMSNorm kernels in `csrc/kernels/rmsnorm_quant_kernels.cu` use CDNA-specific inline assembly instructions that are not supported on the RDNA4 (gfx1201) architecture, which prevents the RMSNorm operation from working on gfx1201 GPUs. This PR enables RMSNorm support on gfx1201 by replacing the unsupported assembly instructions with portable HIP/C++ alternatives.

Technical Details
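To make the replacements concrete, the arithmetic the portable alternatives perform can be sketched at the bit-pattern level in Python (the kernel itself does this with `ck_tile::bit_cast` and shifts in HIP C++; the helper names below are illustrative, not the kernel's):

```python
import struct

def pk_mul_f32(a, b):
    # Portable stand-in for the packed multiply: two scalar multiplies
    # instead of the CDNA v_pk_mul_f32 instruction.
    return (a[0] * b[0], a[1] * b[1])

def unpack_bf16x2(packed):
    # A bf16 value is the upper 16 bits of an f32, so shifting each
    # half-word into the high bits and reinterpreting yields the float.
    lo = struct.unpack("<f", struct.pack("<I", (packed & 0xFFFF) << 16))[0]
    hi = struct.unpack("<f", struct.pack("<I", packed & 0xFFFF0000))[0]
    return lo, hi

def unpack_fp16x2(packed):
    # fp16 needs a real format conversion ('e' is IEEE half precision),
    # replacing the v_cvt_f32_f16 instructions.
    lo = struct.unpack("<e", struct.pack("<H", packed & 0xFFFF))[0]
    hi = struct.unpack("<e", struct.pack("<H", (packed >> 16) & 0xFFFF))[0]
    return lo, hi

print(unpack_bf16x2(0x40003F80))  # (1.0, 2.0): bf16 bit patterns of 1.0, 2.0
print(unpack_fp16x2(0x40003C00))  # (1.0, 2.0): fp16 bit patterns of 1.0, 2.0
```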
Changes Overview
Modified `csrc/kernels/rmsnorm_quant_kernels.cu` to replace CDNA-specific inline assembly with portable implementations for gfx11/gfx12:

* Replaced `v_pk_mul_f32` inline assembly (lines 146-151, 196-201): the `asm volatile("v_pk_mul_f32 %0, %1, %2" ...)` blocks are now behind a `__gfx11__ || __gfx12__` guard.
* Replaced bf16 unpacking inline assembly (lines 162-176): `v_lshlrev_b32_e32` and `v_and_b32_e32` instructions replaced with `ck_tile::bit_cast` and shift operations for unpacking bf16 values.
* Replaced fp16 unpacking inline assembly (lines 180-194): `v_cvt_f32_f16_e32` and `v_cvt_f32_f16_sdwa` instructions replaced with `ck_tile::bit_cast` and shift operations for unpacking fp16 values.

Compatibility
The portable implementations are selected with compile-time guards (`#if defined(__gfx11__) || defined(__gfx12__)`) to ensure optimal performance on both architectures.

Test Plan
Run the RMSNorm test suites to validate the changes:
Tests cover:
Test Result
Test result for `python op_tests/test_rmsnorm2dFusedAddQuant.py --mode 7 -q fp8`:

Submission Checklist