Fix MoE dispatch for Quark W4A6 models (MXFP4 weights with QuantType.No) #2318

Open

vecheruk-amd wants to merge 1 commit into ROCm:main from vecheruk-amd:fix/quark-w4a6-mxfp4-compat

Conversation

@vecheruk-amd

Motivation

W4A6 models store MoE weights in MXFP4 (the fp4x2 dtype) but use MXFP6 for activation quantization. Because the Quark quantization scheme handles activation quantization separately, it passes QuantType.No to the AITER CK-based fused-MoE kernel. The CK kernel, however, only supports A4W4 (both activations and weights in fp4); there is no code path for bf16 activations with fp4x2 weights.

Technical Details

After the existing quant_remap lookup in ck_moe_2stages (and the equivalent in ck_moe_2stages_dp), detect the unsupported combination of QuantType.No with fp4x2 weights and remap to QuantType.per_1x32. This ensures activations are dynamically quantized to fp4x2 at runtime, matching what the CK kernel expects.
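The remap described above can be sketched as follows. This is a minimal illustration, not the actual AITER code: the enum values, the fp4x2 dtype stand-in, and the helper name `remap_quant_type` are all hypothetical.

```python
# Hypothetical sketch of the dispatch fix described in this PR.
# QuantType values and the fp4x2 dtype marker are illustrative
# stand-ins, not AITER's actual identifiers.
from enum import Enum


class QuantType(Enum):
    No = 0          # no activation quantization requested
    per_1x32 = 1    # dynamic MX block quantization over 1x32 blocks

FP4X2 = "fp4x2"     # stand-in for the packed-fp4 weight dtype


def remap_quant_type(quant_type: QuantType, weight_dtype: str) -> QuantType:
    """Applied after the existing quant_remap lookup in ck_moe_2stages.

    The CK fused-MoE kernel has no bf16-activation / fp4x2-weight path
    (it only supports A4W4), so when weights are fp4x2 but no activation
    quantization was requested, force dynamic per-1x32 quantization so
    activations become fp4x2 at runtime as the kernel expects.
    """
    if quant_type == QuantType.No and weight_dtype == FP4X2:
        return QuantType.per_1x32
    return quant_type
```

The key property is that the remap is a no-op for every supported combination and only rewrites the one case the kernel cannot handle.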

Test Plan

Test Result

Verified with ziliangpeng/DeepSeek-V3-Quark-MXFP4-v4-w4a6 on MI355X (gfx950) / ROCm 7.2 / vLLM 0.17.1. MoE layers execute successfully in both eager and compile modes. Results of the experiment can be found here: https://github.com/AMD-AGI/di-recipes/blob/main/tools/prompt_replay/baselines/DeepSeek-V3-0324/MI355/serve_dsr1_0528_mxfp4-v4-w4a6_20260316_smci355-ccs-aus-m15-13.cs-aus.dcgpu.log

@vecheruk-amd vecheruk-amd requested a review from a team March 18, 2026 05:14
