Optimize reduction stage of dot product of q4_K/q5_K to q8_K on AVX2 #22181
Open
nariox wants to merge 1 commit into ggml-org:master
Conversation
Overview
This PR optimizes the reduction stage of the dot product kernels for AVX2, specifically targeting the logic used in the Q4_K/Q8_K and Q5_K/Q8_K interactions (i.e., ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q5_K_q8_K).

The primary change is replacing the SSSE3 horizontal add instruction _mm_hadd_epi16 (VPHADDW) with the AVX2 "equivalent" approach using _mm256_madd_epi16 and specialized shuffles. When running a large model on my CPU (qwen3.6-35b-a3b with q4_K weights and a q4_0 KV cache), I noticed that ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q5_K_q8_K were taking a substantial amount of time on my processor (7.45% and 4.66% respectively), and the perf top annotations showed that vphaddw accounted for a big chunk of those operations as well (9.01% of the q4_K one). Although other functions consumed more time, this seemed like low-hanging fruit, since AVX2 offers better-performing alternatives for reductions like this one.

Key Technical Changes:

- Replace the _mm_hadd_epi16 (VPHADDW) based horizontal reduction with an _mm256_madd_epi16 plus shuffles sequence in ggml_vec_dot_q4_K_q8_K and ggml_vec_dot_q5_K_q8_K, keeping the result bit-level identical to the original code. A standalone sketch of the idea follows below.
Performance Impact:
Profiling via perf on modern AVX2 hardware showed vphaddw as a hotspot inside these kernels. With this patch, end-to-end inference speed improved slightly on my machine (from 14.5-15.2 t/s to 15.8-16.2 t/s).
Additional information
Requirements
I have read and agree with the contributing guidelines
AI usage disclosure: YES. Although I have generally kept up with the theoretical advances in SIMD instruction sets, I have not done much low-level SIMD programming myself since 2013 (CUDA and SSE2). I asked an LLM to help me generate a snippet for these instructions, but I inspected the functions to make sure they were bit-level equal to the original and test-ran the LLM afterwards.