Pass block table pointer instead of 64 individual scalar args #321
Conversation
Replace add_scalars_i32(bt_base + bn, N_UNROLL) with a single add_scalar of the device pointer. AICore kernels now read block indices directly from GM via the pointer instead of unpacking 64 scalars from the args array. Reduces QK/PV task args from 68 to 5 per submission.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
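For concreteness, here is a minimal host-side sketch of the change described above. Only add_scalar, add_scalars_i32, bt_base, bn, and N_UNROLL appear in the PR; TaskArgs and the two submit_* helpers are mock names invented for illustration, not the repo's real submission API. The arg counts work out if each submission carries 4 fixed args: 4 + 64 unpacked indices = 68 before, versus 4 + 1 pointer = 5 after.

#include <cstdint>
#include <vector>

// Mock argument buffer standing in for the real task-submission builder.
struct TaskArgs {
    std::vector<uint64_t> words;
    void add_scalar(uint64_t v) { words.push_back(v); }
    // Old path: unpack n indices from the host-visible block table.
    void add_scalars_i32(const int32_t* base, int n) {
        for (int i = 0; i < n; ++i) words.push_back(static_cast<uint64_t>(base[i]));
    }
};

constexpr int N_UNROLL = 64;

// Before: 64 scalar args per QK/PV submission.
void submit_before(TaskArgs& t, const int32_t* bt_base, uint64_t bn) {
    t.add_scalars_i32(bt_base + bn, N_UNROLL);
}

// After: one device pointer; the kernel reads the indices from GM itself.
void submit_after(TaskArgs& t, const int32_t* bt_base, uint64_t bn) {
    t.add_scalar(reinterpret_cast<uint64_t>(bt_base + bn));
}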
Summary of Changes (Gemini Code Assist)

This pull request refactors the paged attention unroll orchestration and associated AICore kernels to optimize how block table information is passed. Instead of sending 64 individual scalar block indices, the system now passes a single device pointer to the block table, significantly reducing the number of task arguments and simplifying the data transfer mechanism. The change aims to improve efficiency while remaining performance-neutral.
Code Review
This pull request refactors the paged attention kernels to pass a pointer to the block table instead of 64 individual scalar arguments. This simplifies the orchestration logic by removing the add_scalars_i32 and copy_scalars_from calls, and reduces the number of arguments passed to the kernels. The AICore kernels are correctly adapted to read the block indices from the pointer. My feedback includes a suggestion to further optimize performance by pre-loading the block indices from global memory into a local array within the kernels, avoiding repeated GMEM accesses in loops.
      __gm__ float* oi_base,
      uint64_t n_blocks,
-     uint64_t* block_indices) {
+     __gm__ int32_t* block_table) {
Similar to the qk_matmul kernel, this function now accesses block_table directly from global memory. This happens at line 75 (block_table[0]) and inside the loop at line 112 (block_table[i + 1]).
To avoid potential performance degradation from repeated GMEM reads, you could pre-load the necessary block indices into a local array at the start of the function. This would consolidate GMEM access into a single burst read.
Example:
// At the top of the file:
constexpr int kMaxBlocks = 64;
// In pv_matmul_n_impl:
int32_t local_block_table[kMaxBlocks];
for (uint64_t i = 0; i < n_blocks; ++i) {
local_block_table[i] = block_table[i];
}
// Then use local_block_table for accesses.
GlobalB vjGlobal_0(val_base + local_block_table[0] * K * N);
// ... and in the loop ...
GlobalB vjGlobal_next(val_base + local_block_table[i + 1] * K * N);

This change would make the kernel more robust to memory latency variations.
      __gm__ float* sij_base,
      uint64_t n_blocks,
-     uint64_t* block_indices) {
+     __gm__ int32_t* block_table) {
While passing the block_table pointer is a great simplification for the orchestration, accessing it directly from global memory inside the loop at line 70 (block_table[i]) might introduce performance overhead due to repeated GMEM reads.
Consider pre-loading the block indices into a stack-allocated local array at the beginning of this function. This would perform a single burst read from GMEM and subsequent accesses within the loop would be much faster.
For example:
// At the top of the file, you could define:
constexpr int kMaxBlocks = 64;
// Then, inside qk_matmul_n_impl:
int32_t local_block_table[kMaxBlocks];
// A simple loop or a memcpy-like instruction could be used to load the data.
for (uint64_t i = 0; i < n_blocks; ++i) {
local_block_table[i] = block_table[i];
}
// And in the main loop:
for (uint64_t i = 0; i < n_blocks; i++) {
GlobalB kjGlobal(key_base + local_block_table[i] * N * K);
// ...
}

Although you've noted performance is neutral, this is a good practice that could yield benefits, especially if n_blocks were larger or if memory access patterns change.
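One caveat worth making explicit: both suggestions assume n_blocks never exceeds kMaxBlocks; otherwise the pre-load loop would overrun the fixed-size stack array. A minimal guard sketch, reusing the names from the suggestions above (the clamp itself is an assumption, not part of the review):

// Hypothetical clamp (not in the original suggestion): bound the
// pre-load so an oversized n_blocks cannot overrun local_block_table.
uint64_t n_local = n_blocks < static_cast<uint64_t>(kMaxBlocks)
                       ? n_blocks
                       : static_cast<uint64_t>(kMaxBlocks);
for (uint64_t i = 0; i < n_local; ++i) {
    local_block_table[i] = block_table[i];
}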
Summary
- Replace add_scalars_i32(bt_base + bn, N_UNROLL) with a single add_scalar of the device pointer in the paged attention unroll orchestration
- Remove the copy_scalars_from dependency between QK and PV params

Testing