Hello! I noticed that your repository includes performance comparisons with the Triton implementation of Native Sparse Attention (NSA) from https://github.com/XunhaoLai/native-sparse-attention-triton.
In my experiments with the NSA reference implementation, I found that simply reordering the launch grid, for example from `(batch_size, num_k_heads, triton.cdiv(max_seqlen_q, num_q_loop))` to `(triton.cdiv(max_seqlen_q, num_q_loop), num_k_heads, batch_size)`, yields roughly a 1.3x speedup (see the sketch below).
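For concreteness, here is a minimal sketch of the change. The grid dimensions are the ones from the example above; the shape values are illustrative placeholders, and the kernel's `tl.program_id` reads would of course need to be permuted to match:

```python
import triton

# Illustrative shapes; the names mirror the NSA reference implementation,
# but the values here are placeholders.
batch_size = 4
num_k_heads = 4
max_seqlen_q = 8192
num_q_loop = 1

# Original launch order: batch varies slowest, query blocks fastest.
grid_original = (batch_size, num_k_heads, triton.cdiv(max_seqlen_q, num_q_loop))

# Reordered launch: the large query-block dimension comes first, so
# consecutive program IDs process consecutive query blocks of the same
# (batch, head) pair before moving on.
grid_reordered = (triton.cdiv(max_seqlen_q, num_q_loop), num_k_heads, batch_size)

# Inside the kernel, the index reads are permuted accordingly, e.g.:
#   pid_q  = tl.program_id(0)
#   pid_kh = tl.program_id(1)
#   pid_b  = tl.program_id(2)
```

I suspect the gain comes from better scheduling and cache reuse across adjacent program IDs, though I haven't profiled the kernel in enough detail to confirm the exact mechanism.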
I'm curious whether you considered the impact of grid order on performance during benchmarking. Have implementation details like this been optimized in the comparison results currently published in the repository?
Thank you for your valuable open-source contribution!