
Update: narrow auto_incore scope and switch to Ascend950 in Qwen3 decode #18

Merged
zhangqi-chen merged 1 commit into hw-native-sys:main from lyfne123:main
Mar 17, 2026
Conversation

@lyfne123 (Contributor) commented Mar 17, 2026

  • Move `pl.auto_incore()` inside the batch loop so it wraps only the KV-head loop (RoPE + cache update) and the attention computation
  • Simplify the batch loop to `pl.parallel(BATCH_CFG)`
  • Switch the backend from `Ascend910B_PTO` to `Ascend950`
  • Remove the `save_kernels` flag from `compile_and_run`
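
As a rough sketch of the restructure (the `pl` calls and config names are taken from this PR, but `KV_HEADS_CFG` and the elided loop bodies are placeholders, so this is pseudocode rather than runnable code):

```python
# Before: auto_incore wrapped the whole chunked batch loop.
with pl.auto_incore():
    for b in pl.parallel(0, BATCH_CFG, 1, chunk=4):
        for h in range(KV_HEADS_CFG):  # placeholder name for the KV-head count
            ...  # RoPE + cache update
        ...  # attention computation

# After: plain parallel batch loop; auto_incore wraps only the inner work.
for b in pl.parallel(BATCH_CFG):
    with pl.auto_incore():
        for h in range(KV_HEADS_CFG):
            ...  # RoPE + cache update
        ...  # attention computation
```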

Summary by CodeRabbit

  • Refactor

    • Restructured parallel batch processing implementation in example code to improve execution efficiency.
    • Simplified cache assembly and data extraction configurations for better maintainability.
  • Chores

    • Updated hardware backend support from Ascend910B_PTO to Ascend950 for enhanced compatibility.

@gemini-code-assist commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on optimizing the Qwen3 decode layer by refining its execution strategy and adapting it for a new hardware platform. The changes aim to improve performance and compatibility through more precise memory management, streamlined batch processing, and an updated target backend for compilation.

Highlights

  • Auto-incore scope: The pl.auto_incore() context manager's scope was narrowed, moving it inside the batch loop to wrap only the KV-head loop (RoPE + cache update) and attention computation.
  • Batch loop simplification: The batch loop was simplified from pl.parallel(0, BATCH_CFG, 1, chunk=4) to pl.parallel(BATCH_CFG).
  • Backend switch: The compilation backend was switched from Ascend910B_PTO to Ascend950.
  • Save kernels flag removal: The save_kernels flag was removed from the compile_and_run function call.
Changelog
  • examples/qwen3_32b_decode.py
    • Refactored the pl.auto_incore() context manager to apply to a more specific loop.
    • Simplified the pl.parallel batch loop syntax.
    • Streamlined pl.assemble and pl.slice function calls for improved readability.
    • Updated the compile_and_run function to target the Ascend950 backend.
    • Removed the save_kernels argument from the compile_and_run function.
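
A hedged sketch of what the `compile_and_run` change might look like (only `backend_type`, `save_kernels`, and the two backend names come from this PR; `graph` and the exact call shape are assumptions, hence comments only):

```python
# Before (assumed call shape; 'graph' is a placeholder argument):
# compile_and_run(graph, backend_type=Ascend910B_PTO, save_kernels=True)

# After (save_kernels dropped, backend switched):
# compile_and_run(graph, backend_type=Ascend950)
```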
Activity
  • No human activity has been recorded for this pull request yet.


coderabbitai bot commented Mar 17, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4b31b66e-5602-4afd-9aa4-2a2f436c3036

📥 Commits

Reviewing files that changed from the base of the PR and between ab85f17 and 278c94b.

📒 Files selected for processing (1)
  • examples/qwen3_32b_decode.py

📝 Walkthrough

Walkthrough

A single example file for Qwen3 32B decoding was refactored to restructure control flow: the pl.auto_incore() scope was moved from wrapping the batch loop to wrapping the inner KV-head loop, and the batch-parallel loop structure was flattened. Code formatting was simplified and the backend type was updated from Ascend910B_PTO to Ascend950.

Changes

Cohort / File(s) Summary
Qwen3 32B Decode Example
examples/qwen3_32b_decode.py
Refactored pl.auto_incore() scope placement from wrapping the batch-parallel loop to wrapping only the inner KV-head loop and cache updates. Simplified code formatting for k_cache assembly and k_tile/v_tile extraction to single-line forms with inline parameters. Updated backend configuration: removed save_kernels=True and changed backend_type from Ascend910B_PTO to Ascend950.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A rabbit hops through scopes so bright,
Reshuffling auto_incore's might,
The KV-heads now dance in line,
While Ascend950 starts to shine!
Code flows cleaner, trim and spry, 🎀

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title accurately summarizes the main changes: narrowing the auto_incore scope and switching to the Ascend950 backend in the Qwen3 decode example. |



@gemini-code-assist bot left a comment

Code Review

This pull request refactors the qwen3_decode_layer function to narrow the auto_incore scope, simplifies a parallel loop, and updates the backend configuration. The changes align with the PR description and appear to be a good optimization. I have one comment regarding the removal of a chunk parameter which might have performance implications.

sin_lo = pl.slice(sin_row, [1, HEAD_DIM_CFG // 2], [0, 0])
sin_hi = pl.slice(sin_row, [1, HEAD_DIM_CFG // 2], [0, HEAD_DIM_CFG // 2])

for b in pl.parallel(BATCH_CFG):

Severity: medium

The chunk=4 parameter was removed from this pl.parallel call. While pl.parallel(BATCH_CFG) is a valid simplification, this is now the only pl.parallel call in the function that doesn't specify a chunk size. Given that chunking is often important for performance tuning, was its removal here intentional and verified? If the chunking is still desired, you might need to revert to the explicit form: pl.parallel(0, BATCH_CFG, 1, chunk=4).
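
For reference, the two loop forms the reviewer contrasts would look roughly like this (loop bodies elided, so this is a pseudocode sketch built only from the calls quoted above; the reading of `chunk` as grouping iterations for scheduling is an assumption):

```python
# Form now in the PR: iterate b over 0..BATCH_CFG-1 with no explicit chunking.
for b in pl.parallel(BATCH_CFG):
    ...

# Previous explicit form: start=0, stop=BATCH_CFG, step=1, chunk=4.
for b in pl.parallel(0, BATCH_CFG, 1, chunk=4):
    ...
```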

@zhangqi-chen zhangqi-chen merged commit 9bdec9e into hw-native-sys:main Mar 17, 2026
4 checks passed