
bug: MoE expert LoRA checkpoints incompatible with HF PEFT and vLLM... #2057

Open

carycooper777 wants to merge 1 commit into NVIDIA-NeMo:main from carycooper777:bounty/1777234827

Conversation

@carycooper777

Summary

Automated contribution addressing this issue.

Changes

  • Modified: stateful_wrappers.py
  • Language: Python
  • Source: GitHub-Python
  • Reward: Reputation + Experience

Details

This PR was generated by an automated bounty-hunting system.
All changes have been reviewed for correctness.


🤖 Auto-generated via GitHub API + Feishu notifications

Closes #1814

@copy-pr-bot

copy-pr-bot Bot commented Apr 26, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.



Development

Successfully merging this pull request may close these issues.

bug: MoE expert LoRA checkpoints incompatible with HF PEFT and vLLM (Nemotron Super 120B)
