# LCM-Lab/LongRM


## 💻 Environment & Installation

To set up the training environment, run:

```shell
cd LongRM
pip install -r requirements.txt
```

Then install FlashAttention: download a wheel matching your Python, PyTorch, and CUDA versions from https://github.com/Dao-AILab/flash-attention/releases, and install it together with `ring_flash_attn`:

```shell
pip install <path_to_flash_attn_whl_file>
pip install ring_flash_attn
```
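Because FlashAttention wheels are sensitive to the exact Python/PyTorch/CUDA combination, it can save a failed training run to verify that the key packages are importable before launching. This is a small hypothetical helper, not part of the repository:

```python
import importlib.util

# Packages the training scripts depend on (list assumed from the install steps)
REQUIRED = ["torch", "flash_attn", "ring_flash_attn"]

def check_deps(names):
    """Map each package name to whether it is importable in this environment."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

if __name__ == "__main__":
    for name, ok in check_deps(REQUIRED).items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```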

## 🔥 Train

### For Generative Model

To run the first training stage (SFT):

```shell
bash scripts/sft.sh
```

To run the second training stage:

```shell
bash scripts/simpo_grm.sh
```

### For Discriminative Model

Run the second training stage directly:

```shell
bash scripts/simpo_disrm.sh
```
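The `simpo_*` scripts above refer to SimPO-style preference optimization. As a rough illustration of that objective (a sketch, not this repository's implementation; the `beta` and `gamma` values are arbitrary), the loss penalizes a small length-normalized log-likelihood margin between the chosen and rejected responses:

```python
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=0.5):
    """SimPO-style loss on a single preference pair.

    logp_* are summed token log-probabilities of each response under the
    policy; len_* are response lengths in tokens. beta scales the
    length-normalized margin and gamma is a target reward margin
    (values here are illustrative, not the repo's settings).
    """
    margin = (beta * (logp_chosen / len_chosen)
              - beta * (logp_rejected / len_rejected)
              - gamma)
    # -log sigmoid(margin): small when the chosen response clearly wins
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A pair with a large likelihood gap yields a smaller loss than a marginal one
print(simpo_loss(-10.0, 10, -40.0, 10))
print(simpo_loss(-30.0, 10, -40.0, 10))
```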

## 📊 Evaluate

- We provide the benchmark dataset (LongReward-Bench) and trained models on ModelScope.

### For Generative Model

```shell
modelscope download LCM_group/LongReward_Qwen3-8B --repo-type model --local_dir ./LongReward_Qwen3-8B

python evaluate/eval.py --model-path ./LongReward_Qwen3-8B --data-path ./LongReward-Bench
```

### For Discriminative Model

```shell
modelscope download LCM_group/LongReward_Skywork-Reward-V2-Llama-3.1-8B --repo-type model --local_dir ./LongReward_Skywork-Reward-V2-Llama-3.1-8B

python evaluate/eval.py --model-path ./LongReward_Skywork-Reward-V2-Llama-3.1-8B --data-path ./LongReward-Bench --is-disrm
```
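Reward-model benchmarks like LongReward-Bench are commonly scored by pairwise accuracy: the fraction of preference pairs where the model scores the chosen response above the rejected one. A minimal sketch of that metric (the data layout is an assumption, not the repo's exact evaluation code):

```python
def pairwise_accuracy(pairs):
    """pairs: iterable of (chosen_score, rejected_score) tuples.

    Returns the fraction of pairs where the reward model rates the
    chosen response strictly higher than the rejected one.
    """
    pairs = list(pairs)
    correct = sum(1 for chosen, rejected in pairs if chosen > rejected)
    return correct / len(pairs)

# Hypothetical scores: 2 of 3 pairs ranked correctly
print(pairwise_accuracy([(1.2, 0.3), (0.1, 0.4), (2.0, 1.9)]))
```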

## About

Revealing and unlocking the context boundary of reward models.
