Description
When I finished training, I got the LoRA file
/home/work/Flow-Factory/flux2-klein_lora_grpo_20260315_233655/checkpoints/checkpoint-60/adapter_model.safetensors
Then I used this code to load the LoRA:
pipe.load_lora_weights(lora_path, adapter_name="current_lora")
and got the following error:
File "/home/work/tmp/flux2klein_lora.py", line 31, in load_lora
pipe.load_lora_weights(lora_path, adapter_name="current_lora")
File "/home/work/anaconda3/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 5713, in load_lora_weights
state_dict, metadata = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
File "/home/work/anaconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/work/anaconda3/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 5682, in lora_state_dict
state_dict = _convert_non_diffusers_flux2_lora_to_diffusers(state_dict)
File "/home/work/anaconda3/lib/python3.10/site-packages/diffusers/loaders/lora_conversion_utils.py", line 2435, in _convert_non_diffusers_flux2_lora_to_diffusers
raise ValueError(f"original_state_dict should be empty at this point but has {original_state_dict.keys()=}.")
ValueError: original_state_dict should be empty at this point but has original_state_dict.keys()=dict_keys(['single_transformer_blocks.0.attn.to_qkv_mlp_proj.lora_A.weight', 'single_transformer_blocks.0.attn.to_qkv_mlp_proj.lora_B.weight', 'single_transformer_blocks.1.attn.to_qkv_mlp_proj.lora_A.weight', 'single_transformer_blocks.1.attn.to_qkv_mlp_proj.lora_B.weight', 'single_transformer_blocks.10.attn.to_qkv_mlp_proj.lora_A.weight', 'single_transformer_blocks.10.attn.to_qkv_mlp_proj.lora_B.weight', 'single_transformer_blocks.11.attn.to_qkv_mlp_proj.lora_A.weight', 'single_transformer_blocks.11.attn.to_qkv_mlp_proj.lora_B.weight', ...
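The error shows that diffusers' Flux2 LoRA converter finishes with the `single_transformer_blocks.*.attn.to_qkv_mlp_proj` keys still unconsumed, i.e. it does not recognize that naming scheme. As a debugging step (not a confirmed fix), the raw keys can be inspected and remapped before handing the state dict to `load_lora_weights`, which also accepts an in-memory dict. The `transformer.` prefix below is only an assumption about what the loader expects; adjust it after inspecting which keys the converter handles:

```python
# Sketch of a key-remapping workaround. The "transformer." prefix is an
# ASSUMPTION -- the Flux2 converter in diffusers may expect a different
# naming scheme, so verify against the keys it actually consumes.

def remap_lora_keys(state_dict, prefix="transformer."):
    """Return a copy of state_dict with every key given the chosen prefix
    (keys that already carry the prefix are left unchanged)."""
    return {
        (k if k.startswith(prefix) else prefix + k): v
        for k, v in state_dict.items()
    }

# Dummy entries standing in for the real safetensors weights:
raw = {
    "single_transformer_blocks.0.attn.to_qkv_mlp_proj.lora_A.weight": None,
    "single_transformer_blocks.0.attn.to_qkv_mlp_proj.lora_B.weight": None,
}
remapped = remap_lora_keys(raw)
print(sorted(remapped))
```

If the remapped dict matches what the converter expects, it could then be passed directly, e.g. `pipe.load_lora_weights(remapped, adapter_name="current_lora")`; whether that resolves this particular checkpoint depends on how Flow-Factory names its LoRA modules.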
Environment:
diffusers 0.38.0.dev0
transformers 4.57.6