fix: raise clear error when non-LLM model is used with TextGenerate node (#13291)
…ode (fixes Comfy-Org#13286) When a user connects a CLIP text encoder (e.g. `CLIPTextModel`) to the TextGenerate node instead of a language model (LLM), the previous behavior was an unhelpful `AttributeError`. Now a `RuntimeError` is raised with a clear explanation of what model type is required.
Fixes #13286
Problem
When a user connects a CLIP text encoder model (e.g. `CLIPTextModel`) to the `TextGenerate` node instead of a language model (LLM), the node fails with a cryptic `AttributeError: 'CLIPTextModel' object has no attribute 'generate'`. This happens because standard CLIP models only support text encoding (for embeddings), not text generation.

Solution
Added an explicit check in `CLIP.generate()` to verify that the underlying `cond_stage_model` has a `generate` method before attempting to call it. If the model doesn't support generation, a `RuntimeError` is raised with a clear, actionable error message informing the user that they need to use a language model (LLM) like Qwen, LLaMA, or Gemma.

Testing
Verified that a clear `RuntimeError` is raised when the connected model lacks a `generate` method.
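A minimal sketch of the capability check described above, assuming a simplified `CLIP` wrapper; the real ComfyUI class carries more state, and its `generate` signature may differ:

```python
class CLIP:
    """Simplified stand-in for ComfyUI's CLIP wrapper (hypothetical)."""

    def __init__(self, cond_stage_model):
        self.cond_stage_model = cond_stage_model

    def generate(self, tokens, **kwargs):
        # Fail fast with an actionable message instead of letting the
        # attribute lookup below raise a cryptic AttributeError.
        if not hasattr(self.cond_stage_model, "generate"):
            raise RuntimeError(
                f"{type(self.cond_stage_model).__name__} does not support "
                "text generation. The TextGenerate node requires a language "
                "model (LLM) such as Qwen, LLaMA, or Gemma, not a text "
                "encoder."
            )
        return self.cond_stage_model.generate(tokens, **kwargs)
```

With this guard, connecting an encoder-only model surfaces the intended `RuntimeError` immediately, while LLM-backed models pass through to the underlying `generate` call unchanged.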