fix: use run_in_executor in LLMAgent.astep() to avoid blocking the event loop #141

khansalman12 wants to merge 1 commit into mesa:main
**Codecov Report** ✅ All modified and coverable lines are covered by tests.

Coverage diff (main → #141):

|          |   main |   #141 |    +/- |
|----------|-------:|-------:|-------:|
| Coverage | 89.26% | 89.89% | +0.62% |
| Files    |     19 |     19 |        |
| Lines    |   1472 |   1474 |     +2 |
| Hits     |   1314 |   1325 |    +11 |
| Misses   |    158 |    149 |     -9 |
### Summary

`astep()` in `LLMAgent` was calling `self.step()` synchronously when a subclass only defines `step()`, blocking the event loop during parallel stepping.

### Bug / Issue
Fixes #140

When `astep()` falls back to `self.step()` directly, any blocking LLM API call inside `step()` freezes the entire asyncio event loop. Other agents cannot run until the current one completes, making parallel execution no faster than sequential.
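This failure mode is easy to reproduce outside the library. The sketch below uses `time.sleep` as a stand-in for a blocking LLM API call and a hypothetical `blocking_astep` coroutine (not from the PR): because the blocking call runs directly inside the coroutine, `asyncio.gather` degenerates to sequential execution.

```python
import asyncio
import time


async def blocking_astep():
    # The fallback path described above: a synchronous call made directly
    # inside a coroutine holds the event loop until it returns.
    time.sleep(0.05)  # stand-in for a blocking LLM API request


async def run_agents(n):
    start = time.perf_counter()
    await asyncio.gather(*(blocking_astep() for _ in range(n)))
    return time.perf_counter() - start


# Three "parallel" agents take roughly 3 * 0.05s: no real concurrency.
elapsed = asyncio.run(run_agents(3))
```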
### Implementation

In `llm_agent.py`, replaced the direct blocking call:
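The diff itself is not reproduced here; the following is a minimal sketch of the change as described, assuming only the `astep()`/`step()` method names from the summary. Passing `None` as the executor argument runs the call on the loop's default `ThreadPoolExecutor`.

```python
import asyncio


class LLMAgent:
    def step(self):
        """Synchronous step; subclasses may make blocking LLM calls here."""
        raise NotImplementedError

    async def astep(self):
        # Before: the fallback invoked self.step() directly, freezing the
        # event loop for the duration of any blocking call inside it.
        # After: run the blocking step in the default thread pool so other
        # agents' coroutines continue to make progress.
        loop = asyncio.get_running_loop()
        await loop.run_in_executor(None, self.step)
```

With this change, gathering many agents' `astep()` coroutines overlaps their blocking `step()` calls in worker threads, which is what makes parallel stepping actually faster than sequential.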