feat: implement ReasoningAgent and DualBrainAgent with advanced reasoning capabilities #977
base: main
Conversation
…ning capabilities

- Add ReasoningConfig class for configurable reasoning parameters
- Add ActionState enum for flow control
- Implement ReasoningAgent inheriting from Agent with:
  - Step-by-step reasoning with confidence scoring
  - Reasoning trace tracking
  - Configurable min/max steps and reasoning styles
- Implement DualBrainAgent inheriting from Agent with:
  - Separate LLMs for conversation and reasoning
  - Dual-brain coordination for optimal problem-solving
  - Brain status monitoring and model switching
- Add confidence scoring integration
- Maintain backward compatibility with existing Agent class
- Export new classes in agent module

Addresses issue #968: Create ReasoningAgent inherited from Agent class

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
📝 Walkthrough

This PR introduces reasoning capabilities to the PraisonAI agents framework by adding ReasoningAgent and DualBrainAgent classes, along with supporting infrastructure (ActionState, ReasoningConfig, ReasoningStep, ReasoningTrace, ReasoningFlow) in a new reasoning module. Both agents and the reasoning components are exposed through the package public API.
Sequence Diagram(s)

sequenceDiagram
participant User
participant DBA as DualBrainAgent
participant ReasoningLLM
participant MainLLM
User->>DBA: chat(message)
DBA->>DBA: Switch to reasoning_llm
DBA->>ReasoningLLM: Send problem for analytical breakdown
ReasoningLLM-->>DBA: Return decomposition, analysis, insights, confidence
DBA->>DBA: Store reasoning in ReasoningTrace
DBA->>DBA: Switch to main_llm
DBA->>MainLLM: Generate response with analytical insights
MainLLM-->>DBA: Return final response
DBA->>DBA: Update trace with response timing
DBA-->>User: Return final response + reasoning metadata
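For orientation, here is a minimal usage sketch of the two new agents, inferred from the constructor signatures and chat() flow shown in this PR's diffs; the top-level import path, the ReasoningConfig keyword arguments, and the example model strings are assumptions, not a verified API.

```python
# Minimal usage sketch (assumed API, inferred from the __init__ signatures in this PR)
from praisonaiagents import ReasoningAgent, DualBrainAgent, ReasoningConfig

# ReasoningAgent: one model, forced step-by-step reasoning with a confidence floor
analyst = ReasoningAgent(
    name="Analyst",
    role="Research analyst",
    reasoning_config=ReasoningConfig(min_steps=3, max_steps=8, style="analytical"),
    min_confidence=0.7,
)
answer = analyst.chat("Why might our cache hit rate drop after a deploy?")

# DualBrainAgent: separate models for analytical reasoning and the final reply
assistant = DualBrainAgent(
    name="Planner",
    llm="gpt-4o",                # conversational "main brain"
    reasoning_llm="o1-preview",  # analytical "reasoning brain"
)
reply = assistant.chat("Plan a migration from REST polling to webhooks.")
```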
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed (3 of 3)
Summary of Changes
Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the agent framework by integrating advanced reasoning capabilities. It introduces two new specialized agent types, ReasoningAgent and DualBrainAgent, designed to tackle complex problems through structured thought processes and multi-model coordination. A new reasoning module underpins these agents, providing a robust framework for configurable reasoning, step-by-step analysis, confidence scoring, and flow control, ensuring more intelligent and traceable agent behavior.
Highlights
- New Agent Implementations: I've added two new agent classes, ReasoningAgent and DualBrainAgent, to introduce advanced reasoning capabilities into the system. These agents are designed to handle complex problem-solving more effectively.
- Enhanced Reasoning Framework: A new reasoning module has been introduced, providing core components such as ReasoningConfig for configurable parameters, ActionState for flow control, ReasoningStep for individual steps, and ReasoningTrace for tracking the entire reasoning process, including confidence scoring.
- Dual-Brain Architecture: The DualBrainAgent specifically implements a 'dual-brain' approach, utilizing separate Large Language Models (LLMs) for conversational responses and analytical reasoning. This allows for more specialized and coordinated problem-solving by leveraging the strengths of different models.
- Step-by-Step Reasoning: The ReasoningAgent enables agents to perform step-by-step reasoning, track their internal thought processes, and assess confidence levels for each step. This enhances transparency and provides greater control over how complex tasks are approached and solved.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
Code Review
This pull request introduces the ReasoningAgent and DualBrainAgent classes, adding advanced reasoning capabilities. The implementation of the data models in reasoning.py is well-structured. The main concerns are that the ReasoningAgent doesn't fully implement the step-by-step reasoning parsing, and the DualBrainAgent has a potential thread-safety issue. Addressing these, along with adding unit tests, will improve the robustness of this feature.
def chat(
    self,
    message: str,
    **kwargs
) -> str:
    """
    Enhanced chat method with reasoning capabilities.

    Args:
        message: Input message
        **kwargs: Additional chat parameters

    Returns:
        Response with reasoning trace
    """
    # Start reasoning trace
    self.start_reasoning_trace(message)

    # Enhance message with reasoning instructions
    enhanced_message = f"""
{message}

Please solve this step-by-step using the following reasoning process:
1. Break down the problem into logical steps
2. For each step, show your thought process
3. State your confidence level (0.0-1.0) for each step
4. Ensure minimum {self.reasoning_config.min_steps} reasoning steps
5. Use {self.reasoning_config.style} reasoning style
6. Provide a clear final answer

Format your response to show each reasoning step clearly.
"""

    # Call parent chat method
    response = super().chat(enhanced_message, **kwargs)

    # Complete reasoning trace
    self.complete_reasoning_trace(response)

    return response
The chat method instructs the LLM to perform step-by-step reasoning but does not parse the response to create ReasoningStep objects, so the reasoning_trace.steps list remains empty. Define a structured format (e.g., JSON) for the LLM to return reasoning steps, update the prompt to request the output in that format, and parse the LLM's response to populate the trace.
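A sketch of what the suggested parsing could look like is below; the JSON shape, the regex extraction, and the _parse_reasoning_steps helper name are illustrative assumptions, not code from this PR (only the ReasoningStep fields mirror those used elsewhere in the diff).

```python
import json
import re
from typing import List

# Assumes ReasoningStep is importable from the new reasoning module added in this PR.
from praisonaiagents.reasoning import ReasoningStep

def _parse_reasoning_steps(response: str) -> List[ReasoningStep]:
    """Hypothetical helper: extract a JSON array of reasoning steps from an LLM reply."""
    # Assumes the prompt asked for a JSON array of objects like
    # {"title": ..., "thought": ..., "action": ..., "confidence": 0.8}
    match = re.search(r"\[\s*\{.*\}\s*\]", response, re.DOTALL)
    if not match:
        return []  # Degrade gracefully instead of leaving the trace in a broken state
    try:
        raw_steps = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    steps = []
    for i, item in enumerate(raw_steps, start=1):
        steps.append(ReasoningStep(
            step_number=i,
            title=item.get("title", f"Step {i}"),
            thought=item.get("thought", ""),
            action=item.get("action", ""),
            confidence=float(item.get("confidence", 0.0)),
        ))
    return steps
```

chat() could then extend self.reasoning_trace.steps with the parsed steps before calling complete_reasoning_trace(response).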
try:
    # Switch to reasoning LLM
    self.llm = self.reasoning_llm

    # Use parent chat method with reasoning LLM
    reasoning_result = super().chat(reasoning_prompt)

    return reasoning_result

finally:
    # Restore original LLM
    self.llm = original_llm
The _reason_with_analytical_brain method modifies the instance attribute self.llm, which is not thread-safe. If chat() is called concurrently, this could lead to race conditions. Consider passing the LLM configuration directly to the chat completion method or creating a temporary, isolated client for the reasoning call.
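If mutating self.llm is kept for now, one interim mitigation, sketched here as an assumption (it serializes access rather than truly isolating the reasoning client, and other chat() entry points would still bypass it unless they take the same lock), is to guard the swap-call-restore window with a lock.

```python
import threading

# Illustrative subset of the DualBrainAgent from this PR, with a hypothetical lock added.
class DualBrainAgent(Agent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._llm_swap_lock = threading.Lock()  # hypothetical addition

    def _reason_with_analytical_brain(self, reasoning_prompt: str) -> str:
        # Hold the lock across swap, call, and restore so a concurrent chat()
        # cannot observe the temporarily swapped model.
        with self._llm_swap_lock:
            original_llm = self.llm
            try:
                self.llm = self.reasoning_llm
                return super().chat(reasoning_prompt)
            finally:
                self.llm = original_llm
```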
self.reasoning_trace.overall_confidence = sum(
    step.confidence for step in self.reasoning_trace.steps
) / len(self.reasoning_trace.steps)
The calculation for overall_confidence can result in a ZeroDivisionError if self.reasoning_trace.steps is empty. Add a check to prevent this.
if self.reasoning_trace.steps:
    self.reasoning_trace.overall_confidence = sum(
        step.confidence for step in self.reasoning_trace.steps
    ) / len(self.reasoning_trace.steps)
else:
    self.reasoning_trace.overall_confidence = 0.0

    main_llm = llm_config.get('model', llm)
    # Apply LLM config parameters as needed
else:
    main_llm = llm or "gpt-4o"
if isinstance(reasoning_config, dict) and 'model' in reasoning_config:
    self.reasoning_llm_config.update(reasoning_config)
Instead of directly updating self.reasoning_llm_config with the entire reasoning_config dictionary, selectively update only the keys relevant to the LLM configuration (model, temperature, system_prompt). This prevents potential issues if reasoning_config contains unrelated parameters.
if isinstance(reasoning_config, dict):
    llm_config_keys = {"model", "temperature", "system_prompt"}
    llm_specific_config = {k: v for k, v in reasoning_config.items() if k in llm_config_keys}
    if llm_specific_config:
        self.reasoning_llm_config.update(llm_specific_config)

    confidence=0.9  # High confidence in reasoning LLM analysis
)
self.reasoning_trace.steps.append(reasoning_step)
self.last_reasoning_steps.append(reasoning_step)

# Step 2: Use main LLM for response generation
final_response = self._generate_response_with_main_brain(message, reasoning_analysis)

# Add response generation step
response_step = ReasoningStep(
    step_number=2,
    title="Main Brain Response Generation",
    thought=f"Using {self.main_llm} for conversational response",
    action=final_response,
    confidence=0.85
Bug: Incorrect Attribute Access in `reason_step` Function
The reason_step function incorrectly attempts to call .get('steps', []) on agent.reasoning_trace. When agent.reasoning_trace exists, it is a ReasoningTrace Pydantic BaseModel object, not a dictionary. Pydantic models do not have a .get() method, leading to an AttributeError. The steps attribute should be accessed directly from the ReasoningTrace object.
src/praisonai-agents/praisonaiagents/reasoning.py, lines 117 to 119 (commit 2e5d7cf):

step = ReasoningStep(
    step_number=len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1,
    title=f"Step {len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1}",
@claude review this pull request and do a detailed analysis, and fix this if the existing code doesn't already implement the solution, making sure it keeps backward compatibility and removes no existing features. After making those changes, review the applied changes again. Use @web to search if you don't know something or need the latest documentation or version. Run the code if you think you need to test it. Start with minimal code changes if any changes are required.
Claude encountered an error — View job. I'll analyze this and get back to you.
Force-pushed 2e5d7cf to c7fcc1a (Compare)
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@src/praisonai-agents/praisonaiagents/agent/dual_brain_agent.py`:
- Around line 179-214: The _generate_response_with_main_brain method currently
ignores caller options because it doesn't accept or forward **kwargs; update its
signature to def _generate_response_with_main_brain(self, original_query: str,
reasoning_analysis: str, **kwargs) and forward those into the LLM call by
returning super().chat(response_prompt, **kwargs); also find the other analogous
method (the one referenced at lines 215-249) and make the same change so any
temperature/tools/other options passed into chat are threaded through from
callers to the main LLM.
- Around line 35-76: The parameter "reasoning" in DualBrainAgent.__init__ is
unused and causing ARG002; explicitly consume/forward it by setting
kwargs['reasoning'] = reasoning (or otherwise use it) before modifying other
kwargs so the signature is honored and the linter error is resolved; update the
code where kwargs are set (currently setting kwargs['reasoning_steps'],
kwargs['self_reflect'], kwargs['reflect_llm']) to also assign
kwargs['reasoning'] = reasoning.
- Around line 345-372: The configs can keep an old "model" value when a new LLM
is passed with a config that lacks "model": in switch_reasoning_llm and
switch_main_llm, ensure you set the "model" key on the incoming config (e.g.,
config["model"] = new_reasoning_llm or new_main_llm) before merging/updating the
stored config dicts (reasoning_llm_config, main_llm_config); if no config is
provided, continue to set the stored config's "model" to the new model as
currently done. Also keep the existing behaviors that update reflect_llm and llm
when switching reasoning/main models.
In `@src/praisonai-agents/praisonaiagents/agent/reasoning_agent.py`:
- Around line 35-66: The __init__ currently accepts a reasoning parameter but
never uses it (causing Ruff ARG002); to keep API compatibility, explicitly
consume/forward it by assigning it into kwargs (e.g., set kwargs['reasoning'] =
reasoning) alongside the existing kwargs modifications (see __init__ and the
existing kwargs['reasoning_steps'] and kwargs['self_reflect'] assignments), so
the parameter is used and the lint error is resolved.
- Around line 92-112: _enhance_instructions_for_reasoning currently always
injects "Show your thinking process explicitly" regardless of the agent config;
update it to read the show_internal_thoughts flag (e.g.,
self.reasoning_config.show_internal_thoughts) and conditionally include either
the explicit chain-of-thought guidance or a privacy-safe alternative (e.g., "Do
not reveal internal chain-of-thought; provide concise step summaries and final
answer") in the reasoning_guidance string; keep all other fields (min_steps,
max_steps, style, min_confidence) and the existing logic that appends to
self.instructions intact so the prompt respects the flag.
In `@src/praisonai-agents/praisonaiagents/reasoning.py`:
- Around line 96-129: The step-counting uses getattr(agent, 'reasoning_trace',
{}).get('steps', []) which breaks when reasoning_trace is None or a
ReasoningTrace object; in reason_step, first retrieve trace = getattr(agent,
'reasoning_trace', None) and normalize it to a sequence: if trace is None set
steps_list = [], elif isinstance(trace, dict) use trace.get('steps', []), elif
hasattr(trace, 'steps') use getattr(trace, 'steps') (or list(trace.steps)), elif
isinstance(trace, list) use it directly; then compute step_number =
len(steps_list) + 1 and use that for ReasoningStep.step_number and the title to
avoid calling .get on non-dict types.
🧹 Nitpick comments (1)
src/praisonai-agents/praisonaiagents/agent/__init__.py (1)
Lines 9-20: Optional: sort __all__ to satisfy Ruff RUF022.

This is purely stylistic but avoids the lint warning.

🧹 Suggested tweak

 __all__ = [
-    'Agent',
-    'ImageAgent',
-    'Handoff',
-    'handoff',
-    'handoff_filters',
-    'RECOMMENDED_PROMPT_PREFIX',
-    'prompt_with_handoff_instructions',
-    'RouterAgent',
-    'ReasoningAgent',
-    'DualBrainAgent'
+    'Agent',
+    'DualBrainAgent',
+    'Handoff',
+    'ImageAgent',
+    'RECOMMENDED_PROMPT_PREFIX',
+    'ReasoningAgent',
+    'RouterAgent',
+    'handoff',
+    'handoff_filters',
+    'prompt_with_handoff_instructions',
 ]
def __init__(
    self,
    name: Optional[str] = None,
    role: Optional[str] = None,
    goal: Optional[str] = None,
    backstory: Optional[str] = None,
    instructions: Optional[str] = None,
    llm: Optional[Union[str, Any]] = None,
    reasoning_llm: Optional[Union[str, Any]] = None,
    reasoning: bool = True,
    reasoning_config: Optional[Union[ReasoningConfig, Dict[str, Any]]] = None,
    llm_config: Optional[Dict[str, Any]] = None,
    **kwargs
):
    """
    Initialize a DualBrainAgent.

    Args:
        name: Agent name
        role: Agent role
        goal: Agent goal
        backstory: Agent backstory
        instructions: Direct instructions
        llm: Main conversational model (e.g., "gpt-4-turbo")
        reasoning_llm: Analytical reasoning model (e.g., "o1-preview")
        reasoning: Enable reasoning capabilities
        reasoning_config: Reasoning configuration or dict
        llm_config: Configuration for main LLM
        **kwargs: Additional Agent parameters
    """
    # Set up main LLM
    if llm_config and isinstance(llm_config, dict):
        main_llm = llm_config.get('model', llm)
        # Apply LLM config parameters as needed
    else:
        main_llm = llm or "gpt-4o"

    # Force reasoning to be enabled and set reflect_llm
    kwargs['reasoning_steps'] = True
    kwargs['self_reflect'] = kwargs.get('self_reflect', True)
    kwargs['reflect_llm'] = reasoning_llm or "o1-preview"
Unused reasoning parameter triggers Ruff ARG002.
If it’s for signature compatibility, explicitly consume it (or forward it) to avoid lint errors.
🔧 Suggested fix
- # Set up main LLM
+ # Set up main LLM (keep `reasoning` for signature compatibility)
+ _ = reasoning
  if llm_config and isinstance(llm_config, dict):

🧰 Tools
🪛 Ruff (0.14.14)
[warning] 44-44: Unused method argument: reasoning
(ARG002)
🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/agent/dual_brain_agent.py` around lines
35 - 76, The parameter "reasoning" in DualBrainAgent.__init__ is unused and
causing ARG002; explicitly consume/forward it by setting kwargs['reasoning'] =
reasoning (or otherwise use it) before modifying other kwargs so the signature
is honored and the linter error is resolved; update the code where kwargs are
set (currently setting kwargs['reasoning_steps'], kwargs['self_reflect'],
kwargs['reflect_llm']) to also assign kwargs['reasoning'] = reasoning.
def _generate_response_with_main_brain(
    self,
    original_query: str,
    reasoning_analysis: str
) -> str:
    """
    Use the main LLM to generate the final conversational response.

    Args:
        original_query: Original user query
        reasoning_analysis: Analysis from reasoning LLM

    Returns:
        Final conversational response
    """
    response_prompt = f"""
Based on the analytical reasoning provided, generate a clear and helpful response to the user's query.

Original Query: {original_query}

Analytical Reasoning:
{reasoning_analysis}

Please provide a comprehensive response that:
1. Addresses the user's query directly
2. Incorporates insights from the analytical reasoning
3. Is clear and conversational
4. Shows confidence in the conclusions
5. Acknowledges any reasoning steps taken

Format your response naturally while incorporating the analytical insights.
"""

    # Use main LLM for response generation
    return super().chat(response_prompt)
chat() drops caller kwargs when generating the final response.
**kwargs is unused, so options like temperature/tools never reach the main LLM. Thread them through to _generate_response_with_main_brain.
🛠️ Suggested fix
- def _generate_response_with_main_brain(
- self,
- original_query: str,
- reasoning_analysis: str
- ) -> str:
+ def _generate_response_with_main_brain(
+ self,
+ original_query: str,
+ reasoning_analysis: str,
+ **kwargs
+ ) -> str:
@@
- return super().chat(response_prompt)
+ return super().chat(response_prompt, **kwargs)
@@
- final_response = self._generate_response_with_main_brain(message, reasoning_analysis)
+ final_response = self._generate_response_with_main_brain(
+ message, reasoning_analysis, **kwargs
+     )

Also applies to: 215-249
🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/agent/dual_brain_agent.py` around lines
179 - 214, The _generate_response_with_main_brain method currently ignores
caller options because it doesn't accept or forward **kwargs; update its
signature to def _generate_response_with_main_brain(self, original_query: str,
reasoning_analysis: str, **kwargs) and forward those into the LLM call by
returning super().chat(response_prompt, **kwargs); also find the other analogous
method (the one referenced at lines 215-249) and make the same change so any
temperature/tools/other options passed into chat are threaded through from
callers to the main LLM.
def switch_reasoning_llm(self, new_reasoning_llm: str, config: Optional[Dict[str, Any]] = None):
    """
    Switch the reasoning LLM to a different model.

    Args:
        new_reasoning_llm: New reasoning model name
        config: Optional configuration for the new model
    """
    self.reasoning_llm = new_reasoning_llm
    self.reflect_llm = new_reasoning_llm  # Update reflect_llm as well

    if config:
        self.reasoning_llm_config.update(config)
    else:
        self.reasoning_llm_config["model"] = new_reasoning_llm

def switch_main_llm(self, new_main_llm: str, config: Optional[Dict[str, Any]] = None):
    """
    Switch the main LLM to a different model.

    Args:
        new_main_llm: New main model name
        config: Optional configuration for the new model
    """
    self.main_llm = new_main_llm
    self.llm = new_main_llm

    if config:
Keep config model names in sync after LLM switches.
If config is provided without "model", the stored config can retain the old model. Set the model first, then merge.
🛠️ Suggested fix
def switch_reasoning_llm(self, new_reasoning_llm: str, config: Optional[Dict[str, Any]] = None):
@@
self.reasoning_llm = new_reasoning_llm
self.reflect_llm = new_reasoning_llm # Update reflect_llm as well
-
- if config:
- self.reasoning_llm_config.update(config)
- else:
- self.reasoning_llm_config["model"] = new_reasoning_llm
+ self.reasoning_llm_config["model"] = new_reasoning_llm
+ if config:
+ self.reasoning_llm_config.update(config)
@@
def switch_main_llm(self, new_main_llm: str, config: Optional[Dict[str, Any]] = None):
@@
self.main_llm = new_main_llm
self.llm = new_main_llm
-
- if config:
- self.llm_config.update(config)
+ self.llm_config["model"] = new_main_llm
+ if config:
+ self.llm_config.update(config)

🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/agent/dual_brain_agent.py` around lines
345 - 372, The configs can keep an old "model" value when a new LLM is passed
with a config that lacks "model": in switch_reasoning_llm and switch_main_llm,
ensure you set the "model" key on the incoming config (e.g., config["model"] =
new_reasoning_llm or new_main_llm) before merging/updating the stored config
dicts (reasoning_llm_config, main_llm_config); if no config is provided,
continue to set the stored config's "model" to the new model as currently done.
Also keep the existing behaviors that update reflect_llm and llm when switching
reasoning/main models.
def __init__(
    self,
    name: Optional[str] = None,
    role: Optional[str] = None,
    goal: Optional[str] = None,
    backstory: Optional[str] = None,
    instructions: Optional[str] = None,
    reasoning: bool = True,
    reasoning_config: Optional[Union[ReasoningConfig, Dict[str, Any]]] = None,
    min_confidence: float = 0.7,
    reasoning_flow: Optional[ReasoningFlow] = None,
    **kwargs
):
    """
    Initialize a ReasoningAgent.

    Args:
        name: Agent name
        role: Agent role
        goal: Agent goal
        backstory: Agent backstory
        instructions: Direct instructions
        reasoning: Enable reasoning (always True for ReasoningAgent)
        reasoning_config: Reasoning configuration
        min_confidence: Minimum confidence threshold
        reasoning_flow: Flow control configuration
        **kwargs: Additional Agent parameters
    """
    # Force reasoning to be enabled
    kwargs['reasoning_steps'] = True
    kwargs['self_reflect'] = kwargs.get('self_reflect', True)
Unused reasoning parameter triggers Ruff ARG002.
If it’s kept for API compatibility, explicitly consume it (or forward it) to avoid lint errors.
🔧 Suggested fix
- # Force reasoning to be enabled
+ # Force reasoning to be enabled (keep `reasoning` for signature compatibility)
+ _ = reasoning
  kwargs['reasoning_steps'] = True

🧰 Tools
🪛 Ruff (0.14.14)
[warning] 42-42: Unused method argument: reasoning
(ARG002)
🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/agent/reasoning_agent.py` around lines
35 - 66, The __init__ currently accepts a reasoning parameter but never uses it
(causing Ruff ARG002); to keep API compatibility, explicitly consume/forward it
by assigning it into kwargs (e.g., set kwargs['reasoning'] = reasoning)
alongside the existing kwargs modifications (see __init__ and the existing
kwargs['reasoning_steps'] and kwargs['self_reflect'] assignments), so the
parameter is used and the lint error is resolved.
def _enhance_instructions_for_reasoning(self):
    """Enhance agent instructions with reasoning guidance."""
    reasoning_guidance = f"""

REASONING INSTRUCTIONS:
- Use step-by-step reasoning for all complex problems
- Show your thinking process explicitly
- Assess confidence for each reasoning step (0.0-1.0)
- Minimum {self.reasoning_config.min_steps} steps, maximum {self.reasoning_config.max_steps} steps
- Reasoning style: {self.reasoning_config.style}
- Minimum confidence threshold: {self.min_confidence}
"""

    if self.instructions:
        self.instructions += reasoning_guidance
    else:
        base_instructions = f"You are {self.role or 'an assistant'}"
        if self.goal:
            base_instructions += f" with the goal: {self.goal}"
        self.instructions = base_instructions + reasoning_guidance
show_internal_thoughts config is ignored in prompt guidance.
The prompt always instructs the model to show thinking, even when the config disables it. Wire the flag into the guidance.
🛠️ Suggested fix
- reasoning_guidance = f"""
+ thought_instruction = (
+ "Show your thinking process explicitly"
+ if self.reasoning_config.show_internal_thoughts
+ else "Keep internal reasoning hidden; provide concise answers."
+ )
+ reasoning_guidance = f"""
REASONING INSTRUCTIONS:
- Use step-by-step reasoning for all complex problems
-- Show your thinking process explicitly
+- {thought_instruction}
- Assess confidence for each reasoning step (0.0-1.0)

🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/agent/reasoning_agent.py` around lines
92 - 112, _enhance_instructions_for_reasoning currently always injects "Show
your thinking process explicitly" regardless of the agent config; update it to
read the show_internal_thoughts flag (e.g.,
self.reasoning_config.show_internal_thoughts) and conditionally include either
the explicit chain-of-thought guidance or a privacy-safe alternative (e.g., "Do
not reveal internal chain-of-thought; provide concise step summaries and final
answer") in the reasoning_guidance string; keep all other fields (min_steps,
max_steps, style, min_confidence) and the existing logic that appends to
self.instructions intact so the prompt respects the flag.
def reason_step(
    agent: Any,
    thought: str,
    action: str,
    min_confidence: float = 0.7
) -> ReasoningStep:
    """
    Create a reasoning step with confidence validation.

    Args:
        agent: The agent performing the reasoning
        thought: The reasoning thought/analysis
        action: The action or conclusion from the thought
        min_confidence: Minimum confidence required

    Returns:
        ReasoningStep with confidence scoring
    """
    # Simulate confidence calculation (in real implementation, this could use LLM)
    confidence = min(0.95, len(action) / 100.0 + 0.5)  # Simple heuristic

    step = ReasoningStep(
        step_number=len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1,
        title=f"Step {len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1}",
        thought=thought,
        action=action,
        confidence=confidence
    )

    # Validate confidence
    if confidence < min_confidence:
        step.action_state = ActionState.RESET

    return step
Fix step counting when reasoning_trace isn’t a dict.
getattr(...).get(...) will raise if reasoning_trace is None or a ReasoningTrace instance. Compute the step number from the actual trace type.
🛠️ Suggested fix
- step = ReasoningStep(
- step_number=len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1,
- title=f"Step {len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1}",
+ trace = getattr(agent, "reasoning_trace", None)
+ steps = (
+ trace.steps if isinstance(trace, ReasoningTrace)
+ else trace.get("steps", []) if isinstance(trace, dict)
+ else []
+ )
+ step_number = len(steps) + 1
+
+ step = ReasoningStep(
+ step_number=step_number,
+ title=f"Step {step_number}",
thought=thought,
action=action,
confidence=confidence
)

🤖 Prompt for AI Agents
In `@src/praisonai-agents/praisonaiagents/reasoning.py` around lines 96 - 129, The
step-counting uses getattr(agent, 'reasoning_trace', {}).get('steps', []) which
breaks when reasoning_trace is None or a ReasoningTrace object; in reason_step,
first retrieve trace = getattr(agent, 'reasoning_trace', None) and normalize it
to a sequence: if trace is None set steps_list = [], elif isinstance(trace,
dict) use trace.get('steps', []), elif hasattr(trace, 'steps') use
getattr(trace, 'steps') (or list(trace.steps)), elif isinstance(trace, list) use
it directly; then compute step_number = len(steps_list) + 1 and use that for
ReasoningStep.step_number and the title to avoid calling .get on non-dict types.
Cursor Bugbot has reviewed your changes and found 10 potential issues.
Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.
This PR is being reviewed by Cursor Bugbot
step = ReasoningStep(
    step_number=len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1,
    title=f"Step {len(getattr(agent, 'reasoning_trace', {}).get('steps', [])) + 1}",
reason_step uses dictionary .get() on Pydantic model
High Severity
The reason_step function calls .get('steps', []) on agent.reasoning_trace, but ReasoningTrace is a Pydantic BaseModel that doesn't have a .get() method. When reasoning_trace exists as a ReasoningTrace instance, this code raises an AttributeError. The code treats the object as a dictionary when it's actually a Pydantic model.
    confidence=0.85
)
self.reasoning_trace.steps.append(response_step)
self.last_reasoning_steps.append(response_step)
last_reasoning_steps never cleared causing unbounded growth
Medium Severity
In DualBrainAgent.chat(), last_reasoning_steps is never cleared when starting a new chat session, unlike ReasoningAgent which clears it in start_reasoning_trace(). Each call to chat() appends steps, causing the list to grow indefinitely and reporting misleading information in get_brain_status().
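A minimal sketch of a fix, assuming the attribute names shown in this PR's diffs, is to reset the list at the top of chat() so it mirrors ReasoningAgent.start_reasoning_trace():

```python
def chat(self, message: str, **kwargs) -> str:
    # Clear per-call bookkeeping so get_brain_status() only reflects the
    # current exchange instead of accumulating steps across calls.
    self.last_reasoning_steps = []
    # ... existing dual-brain flow (reasoning pass, response generation) continues here ...
```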
finally:
    # Restore original LLM
    self.llm = original_llm
LLM switching fails silently with custom LLM providers
High Severity
The _reason_with_analytical_brain method attempts to switch LLMs by setting self.llm = self.reasoning_llm, but this only works when _using_custom_llm is False. When the main LLM uses a provider/model format (like anthropic/claude-3-sonnet) or base_url, the parent Agent creates an llm_instance which the chat() method uses instead of self.llm. The reasoning LLM is silently ignored, and all calls use the main LLM.
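One direction that sidesteps the swap entirely, sketched under the assumption that the base Agent accepts name/instructions/llm keyword arguments (which the diffs in this PR suggest but do not prove), is to run the analytical pass on a dedicated throwaway agent bound to the reasoning model:

```python
def _reason_with_analytical_brain(self, reasoning_prompt: str) -> str:
    # Hypothetical alternative: instead of mutating self.llm (which the parent
    # class may ignore when a custom llm_instance is in use), construct a
    # short-lived agent configured directly with the reasoning model.
    reasoning_agent = Agent(
        name=f"{self.name}-reasoning",
        instructions=self.reasoning_llm_config.get("system_prompt", ""),
        llm=self.reasoning_llm,
    )
    return reasoning_agent.chat(reasoning_prompt)
```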
finally:
    # Restore original task description
    task.description = original_description
execute() creates incomplete reasoning trace with no steps
Medium Severity
In DualBrainAgent.execute(), a ReasoningTrace is created and the task analysis is performed via _reason_with_analytical_brain(), but no ReasoningStep is ever added to reasoning_trace.steps. Unlike chat() which adds steps, execute() leaves steps empty. This results in an incomplete reasoning trace with steps=[] and overall_confidence=0.0, making the trace data inconsistent and misleading.
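To keep the trace shape consistent with chat(), a sketch of the missing step (the variable name task_analysis and the confidence value are illustrative) could be appended inside execute() right after the analytical pass:

```python
# Inside DualBrainAgent.execute(), after _reason_with_analytical_brain() returns:
analysis_step = ReasoningStep(
    step_number=1,
    title="Task Analysis (Reasoning Brain)",
    thought=f"Using {self.reasoning_llm} to analyze the task before execution",
    action=task_analysis,  # illustrative name for the returned analysis text
    confidence=0.9,
)
self.reasoning_trace.steps.append(analysis_step)
```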
self.last_reasoning_steps.append(response_step)

# Complete reasoning trace
self.reasoning_trace.final_answer = final_response
LLM returning None causes Pydantic validation crash
High Severity
When Agent.chat() fails (due to LLM errors, guardrail failures, etc.), it returns None. In DualBrainAgent.chat(), this None is passed directly to ReasoningStep(action=reasoning_analysis) and assigned to reasoning_trace.final_answer. Since ReasoningStep.action and ReasoningTrace.final_answer are typed as str (not Optional[str]), Pydantic raises a ValidationError. The same issue exists in ReasoningAgent.chat() where complete_reasoning_trace(response) is called with potentially-None response.
Additional Locations (1)
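A defensive sketch for this, assuming the fallback text is acceptable (alternatively the fields could be retyped as Optional[str]), normalizes the parent's return value before it reaches the Pydantic models:

```python
final_response = self._generate_response_with_main_brain(message, reasoning_analysis)
if final_response is None:
    # Agent.chat() can return None on LLM errors or guardrail rejections;
    # ReasoningStep.action and ReasoningTrace.final_answer require a str.
    final_response = "No response could be generated for this request."
```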
    ReasoningStep,
    ActionState,
    ReasoningFlow,
    reason_step
    ReasoningTrace,
    ReasoningStep,
    ActionState,
    ReasoningFlow
- Reasoning trace tracking
"""

from typing import List, Optional, Any, Dict, Union, Literal, TYPE_CHECKING
- Step-by-step reasoning with flow control
"""

from typing import List, Optional, Any, Dict, Union, Literal, Callable, Tuple
self.min_confidence = min_confidence
self.reasoning_flow = reasoning_flow or ReasoningFlow()
self.reasoning_trace: Optional[ReasoningTrace] = None
self.last_reasoning_steps: List[ReasoningStep] = []
Duplicated reasoning config initialization across agents
Medium Severity
The reasoning config initialization logic (lines 77-82 in reasoning_agent.py and lines 92-97 in dual_brain_agent.py) is identical: converting dict to ReasoningConfig, creating default config if None, or using the provided config. This pattern also includes identical reasoning_trace and last_reasoning_steps initialization.
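One way to deduplicate this, sketched as a small module-level helper (the function name and its placement in reasoning.py are suggestions, not part of this PR), keeps both agents' __init__ bodies identical and short:

```python
from typing import Any, Dict, Optional, Union

def resolve_reasoning_config(
    reasoning_config: Optional[Union[ReasoningConfig, Dict[str, Any]]]
) -> ReasoningConfig:
    """Normalize None / dict / ReasoningConfig input into a ReasoningConfig instance."""
    if reasoning_config is None:
        return ReasoningConfig()
    if isinstance(reasoning_config, dict):
        return ReasoningConfig(**reasoning_config)
    return reasoning_config
```

Both agents could then call `self.reasoning_config = resolve_reasoning_config(reasoning_config)` followed by the shared `reasoning_trace = None` / `last_reasoning_steps = []` initialization.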


Implements ReasoningAgent and DualBrainAgent classes as requested in issue #968
Changes:
Features:
Generated with Claude Code
Note
Medium Risk
Mostly additive, but it introduces new agent behaviors that alter prompt content and temporarily switch the underlying llm, which can affect runtime expectations and downstream integrations if adopted.

Overview
Adds a new reasoning module (e.g., ReasoningConfig, ReasoningTrace, ReasoningStep, ActionState, ReasoningFlow, reason_step) to support step-based reasoning traces with confidence scoring and simple flow-control hooks.

Introduces two new agent types: ReasoningAgent, which wraps chat/execute to enforce step-by-step reasoning prompts and record a trace, and DualBrainAgent, which orchestrates separate models for analysis vs final response by temporarily switching the underlying llm and appending analytical insights into task execution.

Exports the new agents and reasoning types from the package and agent module __init__ so they're available as part of the public API.

Written by Cursor Bugbot for commit c7fcc1a. This will update automatically on new commits. Configure here.