diff --git a/concepts/reasoning/overview.mdx b/concepts/reasoning/overview.mdx index 3d27c433..561ae49c 100644 --- a/concepts/reasoning/overview.mdx +++ b/concepts/reasoning/overview.mdx @@ -1,35 +1,56 @@ --- title: What is Reasoning? sidebarTitle: Overview -description: Reasoning gives Agents the ability to "think" before responding and "analyze" the results of their actions (i.e. tool calls), greatly improving the Agents' ability to solve problems that require sequential tool calls. +description: Give your agents the ability to think through problems step-by-step, validate their work, and self-correct, dramatically improving accuracy on complex tasks. --- -Reasoning Agents go through an internal chain of thought before responding, working through different ideas, validating and correcting as needed. +Imagine asking a regular AI agent to solve a complex math problem, analyze a scientific paper, or plan a multi-step travel itinerary. Often, it rushes to an answer without fully thinking through the problem. The result? Wrong calculations, incomplete analysis, or illogical plans. -## ReAct: Reason and Act +Now imagine an agent that pauses, thinks through the problem step-by-step, validates its reasoning, catches its own mistakes, and only then provides an answer. This is reasoning in action, and it transforms agents from quick responders into careful problem-solvers. -At the core of effective reasoning lies the **ReAct** (Reason and Act) methodology - a paradigm where agents alternate between reasoning about a problem and taking actions (like calling tools) to gather information or execute tasks. This iterative process allows agents to break down complex problems into manageable steps, validate their assumptions through action, and adjust their approach based on real-world feedback. +## Why Reasoning Matters -In Agno, ReAct principles are embedded throughout our reasoning implementations. -Whether an agent is using reasoning models to think through a problem, or employing reasoning tools to structure their thought process, they follow this fundamental pattern of reasoning → acting → observing → reasoning again until reaching a solution. +Without reasoning, agents struggle with tasks that require: -Agno supports 3 approaches to reasoning: +- **Multi-step thinking** - Breaking complex problems into logical steps +- **Self-validation** - Checking their own work before responding +- **Error correction** - Catching and fixing mistakes mid-process +- **Strategic planning** - Thinking ahead instead of reacting -1. [Reasoning Models](#reasoning-models) -2. [Reasoning Tools](#reasoning-tools) -3. [Reasoning Agents](#reasoning-agents) +**Example:** Ask a normal agent "Which is bigger: 9.11 or 9.9?" and it might incorrectly say 9.11 (comparing digit by digit instead of decimal values). A reasoning agent thinks through the decimal comparison logic first and gets it right. -Which approach works best will depend on your use case, we recommend trying them all and immersing yourself in this new era of Reasoning Agents! +## How Reasoning Works -## Reasoning Models +Agno supports multiple reasoning patterns, each suited for different problem-solving approaches: -Reasoning models are a separate class of large language models pre-trained to think before they answer. They produce an internal chain of thought before responding. Examples of reasoning models include OpenAI o-series, Claude 3.7 sonnet in extended-thinking mode, Gemini 2.0 flash thinking and DeepSeek-R1. 
+**Chain-of-Thought (CoT):** The model thinks through a problem step-by-step internally, breaking down complex reasoning into logical steps before producing an answer. This is used by reasoning models and reasoning agents. -Reasoning at the model layer is all about what the model does **before it starts generating a final response**. Reasoning models excel at single-shot use-cases. They're perfect for solving hard problems (coding, math, physics) that don't require multiple turns, or calling tools sequentially. +**ReAct (Reason and Act):** An iterative cycle where the agent alternates between reasoning and taking actions: -You can try any supported Agno model and if that model has reasoning capabilities, it will be used to reason about the problem. +1. **Reason** - Think through the problem, plan next steps +2. **Act** - Take action (call a tool, perform calculation) +3. **Observe** - Analyze the results +4. **Repeat** - Continue reasoning based on new information until solved -### Example +This pattern is particularly useful with reasoning tools and when agents need to validate assumptions through real-world feedback. + +## Three Approaches to Reasoning + +Agno gives you three ways to add reasoning to your agents, each suited for different use cases: + +### 1. Reasoning Models + +**What:** Pre-trained models that natively think before answering (e.g. OpenAI gpt-5, Claude 4.5 Sonnet, Gemini 2.0 Flash Thinking, DeepSeek-R1). + +**How it works:** The model generates an internal chain of thought before producing its final response. This happens at the model layer: you simply use the model and reasoning happens automatically. + +**Best for:** + +- Single-shot complex problems (math, coding, physics) +- Problems where you trust the model to handle reasoning internally +- Use cases where you don't need to control the reasoning process + +**Example:** ```python o3_mini.py from agno.agent import Agent @@ -46,17 +67,11 @@ agent.print_response( ) ``` -Read more about reasoning models in the [Reasoning Models Guide](/concepts/reasoning/reasoning-models). - -## Reasoning Model + Response Model - -What if we wanted to use a Reasoning Model to reason but a different model to generate the response? It is well known that reasoning models are great at solving problems but not that great at responding in a natural way (like Claude Sonnet or GPT-4o). +**Learn more:** [Reasoning Models Guide](/concepts/reasoning/reasoning-models) -By using a model to generate the response, and a different one for reasoning, we can have the best of both worlds: +#### Reasoning Model + Response Model -### Example - -Let's use DeepSeek-R1 from Groq for reasoning and Claude Sonnet for a natural response. +Here's a powerful pattern: use one model for reasoning (like DeepSeek-R1) and another for the final response (like GPT-4o). Why? Reasoning models are excellent at solving problems but often produce robotic or overly technical responses. By combining a reasoning model with a natural-sounding response model, you get accurate thinking with polished output. 
```python deepseek_plus_claude.py from agno.agent import Agent @@ -65,7 +80,7 @@ from agno.models.groq import Groq # Setup your Agent using Claude as main model, and DeepSeek as reasoning model claude_with_deepseek_reasoner = Agent( - model=Claude(id="claude-3-7-sonnet-20250219"), + model=Claude(id="claude-3-5-sonnet-20241022"), reasoning_model=Groq( id="deepseek-r1-distill-llama-70b", temperature=0.6, max_tokens=1024, top_p=0.95 ), @@ -79,13 +94,19 @@ claude_with_deepseek_reasoner.print_response( ) ``` -## Reasoning Tools +### 2. Reasoning Tools + +**What:** Give any model explicit tools for thinking (like a scratchpad or notepad) to work through problems step-by-step. + +**How it works:** You provide tools like `think()` and `analyze()` that let the agent explicitly structure its reasoning process. The agent calls these tools to organize its thoughts before responding. -By giving a model **reasoning tools**, we can greatly improve its reasoning capabilities by providing a dedicated space for structured thinking. This is a simple, yet effective approach to add reasoning to non-reasoning models. +**Best for:** -The research was first published by Anthropic in [this blog post](https://www.anthropic.com/engineering/claude-think-tool) but has been practiced by many AI Engineers (including our own team) long before it was published. +- Adding reasoning to non-reasoning models (like regular GPT-4o or Claude 3.5 Sonnet) +- When you want visibility into the reasoning process +- Tasks that benefit from structured thinking (research, analysis, planning) -### Example +**Example:** ```python claude_reasoning_tools.py from agno.agent import Agent @@ -94,7 +115,7 @@ from agno.tools.reasoning import ReasoningTools # Setup our Agent with the reasoning tools reasoning_agent = Agent( - model=Claude(id="claude-3-7-sonnet-latest"), + model=Claude(id="claude-3-5-sonnet-20241022"), tools=[ ReasoningTools(add_instructions=True), ], @@ -111,30 +132,34 @@ reasoning_agent.print_response( ) ``` -Read more about reasoning tools in the [Reasoning Tools Guide](/concepts/reasoning/reasoning-tools). +**Learn more:** [Reasoning Tools Guide](/concepts/reasoning/reasoning-tools) -## Reasoning Agents +### 3. Reasoning Agents -Reasoning Agents are a new type of multi-agent system developed by Agno that combines chain of thought reasoning with tool use. +**What:** Transform any regular model into a reasoning system through structured chain-of-thought processing via prompt engineering. -You can enable reasoning on any Agent by setting `reasoning=True`. +**How it works:** Set `reasoning=True` on any agent. Agno creates a separate reasoning agent that uses **your same model** (not a different one) but with specialized prompting to force step-by-step thinking, tool use, and self-validation. Works best with non-reasoning models like gpt-4o or Claude Sonnet. With reasoning models like gpt-5-mini, you're usually better off using them directly. -When an Agent with `reasoning=True` is given a task, a separate "Reasoning Agent" first solves the problem using chain-of-thought. At each step, it calls tools to gather information, validate results, and iterate until it reaches a final answer. Once the Reasoning Agent has a final answer, it hands the results back to the original Agent to validate and provide a response. 
+**Best for:** -### Example +- Transforming regular models into reasoning systems +- Complex tasks requiring multiple sequential tool calls +- When you need automated chain-of-thought with iteration and self-correction + +**Example:** ```python reasoning_agent.py from agno.agent import Agent from agno.models.openai import OpenAIChat -# Setup our Agent with reasoning enabled +# Transform a regular model into a reasoning system reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), - reasoning=True, + model=OpenAIChat(id="gpt-4o"), # Regular model, not a reasoning model + reasoning=True, # Enables structured chain-of-thought markdown=True, ) -# Run the Agent +# The agent will now think step-by-step before responding reasoning_agent.print_response( "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.", stream=True, @@ -142,4 +167,16 @@ reasoning_agent.print_response( ) ``` -Read more about reasoning agents in the [Reasoning Agents Guide](/concepts/reasoning/reasoning-agents). +**Learn more:** [Reasoning Agents Guide](/concepts/reasoning/reasoning-agents) + +## Choosing the Right Approach + +Here's how the three approaches compare: + +| Approach | Transparency | Best Use Case | Model Requirements | +| -------------------- | ---------------------------------- | ------------------------------ | --------------------------------- | +| **Reasoning Models** | Continuous (full reasoning trace) | Single-shot complex problems | Requires reasoning-capable models | +| **Reasoning Tools** | Structured (explicit step-by-step) | Structured research & analysis | Works with any model | +| **Reasoning Agents** | Iterative (agent interactions) | Multi-step tool-based tasks | Works with any model | + + diff --git a/concepts/reasoning/reasoning-agents.mdx b/concepts/reasoning/reasoning-agents.mdx index df6f2743..e174c015 100644 --- a/concepts/reasoning/reasoning-agents.mdx +++ b/concepts/reasoning/reasoning-agents.mdx @@ -1,171 +1,341 @@ --- title: Reasoning Agents +description: Transform any model into a reasoning system through structured chain-of-thought processing, perfect for complex problems that require multiple steps, tool use, and self-validation. --- -Reasoning Agents are a new type of multi-agent system developed by Agno that combines chain of thought reasoning with tool use. -You can enable reasoning on any Agent by setting `reasoning=True`. +**The problem:** Regular models often rush to answers on complex problems, missing steps or making logical errors. -When an Agent with `reasoning=True` is given a task, a separate "Reasoning Agent" first solves the problem using chain-of-thought. At each step, it calls tools to gather information, validate results, and iterate until it reaches a final answer. Once the Reasoning Agent has a final answer, it hands the results back to the original Agent to validate and provide a response. +**The solution:** Enable `reasoning=True` and watch your model break down the problem, explore multiple approaches, validate results, and deliver thoroughly vetted solutions. -### Example +**The beauty?** It works with any model, from GPT-4o to Claude to local models via Ollama. You're not limited to specialized reasoning models. 
+ +## How It Works + +Enable reasoning on any agent by setting `reasoning=True`: + +```python +from agno.agent import Agent +from agno.models.openai import OpenAIChat + +reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), # Any model works + reasoning=True, +) +``` + +Behind the scenes, Agno creates a **separate reasoning agent instance** that uses your same model but with specialized prompting that guides it through a rigorous 6-step reasoning framework: + +### The Reasoning Framework + +1. **Problem Analysis** + + - Restate the task to ensure full comprehension + - Identify required information and necessary tools + +2. **Decompose and Strategize** + + - Break down the problem into subtasks + - Develop multiple distinct approaches + +3. **Intent Clarification and Planning** + + - Articulate the user's intent + - Select the best strategy with clear justification + - Create a detailed action plan + +4. **Execute the Action Plan** + + - For each step: document title, action, result, reasoning, next action, and confidence score + - Call tools as needed to gather information + - Self-correct if errors are detected + +5. **Validation (Mandatory)** + + - Cross-verify with alternative approaches + - Use additional tools to confirm accuracy + - Reset and revise if validation fails + +6. **Final Answer** + - Deliver the thoroughly validated solution + - Explain how it addresses the original task + +The reasoning agent works through these steps iteratively (up to 10 by default), building on previous results, calling tools, and self-correcting until it reaches a confident solution. Once complete, it hands the full reasoning back to your main agent for the final response. + +### How It Differs by Model Type + +**With regular models** (gpt-4o, Claude Sonnet, Gemini): + +- Forces structured chain-of-thought through the 6-step framework +- Creates detailed reasoning steps with confidence scores +- **This is where reasoning agents shine**: transforming any model into a reasoning system + +**With native reasoning models** (gpt-5-mini, DeepSeek-R1, o3-mini): + +- Uses the model's built-in reasoning capabilities +- Adds a validation pass from your main agent +- Useful for critical tasks but often unnecessary overhead for simpler problems + +## Basic Example + +Let's transform a regular GPT-4o model into a reasoning system: ```python reasoning_agent.py from agno.agent import Agent from agno.models.openai import OpenAIChat -# Setup our Agent with reasoning enabled +# Transform a regular model into a reasoning system reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), + model=OpenAIChat(id="gpt-4o"), reasoning=True, markdown=True, ) -# Run the Agent reasoning_agent.print_response( "Solve the trolley problem. Evaluate multiple ethical frameworks. Include an ASCII diagram of your solution.", stream=True, - show_full_reasoning=True, + show_full_reasoning=True, # Shows the complete reasoning process ) ``` -## Enabling Agentic Reasoning - -To enable Agentic Reasoning, set `reasoning=True` or set the `reasoning_model` to a model that supports structured outputs. - -If `reasoning_model` is not set, the primary `Agent` model will be used for reasoning. - -### Reasoning Model Requirements +### What You'll See -The `reasoning_model` must be able to handle structured outputs, this includes models like gpt-5-mini and claude-3-7-sonnet that support structured outputs natively or gemini models that support structured outputs using JSON mode. 
+With `show_full_reasoning=True`, you'll see: -### Using a Reasoning Model that supports native Reasoning +- **Each reasoning step** with its title, action, and result +- **The agent's thought process** including why it chose each approach +- **Tool calls made** during reasoning (if tools are provided) +- **Validation checks** performed to verify the solution +- **Confidence scores** for each step (0.0–1.0) +- **Self-corrections** if the agent detects errors +- **The final polished response** from your main agent -If you set `reasoning_model` to a model that supports native Reasoning like gpt-5-mini or deepseek-r1, the reasoning model will be used to reason and the primary `Agent` model will be used to respond. See [Reasoning Models + Response Models](/concepts/reasoning/reasoning-models#reasoning-model-%2B-response-model) for more information. +## Reasoning with Tools -## Reasoning with tools - -You can also use tools with a reasoning agent. Lets create a finance agent that can reason. +Here's where reasoning agents truly excel: combining multi-step reasoning with tool use. The reasoning agent can call tools iteratively, analyze results, and build toward a comprehensive solution. ```python finance_reasoning.py from agno.agent import Agent from agno.models.openai import OpenAIChat from agno.tools.duckduckgo import DuckDuckGoTools -# Setup our Agent with reasoning enabled reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), + model=OpenAIChat(id="gpt-4o"), tools=[DuckDuckGoTools()], - instructions=["Use tables to show data"], - markdown=True, + instructions=["Use tables to display data"], reasoning=True, + markdown=True, ) -# Run the Agent -reasoning_agent.print_response("What is going in Paris?", stream=True, show_full_reasoning=True) +reasoning_agent.print_response( + "Compare the market performance of NVDA, AMD, and INTC over the past quarter. What are the key drivers?", + stream=True, + show_full_reasoning=True, +) ``` -## More Examples +The reasoning agent will: -### Logical puzzles +1. Break down the task (need stock data for 3 companies) +2. Use DuckDuckGo to search for current market data +3. Analyze each company's performance +4. Search for news about key drivers +5. Validate findings across multiple sources +6. Create a comprehensive comparison with tables +7. Provide a final answer with clear insights -```python logical_puzzle.py -from agno.agent import Agent -from agno.models.openai import OpenAIChat +## Configuration Options -task = ( - "Three missionaries and three cannibals need to cross a river. " - "They have a boat that can carry up to two people at a time. " - "If, at any time, the cannibals outnumber the missionaries on either side of the river, the cannibals will eat the missionaries. " - "How can all six people get across the river safely? Provide a step-by-step solution and show the solutions as an ascii diagram" -) -reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True +### Display Options + +Want to peek under the hood? 
Control what you see during reasoning: + +```python +agent.print_response( + "Your question", + show_full_reasoning=True, # Display complete reasoning process (default: False) ) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) ``` -### Mathematical proofs +### Capturing Reasoning Events -```python mathematical_proof.py -from agno.agent import Agent -from agno.models.openai import OpenAIChat +For building custom UIs or programmatically tracking reasoning progress, you can capture reasoning events (`ReasoningStarted`, `ReasoningStep`, `ReasoningCompleted`) as they happen during streaming. See the [Reasoning Reference](/reference/reasoning/reasoning#reasoning-event-types) for event attributes and complete code examples. + +### Iteration Control + +Adjust how many reasoning steps the agent takes: -task = "Prove that for any positive integer n, the sum of the first n odd numbers is equal to n squared. Provide a detailed proof." +```python reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + reasoning_min_steps=2, # Minimum reasoning steps (default: 1) + reasoning_max_steps=15, # Maximum reasoning steps (default: 10) ) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) ``` -### Scientific research +- **`reasoning_min_steps`**: Ensures the agent thinks through at least this many steps before answering +- **`reasoning_max_steps`**: Prevents infinite loops by capping the iteration count + +### Custom Reasoning Agent + +For advanced use cases, you can provide your own reasoning agent: -```python scientific_research.py +```python from agno.agent import Agent from agno.models.openai import OpenAIChat -task = ( - "Read the following abstract of a scientific paper and provide a critical evaluation of its methodology," - "results, conclusions, and any potential biases or flaws:\n\n" - "Abstract: This study examines the effect of a new teaching method on student performance in mathematics. " - "A sample of 30 students was selected from a single school and taught using the new method over one semester. " - "The results showed a 15% increase in test scores compared to the previous semester. " - "The study concludes that the new teaching method is effective in improving mathematical performance among high school students." +# Create a custom reasoning agent with specific instructions +custom_reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + instructions=[ + "Focus heavily on mathematical rigor", + "Always provide step-by-step proofs", + ], ) -reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True + +main_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + reasoning_agent=custom_reasoning_agent, # Use your custom agent ) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) ``` -### Ethical dilemma +## Example Use Cases -```python ethical_dilemma.py -from agno.agent import Agent -from agno.models.openai import OpenAIChat + + + **Breaking down complex logic problems:** -task = ( - "You are a train conductor faced with an emergency: the brakes have failed, and the train is heading towards " - "five people tied on the track. You can divert the train onto another track, but there is one person tied there. " - "Do you divert the train, sacrificing one to save five? Provide a well-reasoned answer considering utilitarian " - "and deontological ethical frameworks. 
" - "Provide your answer also as an ascii art diagram." -) -reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True -) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) -``` + ```python logical_puzzle.py + from agno.agent import Agent + from agno.models.openai import OpenAIChat + + task = ( + "Three missionaries and three cannibals need to cross a river. " + "They have a boat that can carry up to two people at a time. " + "If, at any time, the cannibals outnumber the missionaries on either side of the river, the cannibals will eat the missionaries. " + "How can all six people get across the river safely? Provide a step-by-step solution and show the solution as an ASCII diagram." + ) -### Planning an itinerary + reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + markdown=True, + ) -```python planning_itinerary.py -from agno.agent import Agent -from agno.models.openai import OpenAIChat + reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) + ``` + -task = "Plan an itinerary from Los Angeles to Las Vegas" -reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True -) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) -``` + + **Problems requiring rigorous validation:** -### Creative writing + ```python mathematical_proof.py + from agno.agent import Agent + from agno.models.openai import OpenAIChat -```python creative_writing.py -from agno.agent import Agent -from agno.models.openai import OpenAIChat + task = "Prove that for any positive integer n, the sum of the first n odd numbers is equal to n squared. Provide a detailed proof." -task = "Write a short story about life in 5000000 years" -reasoning_agent = Agent( - model=OpenAIChat(id="gpt-5-mini-2024-08-06"), reasoning=True, markdown=True -) -reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) -``` + reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + markdown=True, + ) + + reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) + ``` + + + + **Critical evaluation and multi-faceted analysis:** + + ```python scientific_research.py + from agno.agent import Agent + from agno.models.openai import OpenAIChat + + task = ( + "Read the following abstract of a scientific paper and provide a critical evaluation of its methodology, " + "results, conclusions, and any potential biases or flaws:\n\n" + "Abstract: This study examines the effect of a new teaching method on student performance in mathematics. " + "A sample of 30 students was selected from a single school and taught using the new method over one semester. " + "The results showed a 15% increase in test scores compared to the previous semester. " + "The study concludes that the new teaching method is effective in improving mathematical performance among high school students." + ) + + reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + markdown=True, + ) + + reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) + ``` + + + + **Sequential planning and optimization:** + + ```python planning_itinerary.py + from agno.agent import Agent + from agno.models.openai import OpenAIChat + + task = "Plan a 3-day itinerary from Los Angeles to Las Vegas, including must-see attractions, dining recommendations, and optimal travel times." 
+ + reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + markdown=True, + ) + + reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) + ``` + + + + **Structured and coherent creative content:** + + ```python creative_writing.py + from agno.agent import Agent + from agno.models.openai import OpenAIChat + + task = "Write a short story about life in 500,000 years. Consider technological, biological, and societal evolution." + + reasoning_agent = Agent( + model=OpenAIChat(id="gpt-4o"), + reasoning=True, + markdown=True, + ) + + reasoning_agent.print_response(task, stream=True, show_full_reasoning=True) + ``` + + + +## When to Use Reasoning Agents + +**Use reasoning agents when:** + +- Your task requires multiple sequential steps +- You need the agent to call tools iteratively and build on results +- You want automated chain-of-thought without manually calling reasoning tools +- You need self-validation and error correction +- The problem benefits from exploring multiple approaches before settling on a solution + +**Consider alternatives when:** + +- You're using a native reasoning model (gpt-5-mini, DeepSeek-R1) for simple tasks: just use the model directly +- You want explicit control over when the agent thinks vs. acts: use [Reasoning Tools](/concepts/reasoning/reasoning-tools) instead +- The task is straightforward and doesn't require multi-step thinking + + + **Pro tip:** Start with `reasoning_max_steps=5` for simpler problems to avoid + unnecessary overhead. Increase to 10-15 for complex multi-step tasks. Monitor + with `show_full_reasoning=True` to see how many steps your agent actually + needs. + ## Developer Resources -- View [Reasoning Agent Examples](/examples/concepts/reasoning/agents/basic-cot) -- View [Reasoning Agent Cookbook](https://github.com/agno-agi/agno/tree/main/cookbook/reasoning/agents) -- View [Reasoning Team Examples](/examples/concepts/reasoning/teams/finance_team_chain_of_thought) -- View [Reasoning Team Cookbook](https://github.com/agno-agi/agno/blob/main/cookbook/reasoning/teams) +- [Reasoning Agent Examples](/examples/concepts/reasoning/agents/basic-cot) +- [Reasoning Agent Cookbook](https://github.com/agno-agi/agno/tree/main/cookbook/reasoning/agents) diff --git a/concepts/reasoning/reasoning-tools.mdx b/concepts/reasoning/reasoning-tools.mdx index f57c798f..805527d9 100644 --- a/concepts/reasoning/reasoning-tools.mdx +++ b/concepts/reasoning/reasoning-tools.mdx @@ -1,167 +1,407 @@ --- title: Reasoning Tools +description: Give any model explicit tools for structured thinking, transforming regular models into careful problem-solvers through deliberate reasoning steps. --- -A new class of research is emerging where giving models tools for structured thinking, like a scratchpad, greatly improves their reasoning capabilities. +**The problem:** Reasoning Agents force systematic thinking on every request. Reasoning Models require specialized models. What if you want reasoning only when needed, tailored to specific contexts? -For example, by giving a model **reasoning tools**, we can greatly improve its reasoning capabilities by providing a dedicated space for working through the problem. This is a simple, yet effective approach to add reasoning to non-reasoning models. +**The solution:** Reasoning Tools give your agent explicit `think()` and `analyze()` tools, and let the agent decide when to use them. The agent chooses when to reason, when to act, and when it has enough information to respond. 
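+
+Here's a minimal sketch of the idea, pairing the reasoning toolkit with a regular search tool so the agent can interleave thinking and acting (the model id and question are illustrative):
+
+```python
+from agno.agent import Agent
+from agno.models.openai import OpenAIChat
+from agno.tools.duckduckgo import DuckDuckGoTools
+from agno.tools.reasoning import ReasoningTools
+
+agent = Agent(
+    model=OpenAIChat(id="gpt-4o"),
+    tools=[
+        ReasoningTools(add_instructions=True),  # adds think() and analyze()
+        DuckDuckGoTools(),  # a regular "act" tool the agent can call between thoughts
+    ],
+)
+
+# The agent decides when to think, when to search, and when to answer
+agent.print_response("What were the key announcements at the latest CES?", stream=True)
+```
+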
-First published by Anthropic in [this blog post](https://www.anthropic.com/engineering/claude-think-tool), this technique has been practiced by many AI Engineers (including our own team) long before it was published. +Agno provides **four specialized reasoning toolkits**, each optimized for different domains: -## Reasoning Tools +| Toolkit | Purpose | Core Tools | +| ------------------ | -------------------------------------- | -------------------------------------------------------- | +| **ReasoningTools** | General-purpose thinking and analysis | `think()`, `analyze()` | +| **KnowledgeTools** | Reasoning with knowledge base searches | `think()`, `search_knowledge()`, `analyze()` | +| **MemoryTools** | Reasoning about user memory operations | `think()`, `get/add/update/delete_memory()`, `analyze()` | +| **WorkflowTools** | Reasoning about workflow execution | `think()`, `run_workflow()`, `analyze()` | -The first version of the Reasoning Tools, previously known as Thinking tools, was published by Anthropic in [this blog post](https://www.anthropic.com/engineering/claude-think-tool). +All four toolkits follow the same **Think → Act → Analyze** pattern but provide domain-specific actions tailored to their use case. -```python claude_reasoning_tools.py +This approach was first popularized by Anthropic in their ["Extended Thinking" blog post](https://www.anthropic.com/engineering/claude-think-tool), though many AI engineers (including our team) were using similar patterns long before. + +## Why Reasoning Tools? + +Reasoning Tools give you the **best of both worlds**: + +1. **Works with any model** - Even models without native reasoning capabilities +2. **Explicit control** - The agent decides when to think vs. when to act +3. **Full transparency** - You see exactly what the agent is thinking +4. **Flexible workflow** - The agent can interleave thinking with tool calls +5. **Domain-optimized** - Each toolkit is specialized for its specific use case +6. **Natural reasoning** - Feels more like human problem-solving (think, act, analyze, repeat) + +**The key difference:** With Reasoning Agents, the reasoning happens automatically in a structured loop. With Reasoning Tools, the agent explicitly chooses when to use the `think()` and `analyze()` tools, giving you more control and visibility. + +## The Four Reasoning Toolkits + +### 1. ReasoningTools - General Purpose Thinking + +For general problem-solving without domain-specific tools. + +**What it provides:** + +- `think()` - Plan and reason about the problem +- `analyze()` - Evaluate results and determine next steps + +**When to use:** + +- Mathematical or logical problems +- Strategic planning +- Analysis tasks that don't require external data +- Any scenario where you want structured reasoning + +**Example:** + +```python from agno.agent import Agent -from agno.models.anthropic import Claude +from agno.models.openai import OpenAIChat from agno.tools.reasoning import ReasoningTools -from agno.tools.duckduckgo import DuckDuckGoTools -# Setup our Agent with the reasoning tools -reasoning_agent = Agent( - model=Claude(id="claude-3-7-sonnet-latest"), - tools=[ - ReasoningTools(add_instructions=True), - DuckDuckGoTools(), - ], - instructions="Use tables where possible", - markdown=True, +agent = Agent( + model=OpenAIChat(id="gpt-4o"), + tools=[ReasoningTools(add_instructions=True)], ) -# Run the Agent -reasoning_agent.print_response( - "What are the fastest cars in the market? 
Only the report, no other text.", +agent.print_response( + "Which is bigger: 9.11 or 9.9? Explain your reasoning.", stream=True, - show_full_reasoning=True, - stream_intermediate_steps=True, ) ``` -See the [Reasoning Tools](/concepts/tools/reasoning_tools/reasoning-tools) documentation for more details. +### 2. KnowledgeTools - Reasoning with Knowledge Bases + +For searching and analyzing information from knowledge bases (RAG). + +**What it provides:** -## Knowledge Tools +- `think()` - Plan search strategy and refine approach +- `search_knowledge()` - Query the knowledge base +- `analyze()` - Evaluate search results for relevance and completeness -The Knowledge Tools take the Reasoning Tools one step further by allowing the Agent to **search** a knowledge base and **analyze** the results of their actions. +**When to use:** -**KnowledgeTools = `think` + `search` + `analyze`** +- Document retrieval and analysis +- RAG (Retrieval-Augmented Generation) workflows +- Research tasks requiring multiple search iterations +- When you need to verify information from knowledge bases -```python knowledge_tools.py -import os +**Example:** + +```python from agno.agent import Agent -from agno.knowledge.embedder.openai import OpenAIEmbedder -from agno.knowledge.knowledge import Knowledge +from agno.knowledge.pdf import PDFKnowledgeBase from agno.models.openai import OpenAIChat from agno.tools.knowledge import KnowledgeTools -from agno.vectordb.lancedb import LanceDb, SearchType - - -agno_docs = Knowledge( - vector_db=LanceDb( - uri="tmp/lancedb", - table_name="agno_docs", - search_type=SearchType.hybrid, - embedder=OpenAIEmbedder(id="text-embedding-3-small"), +from agno.vectordb.pgvector import PgVector + +# Create knowledge base +knowledge = PDFKnowledgeBase( + path="data/research_papers/", + vector_db=PgVector( + table_name="research_papers", + db_url="postgresql+psycopg://ai:ai@localhost:5532/ai", ), ) +agent = Agent( + model=OpenAIChat(id="gpt-4o"), + tools=[KnowledgeTools(knowledge=knowledge, add_instructions=True)], + instructions="Search thoroughly and cite your sources", +) -knowledge_tools = KnowledgeTools( - knowledge=agno_docs, - think=True, - search=True, - analyze=True, - add_few_shot=True, +agent.print_response( + "What are the latest findings on quantum entanglement in our research papers?", + stream=True, ) +``` +**How it works:** -agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), - tools=[knowledge_tools], - markdown=True, -) +1. Agent calls `think()`: "I need to search for quantum entanglement. Let me try multiple search terms." +2. Agent calls `search_knowledge("quantum entanglement")` +3. Agent calls `analyze()`: "Results are too broad. Need more specific search." +4. Agent calls `search_knowledge("quantum entanglement recent findings")` +5. Agent calls `analyze()`: "Now I have sufficient, relevant results." +6. Agent provides final answer +### 3. MemoryTools - Reasoning about User Memories -agno_docs.add_content( - url="https://docs.agno.com/llms-full.txt" -) +For managing and reasoning about user memories with CRUD operations. 
+**What it provides:** -agent.print_response("How do I build multi-agent teams with Agno?", stream=True) -``` +- `think()` - Plan memory operations +- `get_memories()` - Retrieve user memories +- `add_memory()` - Store new memories +- `update_memory()` - Modify existing memories +- `delete_memory()` - Remove memories +- `analyze()` - Evaluate memory operations -See the [Knowledge Tools](/concepts/tools/reasoning_tools/knowledge-tools) documentation for more details. +**When to use:** -## Memory Tools +- Personalized agent interactions +- User preference management +- Maintaining conversation context across sessions +- Building user profiles over time -The Memory Tools allow the Agent to use memories to reason about the question and work through it step by step. +**Example:** -```python memory_tools.py +```python from agno.agent import Agent -from agno.db.sqlite import SqliteDb +from agno.db.postgres import PostgresDb from agno.models.openai import OpenAIChat from agno.tools.memory import MemoryTools -# Create a database connection -db = SqliteDb( - db_file="tmp/memory.db" -) - -memory_tools = MemoryTools( - db=db, +db = PostgresDb( + db_url="postgresql+psycopg://ai:ai@localhost:5532/ai", ) agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), - tools=[memory_tools], - markdown=True, + model=OpenAIChat(id="gpt-4o"), + tools=[MemoryTools(db=db, add_instructions=True)], + db=db, ) agent.print_response( - "My name is John Doe and I like to hike in the mountains on weekends. " - "I like to travel to new places and experience different cultures. " - "I am planning to travel to Africa in December. ", - user_id="john_doe@example.com", - stream=True + "I prefer vegetarian recipes and I'm allergic to nuts.", + user_id="user_123", ) - -# This won't use the session history, but instead will use the memory tools to get the memories -agent.print_response("What have you remembered about me?", stream=True, user_id="john_doe@example.com") ``` -See the [Memory Tools](/concepts/tools/reasoning_tools/memory-tools) documentation for more details. +**How it works:** + +1. Agent calls `think()`: "User is sharing dietary preferences. I should store this." +2. Agent calls `add_memory(memory="User prefers vegetarian recipes and is allergic to nuts", topics=["dietary_preferences", "allergies"])` +3. Agent calls `analyze()`: "Memory successfully stored with appropriate topics." +4. Agent responds to user confirming the information was saved + +### 4. WorkflowTools - Reasoning about Workflow Execution + +For executing and analyzing complex workflows. -## Workflow Tools +**What it provides:** +- `think()` - Plan workflow inputs and strategy +- `run_workflow()` - Execute a workflow with specific inputs +- `analyze()` - Evaluate workflow results -The Workflow Tools allow the Agent to execute a workflow and reason about the results. 
+**When to use:** -```python workflow_tools.py +- Multi-step automated processes +- Complex task orchestration +- When workflows need different inputs based on context +- A/B testing different workflow configurations + +**Example:** + +```python from agno.agent import Agent from agno.models.openai import OpenAIChat from agno.tools.workflow import WorkflowTools +from agno.workflow import Workflow +from agno.workflow.step import Step + +# Define a research workflow +research_workflow = Workflow( + name="research-workflow", + steps=[ + Step(name="search", agent=search_agent), + Step(name="summarize", agent=summary_agent), + Step(name="fact-check", agent=fact_check_agent), + ], +) -# Create your workflow -# ... +# Create agent with workflow tools +orchestrator = Agent( + model=OpenAIChat(id="gpt-4o"), + tools=[WorkflowTools(workflow=research_workflow, add_instructions=True)], +) -workflow_tools = WorkflowTools( - workflow=blog_post_workflow, +orchestrator.print_response( + "Research climate change impacts on agriculture", + stream=True, ) +``` + +**How it works:** + +1. Agent calls `think()`: "I need to run the research workflow with 'climate change agriculture' as input." +2. Agent calls `run_workflow(input_data="climate change impacts on agriculture")` +3. Workflow executes all steps (search → summarize → fact-check) +4. Agent calls `analyze()`: "Workflow completed successfully. All fact-checks passed." +5. Agent provides final synthesized answer + +## Common Pattern: Think → Act → Analyze + +All four toolkits follow the same reasoning cycle: + +1. **THINK** - Plan what to do, refine approach, brainstorm +2. **ACT** (Domain-Specific) + - ReasoningTools: Direct reasoning + - KnowledgeTools: search_knowledge() + - MemoryTools: get/add/update/delete_memory() + - WorkflowTools: run_workflow() +3. **ANALYZE** - Evaluate results, decide next action +4. **REPEAT** - Loop back to THINK if needed, or provide answer + +This mirrors how humans solve complex problems: we think before acting, evaluate results, and adjust our approach based on what we learn. + +## Choosing the Right Reasoning Toolkit + +| If you need to... | Use | Example | +| ------------------------------------ | --------------------- | ------------------------------------------------------ | +| Solve logic puzzles or math problems | `ReasoningTools` | "Solve: If x² + 5x + 6 = 0, what is x?" 
| +| Search through documents | `KnowledgeTools` | "Find all mentions of user authentication in our docs" | +| Remember user preferences | `MemoryTools` | "Remember that I'm allergic to shellfish" | +| Orchestrate complex multi-step tasks | `WorkflowTools` | "Research, write, and fact-check an article" | +| Combine multiple domains | Use multiple toolkits | See examples for more patterns | + +## Combining Multiple Reasoning Toolkits + +You can use multiple reasoning toolkits together for powerful multi-domain reasoning: + +```python +from agno.agent import Agent +from agno.knowledge.pdf import PDFKnowledgeBase +from agno.models.openai import OpenAIChat +from agno.tools.knowledge import KnowledgeTools +from agno.tools.memory import MemoryTools +from agno.tools.reasoning import ReasoningTools agent = Agent( - model=OpenAIChat(id="gpt-5-mini"), - tools=[workflow_tools], - markdown=True, + model=OpenAIChat(id="gpt-4o"), + tools=[ + ReasoningTools(add_instructions=True), + KnowledgeTools(knowledge=my_knowledge, add_instructions=True), + MemoryTools(db=my_db, add_instructions=True), + ], + instructions="Use reasoning for planning, knowledge for facts, and memory for personalization", +) +``` + +## Configuration Options + +### Enable/Disable Specific Tools + +You can control which reasoning tools are available: + +```python +# Only thinking, no analysis +ReasoningTools(enable_think=True, enable_analyze=False) + +# Only analysis, no thinking +ReasoningTools(enable_think=False, enable_analyze=True) + +# Both (default) +ReasoningTools(enable_think=True, enable_analyze=True) + +# Shorthand for both +ReasoningTools() +``` + +### Add Instructions Automatically + +The `add_instructions` parameter automatically includes detailed reasoning guidelines in your agent's instructions: + +```python +ReasoningTools(add_instructions=True) +``` + +This adds comprehensive instructions explaining: + +- When to use `think()` vs `analyze()` +- The Think → Act → Analyze workflow +- How to determine `next_action` values +- Best practices for reasoning + +**When to use:** Almost always! This ensures the agent understands how to use the tools effectively. + +### Add Few-Shot Examples + +Include example reasoning workflows to guide the agent: + +```python +ReasoningTools(add_instructions=True, add_few_shot=True) +``` + +This adds practical examples showing: + +- Simple fact retrieval with reasoning +- Multi-step information gathering +- How to use tools in parallel +- When to set `next_action` to different values + +**When to use:** + +- When working with less capable models +- For complex reasoning tasks +- When you want consistent reasoning patterns + +### Custom Instructions + +Provide your own custom instructions for specialized reasoning: + +```python +custom_instructions = """ +Use the think and analyze tools for rigorous scientific reasoning: +- Always think before making claims +- Cite evidence in your analysis +- Acknowledge uncertainty +- Consider alternative hypotheses +""" + +ReasoningTools( + instructions=custom_instructions, + add_instructions=False # Don't include default instructions +) +``` + +### Custom Few-Shot Examples + +Provide domain-specific examples: + +```python +medical_examples = """ +Example: Medical Diagnosis + +User: Patient has fever and cough for 3 days. + +Agent thinks: +think( + title="Gather Symptoms", + thought="Need to collect all symptoms and their duration. Fever and cough suggest respiratory infection. 
Should check for other symptoms.", + action="Ask about additional symptoms", + confidence=0.9 ) +""" -agent.print_response("Create a blog post on the topic: AI trends in 2024", stream=True) +ReasoningTools( + add_instructions=True, + add_few_shot=True, + few_shot_examples=medical_examples # Your custom examples +) ``` -See the [Workflow Tools](/concepts/tools/reasoning_tools/workflow-tools) documentation for more details. +## Monitoring Your Agent's Thinking + +Use `show_full_reasoning=True` and `stream_intermediate_steps=True` to display reasoning steps in real-time. See [Display Options in Reasoning Agents](/concepts/reasoning/reasoning-agents#display-options) for details and [Reasoning Reference](/reference/reasoning/reasoning#display-parameters) for programmatic access to reasoning steps. + +## Reasoning Tools vs. Reasoning Agents + +Both approaches add reasoning to any model, but they differ in control and automation: + +| Aspect | Reasoning Tools | Reasoning Agents | +| ---------------- | ---------------------------------------- | -------------------------------------------------- | +| **Activation** | Agent decides when to use `think()` | Automatic on every request | +| **Control** | Explicit tool calls | Automated loop | +| **Transparency** | See every `think()` and `analyze()` call | See structured reasoning steps | +| **Workflow** | Agent-driven (flexible) | Framework-driven (structured) | +| **Best for** | Research, analysis, exploratory tasks | Complex multi-step problems with defined structure | + +**Rule of thumb:** + +- Use **Reasoning Tools** when you want the agent to control its own reasoning process +- Use **Reasoning Agents** when you want guaranteed systematic thinking for every request -## Developer Resources -- View [Agents with Reasoning Tools Examples](/examples/concepts/reasoning/tools) -- View [Agents with Reasoning Tools Cookbook](https://github.com/agno-agi/agno/tree/main/cookbook/reasoning/tools) -- View [Teams with Reasoning Tools Examples](/examples/concepts/reasoning/teams/reasoning-finance-team) -- View [Teams with Reasoning Tools Cookbook](https://github.com/agno-agi/agno/tree/main/cookbook/reasoning/teams) diff --git a/docs.json b/docs.json index 7f735cae..8befeaed 100644 --- a/docs.json +++ b/docs.json @@ -363,8 +363,8 @@ "pages": [ "concepts/reasoning/overview", "concepts/reasoning/reasoning-models", - "concepts/reasoning/reasoning-tools", - "concepts/reasoning/reasoning-agents" + "concepts/reasoning/reasoning-agents", + "concepts/reasoning/reasoning-tools" ] }, { @@ -2982,6 +2982,10 @@ "group": "Memory", "pages": ["reference/memory/memory"] }, + { + "group": "Reasoning", + "pages": ["reference/reasoning/reasoning"] + }, { "group": "Storage", "pages": [ diff --git a/reference/reasoning/reasoning.mdx b/reference/reasoning/reasoning.mdx new file mode 100644 index 00000000..304eb879 --- /dev/null +++ b/reference/reasoning/reasoning.mdx @@ -0,0 +1,185 @@ +--- +title: Reasoning Reference +sidebarTitle: Reasoning +--- + +This reference covers the core data structures and events used across all reasoning approaches in Agno (Reasoning Models, Reasoning Tools, and Reasoning Agents). + +## ReasoningStep + +The `ReasoningStep` class represents a single step in the reasoning process, whether generated by Reasoning Tools, Reasoning Agents, or native reasoning models. 
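+
+A minimal construction sketch (assuming the `agno.reasoning.step` import path; the field values are illustrative):
+
+```python
+from agno.reasoning.step import NextAction, ReasoningStep
+
+step = ReasoningStep(
+    title="Compare decimal values",
+    reasoning="9.9 is equivalent to 9.90, and 9.90 > 9.11.",
+    action="Verify by subtracting: 9.90 - 9.11 = 0.79",
+    result="The difference is positive, so 9.9 is larger",
+    next_action=NextAction.VALIDATE,  # request a validation pass before the final answer
+    confidence=0.95,
+)
+```
+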
+ +### Attributes + +| Attribute | Type | Default | Description | +| ------------- | -------------------------- | --------------------- | --------------------------------------------------------- | +| `title` | `Optional[str]` | `None` | A concise title for this reasoning step | +| `reasoning` | `Optional[str]` | `None` | The detailed thought process or reasoning for this step | +| `action` | `Optional[str]` | `None` | The action to be taken based on this reasoning | +| `result` | `Optional[str]` | `None` | The outcome or result of executing the action | +| `next_action` | `NextAction` | `NextAction.CONTINUE` | What to do next (continue, validate, final_answer, reset) | +| `confidence` | `float` | `0.8` | Confidence level for this step (0.0 to 1.0) | +| `metadata` | `Optional[Dict[str, Any]]` | `None` | Additional metadata for this step | + +### NextAction Enum + +The `NextAction` enum defines possible next steps in the reasoning process: + +| Value | Description | +| -------------- | -------------------------------------------------------- | +| `CONTINUE` | Continue with more reasoning steps | +| `VALIDATE` | Validate the current solution before finalizing | +| `FINAL_ANSWER` | Ready to provide the final answer | +| `RESET` | Reset and restart the reasoning process (error detected) | + +## ReasoningSteps + +The `ReasoningSteps` class is a container for multiple reasoning steps, used as the structured output for Reasoning Agents. + +### Attributes + +| Attribute | Type | Default | Description | +| ----------------- | -------------------------- | ------- | ----------------------------------------------- | +| `reasoning_steps` | `List[ReasoningStep]` | `[]` | List of reasoning steps taken | +| `metadata` | `Optional[Dict[str, Any]]` | `None` | Additional metadata about the reasoning process | + +## ReasoningTools + +The `ReasoningTools` toolkit provides explicit tools for structured thinking. + +### Constructor Parameters + +| Parameter | Type | Default | Description | +| ------------------- | --------------- | ------- | --------------------------------------- | +| `enable_think` | `bool` | `True` | Enable the `think()` tool | +| `enable_analyze` | `bool` | `True` | Enable the `analyze()` tool | +| `all` | `bool` | `False` | Legacy parameter to enable both tools | +| `instructions` | `Optional[str]` | `None` | Custom instructions for using the tools | +| `add_instructions` | `bool` | `False` | Add default instructions to agent | +| `add_few_shot` | `bool` | `False` | Add few-shot examples to instructions | +| `few_shot_examples` | `Optional[str]` | `None` | Custom few-shot examples | + +### Methods + +#### think() + +Use as a scratchpad to reason about problems step-by-step. + +**Parameters:** + +| Parameter | Type | Default | Description | +| --------------- | ---------------- | -------- | ---------------------------------------------- | +| `session_state` | `Dict[str, Any]` | Required | Agent's session state (automatically provided) | +| `title` | `str` | Required | Concise title for this thinking step | +| `thought` | `str` | Required | Detailed reasoning for this step | +| `action` | `Optional[str]` | `None` | What you'll do based on this thought | +| `confidence` | `float` | `0.8` | Confidence level (0.0 to 1.0) | + +**Returns:** `str` - Formatted list of all reasoning steps taken so far + +#### analyze() + +Analyze results from previous actions and determine next steps. 
+
+**Parameters:**
+
+| Parameter       | Type             | Default      | Description                                                |
+| --------------- | ---------------- | ------------ | ---------------------------------------------------------- |
+| `session_state` | `Dict[str, Any]` | Required     | Agent's session state (automatically provided)             |
+| `title`         | `str`            | Required     | Concise title for this analysis                            |
+| `result`        | `str`            | Required     | Outcome of the previous action                             |
+| `analysis`      | `str`            | Required     | Your evaluation of the results                             |
+| `next_action`   | `str`            | `"continue"` | What to do next: "continue", "validate", or "final_answer" |
+| `confidence`    | `float`          | `0.8`        | Confidence level (0.0 to 1.0)                              |
+
+**Returns:** `str` - Formatted list of all reasoning steps taken so far
+
+## Reasoning Events
+
+Events emitted during reasoning processes when using Reasoning Agents or Reasoning Models.
+
+### Reasoning Event Types
+
+| Event Type           | Description                                  |
+| -------------------- | -------------------------------------------- |
+| `ReasoningStarted`   | Indicates the start of the reasoning process |
+| `ReasoningStep`      | Contains a single reasoning step             |
+| `ReasoningCompleted` | Signals completion of the reasoning process  |
+
+### ReasoningStartedEvent
+
+Emitted when reasoning begins.
+
+**Attributes:**
+
+| Attribute    | Type            | Default              | Description                      |
+| ------------ | --------------- | -------------------- | -------------------------------- |
+| `event`      | `str`           | `"ReasoningStarted"` | Event type                       |
+| `run_id`     | `Optional[str]` | `None`               | ID of the current run            |
+| `agent_id`   | `Optional[str]` | `None`               | ID of the reasoning agent        |
+| `created_at` | `int`           | Current timestamp    | Unix timestamp of event creation |
+
+### ReasoningStepEvent
+
+Emitted for each reasoning step during the process.
+
+**Attributes:**
+
+| Attribute           | Type            | Default           | Description                              |
+| ------------------- | --------------- | ----------------- | ---------------------------------------- |
+| `event`             | `str`           | `"ReasoningStep"` | Event type                               |
+| `content`           | `Optional[Any]` | `None`            | Content of the reasoning step            |
+| `content_type`      | `str`           | `"str"`           | Type of the content                      |
+| `reasoning_content` | `str`           | `""`              | Detailed reasoning content for this step |
+| `run_id`            | `Optional[str]` | `None`            | ID of the current run                    |
+| `agent_id`          | `Optional[str]` | `None`            | ID of the reasoning agent                |
+| `step_number`       | `Optional[int]` | `None`            | The sequential number of this step       |
+| `created_at`        | `int`           | Current timestamp | Unix timestamp of event creation         |
+
+### ReasoningCompletedEvent
+
+Emitted when reasoning finishes.
+
+**Attributes:**
+
+| Attribute            | Type                            | Default                | Description                         |
+| -------------------- | ------------------------------- | ---------------------- | ----------------------------------- |
+| `event`              | `str`                           | `"ReasoningCompleted"` | Event type                          |
+| `content`            | `Optional[Any]`                 | `None`                 | Final reasoning content             |
+| `content_type`       | `str`                           | `"str"`                | Type of the content                 |
+| `reasoning_steps`    | `Optional[List[ReasoningStep]]` | `None`                 | All reasoning steps taken           |
+| `reasoning_messages` | `Optional[List[Message]]`       | `None`                 | Messages from the reasoning process |
+| `run_id`             | `Optional[str]`                 | `None`                 | ID of the current run               |
+| `agent_id`           | `Optional[str]`                 | `None`                 | ID of the reasoning agent           |
+| `created_at`         | `int`                           | Current timestamp      | Unix timestamp of event creation    |
+
+## Agent Configuration for Reasoning
+
+### Reasoning Agent Parameters
+
+Parameters for configuring Reasoning Agents (`reasoning=True`):
+
+| Parameter             | Type              | Default | Description                                                  |
+| --------------------- | ----------------- | ------- | ------------------------------------------------------------ |
+| `reasoning`           | `bool`            | `False` | Enable reasoning agent mode                                  |
+| `reasoning_model`     | `Optional[Model]` | `None`  | Separate model for reasoning (if different from main model)  |
+| `reasoning_agent`     | `Optional[Agent]` | `None`  | Custom reasoning agent instance                              |
+| `reasoning_min_steps` | `int`             | `1`     | Minimum number of reasoning steps                            |
+| `reasoning_max_steps` | `int`             | `10`    | Maximum number of reasoning steps                            |
+
+### Display Parameters
+
+Parameters for showing reasoning during execution:
+
+| Parameter                   | Type   | Default | Description                                  |
+| --------------------------- | ------ | ------- | -------------------------------------------- |
+| `show_full_reasoning`       | `bool` | `False` | Display complete reasoning process in output |
+| `stream_intermediate_steps` | `bool` | `False` | Stream each reasoning step in real-time      |
+
+## See Also
+
+- [Reasoning Overview](/concepts/reasoning/overview) - Introduction to reasoning approaches
+- [Reasoning Agents Guide](/concepts/reasoning/reasoning-agents) - Using Reasoning Agents
+- [Reasoning Tools Guide](/concepts/reasoning/reasoning-tools) - Using Reasoning Tools
+- [Reasoning Models Guide](/concepts/reasoning/reasoning-models) - Using native reasoning models