
ReAct: Synergizing Reasoning and Acting in Language Models

Yao et al., 2022. The paper that shows how LLMs solve complex tasks by alternating between thinking and acting.

Reading time: 10 min · Last updated: March 2026
At a Glance

ReAct (Reasoning + Acting) is a prompting pattern where an LLM alternates between thinking (reasoning) and doing (acting). Instead of just generating text, the model can call tools, interpret results, and adapt its approach. ReAct is the foundation for most current AI agent frameworks.

The Problem: Thinking Alone Isn't Enough

Chain-of-Thought (CoT) prompting showed that LLMs perform better when they write out their reasoning step by step. But pure thinking has limits: the model cannot access current information, perform calculations, or query external systems.

Conversely, there are systems that let LLMs use tools (acting) without explicit reasoning. These often act blindly: no plan, no error analysis, no strategy adjustment.

The ReAct Idea: Thinking AND Acting

ReAct combines both in an alternating loop:

  • Thought: The model thinks: it analyzes the current situation, plans the next step, and interprets previous results.
  • Action: The model executes a concrete action, e.g., a web search, a calculation, or an API call.
  • Observation: The result of the action is returned to the model and serves as input for the next thought.

This cycle repeats until the task is solved.
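The loop above can be sketched in a few lines of Python. Everything here is illustrative: the "LLM" is a scripted stub that replays a fixed trace, and `Search` is a toy lookup table. A real implementation would call a model API and real tools.

```python
import re

# Stub LLM: replays a scripted ReAct trace, one step per call.
# This trace is a hypothetical example, not output from a real model.
SCRIPTED_STEPS = [
    "Thought: I need the founding year of Washington, D.C.\n"
    "Action: Search[Washington D.C. founding year]",
    "Thought: I have the answer.\nAnswer: 1790",
]

def llm(prompt: str, step: int) -> str:
    return SCRIPTED_STEPS[step]

# Toy tool: a real agent would register web search, calculators, APIs, etc.
def search(query: str) -> str:
    facts = {
        "Washington D.C. founding year":
            "Washington, D.C. was founded on July 16, 1790."
    }
    return facts.get(query, "No results found.")

ACTION_RE = re.compile(r"Action: (\w+)\[(.*)\]")
ANSWER_RE = re.compile(r"Answer: (.*)")

def react_loop(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for step in range(max_steps):
        output = llm(prompt, step)
        prompt += output + "\n"
        answer = ANSWER_RE.search(output)
        if answer:                        # final answer produced: stop the loop
            return answer.group(1)
        action = ACTION_RE.search(output)
        if action and action.group(1) == "Search":
            observation = search(action.group(2))
            prompt += f"Observation: {observation}\n"  # feed result back in
    return "No answer found."

print(react_loop("In what year was Washington, D.C. founded?"))  # 1790
```

The key design point is the last line of the loop body: the Observation is appended to the prompt, so the model's next Thought can react to what the tool actually returned.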


ReAct in Action: An Example

Question: "In what year was the capital of the country where the Transformer was invented founded?"

Thought 1: I need to find out where the Transformer was invented. The paper came from Google Brain/Google Research.
Action 1: Search["Attention Is All You Need paper authors location"]
Observation 1: The paper was authored by researchers at Google in Mountain View, USA, and the University of Toronto.
Thought 2: The authors were at Google in the USA. The capital of the USA is Washington, D.C. I need to find the founding year.
Action 2: Search["Washington D.C. founding year"]
Observation 2: Washington, D.C. was founded on July 16, 1790.
Thought 3: I have the answer: 1790.
Answer: 1790
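One way to elicit a trace like the one above is to describe the format directly in the prompt. The template below is our own sketch; the exact wording is an assumption, not the prompt from the paper or any particular framework.

```python
# Hypothetical ReAct prompt template (wording is illustrative only).
REACT_PROMPT = """Answer the question by interleaving Thought, Action, and Observation steps.

Available actions:
  Search[query] -- look up information

Use this format:
Thought: <your reasoning about what to do next>
Action: Search[<query>]
Observation: <result, inserted by the system after each action>
... (repeat Thought/Action/Observation as needed)
Answer: <final answer>

Question: {question}"""

prompt = REACT_PROMPT.format(question=(
    "In what year was the capital of the country "
    "where the Transformer was invented founded?"
))
print(prompt)
```

The runtime parses each `Action:` line, executes the tool, and appends the `Observation:` line itself; the model never fills that line in on its own.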

Why ReAct Works Better

  • Transparency: The thought steps make the reasoning process traceable. You can see why the model chose a particular action.
  • Error correction: When an action returns an unexpected result, the model can adjust its approach instead of stubbornly sticking to a wrong strategy.
  • Grounding: Through actions (search, computation), answers are based on real data rather than hallucinations.
  • Flexibility: The pattern works with any tools: web search, databases, APIs, code execution.

ReAct in Today's Agent Frameworks

ReAct underlies most of today's AI agent frameworks:

  • LangChain Agents: Implement the ReAct loop as the default agent type
  • Claude Tool Use: Anthropic's function calling follows the Thought-Action-Observation pattern
  • AutoGPT / CrewAI: Multi-agent systems where each agent internally uses ReAct
  • Claude Code: Also uses the ReAct pattern for code analysis and generation

Sources

  • Yao, S. et al. (2022). "ReAct: Synergizing Reasoning and Acting in Language Models." arXiv:2210.03629
