Reflection Agent: Self-Improving AI Through Iterative Critique
AI
Agents
LLM
Design Patterns
Learn the Reflection Pattern: an agent that evaluates and improves its own output through iterative self-critique loops.
Author
Ousmane Cissé
Published
January 10, 2026
⚡ Quick Summary
The Reflection Pattern is the simplest yet surprisingly effective agentic design pattern. It allows an AI agent to evaluate its own output and iteratively improve it — like having a built-in peer-review loop that can run as many times as needed.
Introduction to Reflection Agents
A Reflection Agent is an agent that thinks about its own outputs or decisions, evaluates their quality or correctness, and then revises or improves them. This is inspired by metacognition—thinking about thinking. The agent uses its own generated critique or structured prompts to self-evaluate and refine its responses.
Key Characteristics
Generates initial output
Critiques or reflects on that output
Refines or rewrites based on the reflection
Often operates in a loop: generate → reflect → improve
Can be part of self-correcting, ReAct, or Chain-of-Thought workflows
Analogy
Think of a student writing an essay:
Writes the first draft
Reads it and thinks, “Hmm, this part is vague.”
Revises that section for clarity
That’s reflection in action.
Simple Python Example
Code
def generate_summary(text):
    return f"Summary: {text[:50]}..."

def reflect_on_summary(summary):
    if "..." in summary:
        return "This summary seems incomplete or too short."
    return "Summary looks good."

def improve_summary(text, reflection):
    if "incomplete" in reflection:
        return f"Improved Summary: {text[:100]}..."
    return generate_summary(text)

# Input text
text = "Artificial Intelligence (AI) enables machines to learn from data, adapt to new inputs, and perform tasks that typically require human intelligence."

# Run reflection agent
summary = generate_summary(text)
reflection = reflect_on_summary(summary)
improved = improve_summary(text, reflection)
print(improved)
Improved Summary: Artificial Intelligence (AI) enables machines to learn from data, adapt to new inputs, and perform t...
Common Reflection Loop
Task: Generate a response or solution
Self-critique: Evaluate it (e.g., clarity, correctness, completeness)
Refine: Use critique to revise the original
This loop can run once or multiple times.
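The multi-iteration version of the loop can be sketched in plain Python. The `critique` and `refine` helpers below are toy heuristics invented for illustration (not from the article); in practice each step would be an LLM call.

```python
def critique(text):
    """Toy self-critique: return a list of flagged issues (illustrative heuristics)."""
    issues = []
    if len(text) < 40:
        issues.append("too short")
    if "..." in text:
        issues.append("incomplete")
    return issues

def refine(text, issues):
    """Toy refinement: revise the draft to address each flagged issue."""
    if "incomplete" in issues:
        text = text.rstrip(".") + " (expanded with full details)."
    if "too short" in issues:
        text = text + " Additional context added for completeness."
    return text

def reflection_loop(draft, max_iterations=3):
    """Run reflect -> refine until the critique passes or the iteration cap is hit."""
    for _ in range(max_iterations):
        issues = critique(draft)
        if not issues:
            break  # critique passed: stop early
        draft = refine(draft, issues)
    return draft

print(reflection_loop("Short summary..."))
```

The iteration cap matters: without it, an agent that never satisfies its own critique would loop forever.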
LLM-Based Example (High-Level)
Prompt format:
Step 1: Write a short blog intro
Step 2: Critique it (Was it engaging? Too vague?)
Step 3: Rewrite it using the feedback
You can use LangChain or CrewAI to chain these steps into a multi-agent or single-agent pipeline.
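The three-step prompt chain above can also be sketched framework-free. The `call_llm` function here is a stub standing in for a real model call (a hypothetical helper, not part of any library); swap in your provider's client to make it real.

```python
def call_llm(prompt):
    """Stub standing in for a real LLM API call; returns canned text for the demo."""
    if prompt.startswith("Write"):
        return "AI agents are software that acts autonomously."
    if prompt.startswith("Critique"):
        return "Too vague: explain what 'acts autonomously' actually means."
    return ("AI agents are programs that plan, call tools, and adapt their "
            "behavior without step-by-step human instructions.")

def reflect_pipeline(topic):
    """Chain the three steps: draft, self-critique, rewrite using the feedback."""
    draft = call_llm(f"Write a short blog intro about {topic}.")
    feedback = call_llm(f"Critique this intro for clarity and engagement:\n{draft}")
    revised = call_llm(f"Rewrite the intro using this feedback:\n{feedback}\nOriginal:\n{draft}")
    return revised

print(reflect_pipeline("AI agents"))
```

Each step only threads text between prompts, which is why frameworks like LangChain or CrewAI can express the same pipeline as a chain of agents or runnables.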