Reflection Agent: Self-Improving AI Through Iterative Critique

AI · Agents · LLM · Design Patterns
Learn the Reflection Pattern: an agent that evaluates and improves its own output through iterative self-critique loops.
Author

Ousmane Cissé

Published

January 10, 2026

⚡ Quick Summary

The Reflection Pattern is the simplest yet surprisingly effective agentic design pattern. It allows an AI agent to evaluate its own output and iteratively improve it, like a peer-review loop built into the system.

Introduction to Reflection Agent

A Reflection Agent is an agent that thinks about its own outputs or decisions, evaluates their quality or correctness, and then revises or improves them. This is inspired by metacognition—thinking about thinking. The agent uses its own generated critique or structured prompts to self-evaluate and refine its responses.

Key Characteristics

  • Generates initial output

  • Critiques or reflects on that output

  • Refines or rewrites based on the reflection

  • Often operates in a two-pass loop: generate → reflect → improve

  • Can be part of self-correcting, ReAct, or Chain-of-Thought workflows

Analogy

Think of a student writing an essay:

  • Writes the first draft

  • Reads it and thinks, “Hmm, this part is vague.”

  • Revises that section for clarity

That’s reflection in action.

Simple Python Example

def generate_summary(text):
    # First pass: naive summary that truncates to 50 characters
    return f"Summary: {text[:50]}..."

def reflect_on_summary(summary):
    # Critique pass: a trailing ellipsis signals truncation
    if "..." in summary:
        return "This summary seems incomplete or too short."
    return "Summary looks good."

def improve_summary(text, reflection):
    # Refinement pass: widen the window when the critique flags truncation
    if "incomplete" in reflection:
        return f"Improved Summary: {text[:100]}..."
    return generate_summary(text)

# Input text
text = "Artificial Intelligence (AI) enables machines to learn from data, adapt to new inputs, and perform tasks that typically require human intelligence."

# Run reflection agent
summary = generate_summary(text)
reflection = reflect_on_summary(summary)
improved = improve_summary(text, reflection)

print(improved)

Output:

Improved Summary: Artificial Intelligence (AI) enables machines to learn from data, adapt to new inputs, and perform t...

Common Reflection Loop

  1. Task: Generate a response or solution

  2. Self-critique: Evaluate it (e.g., clarity, correctness, completeness)

  3. Refine: Use critique to revise the original

This loop can run once or multiple times.
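The three steps above can be wired into an actual loop. Here is a minimal sketch in the spirit of the earlier example; the stopping check and the `max_rounds` cap are illustrative choices, not part of the original pattern:

```python
def reflection_loop(text, max_rounds=3):
    """Generate -> reflect -> refine until the critique passes or rounds run out."""
    def generate(t, width):
        # Task: produce a draft summary of the given width
        return f"Summary: {t[:width]}..."

    def critique(summary, source):
        # Self-critique: a toy completeness check that flags the summary
        # if it covers less than half of the source text
        if len(summary) < len(source) // 2:
            return "incomplete"
        return "ok"

    width = 50
    draft = generate(text, width)
    for _ in range(max_rounds):
        if critique(draft, text) == "ok":
            break
        width *= 2          # Refine: widen the summary window and retry
        draft = generate(text, width)
    return draft

text = ("Artificial Intelligence (AI) enables machines to learn from data, "
        "adapt to new inputs, and perform tasks that typically require human intelligence.")
print(reflection_loop(text))
```

The loop terminates either when the critique passes or when the round budget runs out, which guards against the agent revising forever.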

LLM-Based Example (High-Level)

Prompt format:

Step 1: Write a short blog intro

Step 2: Critique it (Was it engaging? Too vague?)

Step 3: Rewrite it using the feedback

You can use LangChain or CrewAI to chain these steps into a multi-agent or single-agent pipeline.
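The three prompt steps can be sketched as a plain-Python chain. The `call_llm` function below is a stand-in, not a real API; in practice you would replace it with whatever client LangChain, CrewAI, or your model provider gives you:

```python
def call_llm(prompt):
    # Placeholder for a real model call. It echoes canned strings so the
    # pipeline below is runnable without any external dependency.
    if "Critique" in prompt:
        return "The intro is too vague; add a concrete hook."
    if "Rewrite" in prompt:
        return "Rewritten intro with a concrete hook."
    return "First-draft blog intro."

def reflect_pipeline(topic):
    # Step 1: write a short blog intro
    draft = call_llm(f"Write a short blog intro about {topic}.")
    # Step 2: critique it
    critique = call_llm(f"Critique this intro. Was it engaging? Too vague?\n\n{draft}")
    # Step 3: rewrite it using the feedback
    final = call_llm(f"Rewrite the intro using this feedback:\n\n{critique}\n\nIntro:\n{draft}")
    return draft, critique, final
```

The same structure works as a single-agent pipeline (one model, three prompts) or a multi-agent one (a writer agent and a critic agent exchanging messages).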

Use Cases

  • Self-revising writing agents (blogs, summaries, essays)

  • Code improvement agents (e.g., Copilot → Critique → Fix)

  • Research agents that review and refine findings

  • Teaching agents that reflect on student misconceptions

  • Email agents that rewrite for tone, clarity, and grammar

Key Prompt Pattern for LLMs

Task: Summarize the following text.

Reflection: Is the summary accurate, complete, and clear?

Improved Output: Rewrite the summary based on your reflection.
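The pattern above can be packaged as a reusable template string; the variable name and placeholder field here are illustrative:

```python
# Reusable template for the reflection prompt pattern shown above
REFLECTION_PROMPT = """\
Task: Summarize the following text.

{text}

Reflection: Is the summary accurate, complete, and clear?

Improved Output: Rewrite the summary based on your reflection.
"""

prompt = REFLECTION_PROMPT.format(
    text="AI enables machines to learn from data."
)
print(prompt)
```

Sending this single prompt asks the model to perform all three stages (generate, reflect, refine) in one completion, which is the cheapest form of the pattern.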

Projects

GitHub

  • Self-Improving Summary Generator
  • Essay Evaluator Agent
  • Code Reviewer Agent
  • Story Improver Agent
  • Idea Evaluator
Series: Agentic Design Patterns