AI Self-critique
AI Self-critique is an iterative process in which a Large Language Model (LLM) evaluates its own initial output, identifies flaws, and revises the response for improved accuracy and quality.
This technique, often called the Reflection Pattern and championed by Andrew Ng, lets models such as OpenAI's GPT-4 and xAI's Grok 3 systematically review and refine their own work. The process has three steps: an initial generation, a self-reflection (critique) phase, and a final refinement, mimicking human iterative thinking. Self-correction can boost performance substantially: on complex tasks, results on benchmarks such as MMLU have reportedly improved by up to 15%, and Google has reported a 20% clarity improvement in AI-generated reports. Self-critique is a core component of advanced prompt engineering, agentic workflows, and alignment methods such as Constitutional AI, yielding more reliable and precise outputs across coding, analysis, and content creation.
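The three-step loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's API: `call_llm` is a hypothetical stand-in for a real chat-completion client, stubbed here with canned responses so the control flow is runnable end to end.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical model call; in practice, replace with a real LLM client.
    # Canned responses simulate one round of critique followed by approval.
    if prompt.startswith("CRITIQUE:"):
        return "The draft omits edge cases." if "v1" in prompt else "NO ISSUES"
    if prompt.startswith("REFINE:"):
        return "v2: answer covering edge cases"
    return "v1: first-draft answer"

def self_critique(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(task)  # step 1: initial generation
    for _ in range(max_rounds):
        critique = call_llm(f"CRITIQUE: {draft}")  # step 2: self-reflection
        if critique == "NO ISSUES":
            break  # stop once the model finds no remaining flaws
        # step 3: refinement, feeding the critique back into the model
        draft = call_llm(f"REFINE: {draft}\nCritique: {critique}")
    return draft

print(self_critique("Explain binary search."))
```

The `max_rounds` cap matters in practice: without it, a model that keeps finding new flaws in its own output would loop indefinitely.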