Technology
CoT (Chain-of-Thought)
A prompting technique that elicits intermediate reasoning steps from LLMs before they answer, improving performance on complex logic and math problems.
Jason Wei and colleagues at Google Brain introduced chain-of-thought prompting in 2022. The technique prompts an LLM to generate a sequence of intermediate reasoning steps before stating a conclusion, closing much of the performance gap on multi-step tasks: on the GSM8K math benchmark, PaLM 540B jumped from roughly 18 percent to 57 percent accuracy with CoT prompting. A zero-shot variant, introduced by Kojima et al. the same year, triggers the behavior with the phrase "Let's think step by step." Either way, the model's reasoning becomes transparent and verifiable, and CoT remains a standard technique for high-accuracy workflows with models such as GPT-4 and Claude 3.5 Sonnet.
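The few-shot mechanics can be sketched as a plain prompt builder. The helper name and wiring below are illustrative rather than any specific API; the tennis-ball exemplar is the worked example used in Wei et al.'s paper, and the closing cue is the Kojima et al. zero-shot trigger.

```python
# Sketch of chain-of-thought prompt construction: the exemplar shows
# worked reasoning steps so the model imitates them on the new question.
# Helper names are illustrative, not tied to any particular SDK.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar, then cue step-by-step reasoning."""
    return f"{COT_EXEMPLAR}Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. They used 20 to make lunch and "
    "bought 6 more. How many apples do they have?"
)
```

The resulting string would be sent as the user prompt to whichever model is in use; the model then continues after "Let's think step by step." with its own reasoning chain.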
5 projects · 5 cities
Recent Talks & Demos
OpenSymbolicAI: Python Execution Plans · Seattle · Mar 9 · OpenSymbolicAI, Python
Words to World: AI Models · San Diego · Feb 26 · Unreal Engine 5, PyTorch
InkyCards: Stunning UI in 7 Days · Cologne · Jan 21 · Claude Code, Gemini
VisionGuard: Remote AI Vision Testing · Hamburg · Aug 14 · React, Python
Mistral 7B On-Premise Wi-Fi Agent · Medellín · Jun 26 · Gemini, Mistral 7B