In-Context Learning
LLMs adapt to a new task by leveraging examples within the input prompt (few-shot prompting), eliminating costly parameter updates.
In-Context Learning (ICL) is an emergent Large Language Model (LLM) capability: it allows models like GPT-3 to perform novel tasks based on demonstrations provided directly in the prompt (e.g., zero-, one-, or few-shot examples). The core mechanism involves conditioning the pre-trained model on this temporary context, which guides the output without requiring backpropagation or model weight updates. This approach delivers rapid, flexible task adaptation, significantly reducing the time and computational resources associated with traditional fine-tuning.
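The mechanism can be made concrete with a small sketch of few-shot prompt construction. The task, labels, and helper function below are hypothetical illustrations (the model call itself is omitted): the point is that all task adaptation lives in the assembled prompt text, with no gradient updates to the model.

```python
# Hypothetical sketch: assembling a few-shot prompt for in-context learning.
# The sentiment task and demonstrations are invented for illustration;
# the LLM call is omitted -- adaptation happens purely via the prompt.

def build_few_shot_prompt(demonstrations, query, instruction=""):
    """Concatenate labeled demonstrations and a new query into one prompt."""
    parts = [instruction] if instruction else []
    for text, label in demonstrations:
        parts.append(f"Input: {text}\nLabel: {label}")
    # The final example is left unlabeled; the model completes the label.
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)

demos = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(
    demos,
    "A stunning, heartfelt performance.",
    instruction="Classify the sentiment of each input.",
)
print(prompt)
```

Swapping in a different set of demonstrations retargets the same frozen model to a new task, which is what makes this approach cheaper and faster than fine-tuning.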