GPT-3
A 175-billion-parameter autoregressive language model that handles complex tasks through few-shot learning.
OpenAI released GPT-3 in 2020: a transformer-based autoregressive model trained on roughly 570GB of filtered text. Its 175 billion parameters let it perform diverse tasks, from writing Python code to logical reasoning, using only natural language prompts. By removing the need for task-specific fine-tuning, this architecture laid the groundwork for tools such as GitHub Copilot (built on Codex, a GPT-3 descendant) and the initial ChatGPT release.
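The few-shot pattern described above can be sketched in a few lines: demonstration pairs are embedded directly in the prompt, and the model infers the task from them with no weight updates. The translation examples and prompt format below are illustrative assumptions, not taken from the original page.

```python
# Minimal sketch of few-shot prompting: the task (English-to-French
# translation here) is conveyed entirely by in-context examples.
def build_few_shot_prompt(examples, query):
    """Assemble demonstration pairs plus a new query into a single
    completion-style prompt string."""
    lines = []
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model completes from here
    return "\n".join(lines)

examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
prompt = build_few_shot_prompt(examples, "peppermint")
print(prompt)
```

The resulting string would be sent to a completions-style endpoint as the prompt; the model's continuation after the final "French:" is taken as the answer.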
391 projects · 79 cities
Related technologies
GPT-4
OpenAI Realtime API
Recent Talks & Demos
Showing 121-144 of 391
Unsupervised Categorization
Houston
Sep 9
PyRIT: LLM Prompt Hacking Tool
Miami
Sep 3
AI Slop Factory: Wikipedia to TikTok
London
Aug 28
Zine: MCP Knowledge Curation
Seattle
Aug 27
Bestie: Center of Every Group
Seattle
Aug 27
CrewAI: Automated Book Writing
Boston
Aug 25
PROTOSTAR: Scaling Alert Processing
Boston
Aug 25
LLM Safety: Model vs Prompt
Dubai
Aug 23
YouTube Knowledge Graph Analysis
Dubai
Aug 23
Evals: KPIs to CI/CD
Pune
Aug 23
AI in Compliance
Pune
Aug 23
PaddlePaddle: Structuring Legal Docs
Hong Kong
Aug 22
MERO: Frames to AI Prompts
Medellín
Aug 21
OpenAI Real-Time Voice Agents
Fort Wayne
Aug 19
Hyground: Multi-Agent Incident Resolution
Hamburg
Aug 14
LLM Dialogue: Memory and TTS Demo
Hamburg
Aug 14
AI: Johari Window to Quantum
Orange County
Jul 31
Red-Team AI Model Safety
Atlanta
Jul 31
The Immune System of AI Adoption
Santiago
Jul 31
LangGraph Agents for SRE Automation
Hong Kong
Jul 31
Qwen: Bending LLMs into Tools
Hong Kong
Jul 31
PlexifyAEC: Construction Digital Twins
New York City
Jul 29
Self-Improving Agents Fix Mistakes
Munich
Jul 25
Timee: Automating Real-World Chores
Seattle
Jul 24