Foundation Models
Massive, pre-trained AI models that serve as the adaptable base (foundation) for numerous downstream applications via transfer learning.
Foundation Models (FMs) represent a paradigm shift in AI development: instead of building a separate model for each task, a single, massive model is pre-trained on a vast, general dataset and then adapted, via fine-tuning or prompting, to a wide array of specialized tasks such as natural language processing (NLP), image generation, and code completion. Key examples include OpenAI's GPT-4, a transformer architecture reportedly on the order of 1.7 trillion parameters (OpenAI has not confirmed the figure), and Google's BERT. The core value is transfer learning: because an FM already encodes broad general knowledge, new applications no longer require task-specific models trained from scratch, which significantly cuts development time and compute cost.
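To make the transfer-learning idea concrete, here is a minimal sketch of adapting a pre-trained FM (BERT, via the Hugging Face `transformers` library) to a downstream classification task by fine-tuning. The checkpoint name, toy dataset, labels, and hyperparameters are illustrative assumptions, not from the original text or any benchmark.

```python
# Sketch: transfer learning by fine-tuning a pre-trained foundation model
# (BERT) on a tiny, hypothetical binary-sentiment task.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pre-trained FM and attach a fresh 2-class classification head.
model_name = "bert-base-uncased"  # illustrative checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy labeled examples standing in for a real downstream dataset.
texts = ["A delightful, well-paced film.", "Dull plot and wooden acting."]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

# A few gradient steps stand in for a full fine-tuning loop.
model.train()
for _ in range(3):
    outputs = model(**inputs, labels=labels)  # returns loss and logits
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# The adapted model reuses BERT's general language knowledge for the
# specialized task instead of being trained from scratch.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Surprisingly good!", return_tensors="pt")).logits
print(logits.softmax(dim=-1))
```

The key point the sketch illustrates: only a small task head and a brief fine-tuning pass are needed, because the expensive general pre-training has already been paid for once in the foundation model.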