Large Language Model

Large Language Models (LLMs) are transformer-based neural networks, trained on massive text data, that predict and generate human-quality language and code.

LLMs are deep learning models (e.g., GPT-4, Llama 3) built on the transformer architecture, which uses a self-attention mechanism to process massive, diverse datasets. These models, containing billions to trillions of parameters, function as general-purpose sequence predictors: they calculate the most statistically probable next token in a sequence. This core capability enables diverse applications, including summarization, code generation, translation, and complex reasoning (e.g., chain-of-thought prompting). While powerful, LLMs require significant compute resources and carry risks such as generating false information (hallucinations).
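The "most statistically probable next token" step can be sketched in a few lines. This is a minimal, illustrative example: the vocabulary and logit values are hypothetical stand-ins for what a real model's final layer might emit, and the decoding shown is simple greedy argmax selection.

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution
    # (subtracting the max for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and logits for the prompt "The capital of France is".
# These values are illustrative only, not taken from any real model.
vocab = ["Paris", "London", "banana", "the"]
logits = [7.2, 3.1, -2.0, 1.5]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding: pick the argmax
```

In practice, LLMs often sample from this distribution (with temperature or nucleus sampling) rather than always taking the argmax, which trades determinism for more varied output.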

https://en.wikipedia.org/wiki/Large_language_model