
BERT

BERT (Bidirectional Encoder Representations from Transformers) is a foundational pre-trained NLP model that uses a Transformer encoder to process text bidirectionally, capturing the full left and right context of every word.

BERT is a revolutionary language representation model introduced by Google AI Language in 2018. It is built on the Transformer architecture and distinguishes itself by being deeply bidirectional: it conditions on the entire sequence of words (left and right context) simultaneously, unlike previous unidirectional models. This capability comes from a Masked Language Model (MLM) pre-training objective, in which a fraction of the input tokens is hidden and the model learns to predict them from the surrounding context. Released in two sizes, BERT-Base (110 million parameters) and BERT-Large (340 million parameters), the model dramatically improved the state of the art on eleven Natural Language Processing tasks, including question answering (SQuAD) and sentiment analysis, establishing a new baseline for the field.

https://arxiv.org/abs/1810.04805
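
The MLM objective is easiest to see in action. Below is a minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (both assumptions of this example, not details from the listing above): the pre-trained MLM head fills a [MASK] token using context from both sides of the gap.

```python
# Minimal sketch of BERT's masked-language-model objective in action.
# Assumes the Hugging Face `transformers` library is installed and the
# public `bert-base-uncased` checkpoint is available; the example
# sentence is illustrative only.
from transformers import pipeline

# "fill-mask" runs the pre-trained MLM head: BERT predicts the token
# hidden behind [MASK] from both its left and right context.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for pred in unmasker("The man went to the [MASK] to buy bread."):
    # Each candidate comes with the filled-in token and a softmax score.
    print(f"{pred['token_str']:>12}  {pred['score']:.3f}")
```

During pre-training, roughly 15% of input tokens are selected for masking; at fine-tuning time the MLM head is swapped for a small task-specific output layer (for example, a sentence classifier for sentiment analysis), which is how one pre-trained model transfers to tasks like SQuAD.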
186 projects · 49 cities

Recent Talks & Demos

SAEs for LLM Steering · Mumbai, Nov 23 · Sparse Autoencoders, GPT-4
Vertical AI Construction Prompt Engineering · Munich, Nov 21 · GPT-4, Prompt Engineering
Freshflow: LLMs for Produce Data · Munich, Nov 21 · GPT-4, GPT-3
Beyond Presence: Hyper-Realistic Avatars · Munich, Nov 21 · GPT-4, RAG
DeepX Hub · Palo Alto, Nov 20 · DeepX Hub, TensorFlow
LLMs for Branching Storytelling · Singapore, Nov 19 · Anthropic, Gemini
Cartograph · Singapore, Nov 19 · Cartograph, TensorFlow
Olympus: Enterprise AI Agents · New York City, Nov 12 · Olympus, Cursor
React Compiler for LLMs · Amsterdam, Nov 12 · React, GPT-4
Automated Video Editing with LLMs · Amsterdam, Nov 12 · Whisper, FFmpeg
Evaluating AI Agents in Finance · London, Oct 31 · Generative Models, Agentic Pipelines
Tilmoch: AI for Agglutinative Languages · Tashkent, Oct 31 · Grammarly, DeepL
UzbekVoice · Tashkent, Oct 31 · Google Cloud Speech-to-Text, Google Cloud Text-to-Speech
Handi Hub: AI for Artisans · Tashkent, Oct 31 · TensorFlow, PyTorch
Pieces: Long-Term Developer Memory · Cincinnati, Oct 30 · RAG, GPT-4
GraphRAG: Improving RAG Accuracy · Bogotá, Oct 30 · RAG, GraphRAG
Agentic LLMs for Data Enrichment · Montreal, Oct 29 · Weaviate, SQL
Cooktok · Montreal, Oct 29 · GPT-4, JSON
LLM Evaluation Labeling Workflow · Seattle, Oct 24 · OpenPipe, GPT-4
Listen Labs · San Francisco, Oct 22 · Listen, PowerPoint
Hypothesis Sage: Agentic RAG Statistics · Chicago, Oct 22 · RAG, GPT-4
Claude Vision: zk-Proofs from Browsers · New York City, Oct 21 · Claude Vision, zk-SNARKs
Claude: Finetuning Art Recognition · New York City, Oct 21 · Claude, MetGuessr
Lidar Processing for Machine Learning · Fort Wayne, Oct 15 · LiDAR, Point Cloud