BERT models
BERT (Bidirectional Encoder Representations from Transformers) is an influential language model released by Google AI in 2018 that set a new state of the art across natural language processing (NLP) benchmarks. Its core technical innovation is a deep, *bidirectional* training approach, Masked Language Modeling (MLM), which lets the model learn context from both the left and the right of a word simultaneously, unlike earlier left-to-right models. Built on the encoder-only Transformer architecture, BERT was pre-trained on massive text corpora (the 800M-word BookCorpus and 2,500M-word English Wikipedia) in two main sizes: BERT-Base (110 million parameters) and BERT-Large (340 million parameters). This pre-training enables efficient fine-tuning for a wide range of downstream NLP tasks, including question answering (SQuAD), sentiment analysis, and named entity recognition.
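The MLM objective mentioned above works by corrupting the input and asking the model to recover it. A minimal sketch of the corruption scheme from the BERT paper (select ~15% of tokens; of those, replace 80% with `[MASK]`, 10% with a random token, and leave 10% unchanged) might look like the following. The function name and toy vocabulary are illustrative, not from any particular library:

```python
import random

MASK = "[MASK]"
TOY_VOCAB = ["the", "cat", "dog", "sat", "on", "mat"]  # stand-in for a real wordpiece vocabulary

def mlm_corrupt(tokens, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption.

    Returns (corrupted, labels), where labels holds the original token at
    each position the model must predict, and None elsewhere (positions
    that contribute no loss).
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:          # ~15% of positions are selected
            labels.append(tok)                # model must recover the original
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK)        # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted.append(rng.choice(TOY_VOCAB))  # 10%: random token
            else:
                corrupted.append(tok)         # 10%: keep unchanged
        else:
            labels.append(None)               # not selected: no prediction here
            corrupted.append(tok)
    return corrupted, labels

if __name__ == "__main__":
    toks = ["the", "cat", "sat", "on", "the", "mat"]
    out, labels = mlm_corrupt(toks)
    print(out, labels)
```

Because the model only ever sees the corrupted sequence, it must use context on both sides of each masked position to recover the original token, which is what makes the learned representations bidirectional.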