Word2Vec
Word2Vec is a shallow neural network model that efficiently converts words into dense, low-dimensional vectors (word embeddings), capturing semantic and syntactic relationships.
Word2Vec, developed by Tomas Mikolov and his team at Google in 2013, is a foundational Natural Language Processing technique. It uses a shallow, two-layer neural network to learn word embeddings: dense, low-dimensional vectors that represent words. The model comes in two architectures: Continuous Bag-of-Words (CBOW), which predicts a target word from its surrounding context, and Skip-gram, which predicts the surrounding context from a target word. The result is a vector space where related words, like 'man' and 'woman', are positioned close together, enabling vector arithmetic such as 'King' - 'Man' + 'Woman' ≈ 'Queen'.
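The analogy arithmetic above can be illustrated with a minimal NumPy sketch. The vectors here are hand-crafted toy values, not embeddings learned by Word2Vec, and the `nearest` helper (which uses cosine similarity and excludes the query words, as is standard in analogy evaluation) is an illustrative assumption rather than any particular library's API.

```python
import numpy as np

# Toy 3-dimensional embeddings, hand-crafted for illustration.
# Real Word2Vec embeddings are learned from text and typically
# have 100-300 dimensions.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec, exclude=()):
    """Return the vocabulary word whose embedding has the highest
    cosine similarity to `vec`, skipping words in `exclude`."""
    best_word, best_sim = None, -2.0
    for word, emb in embeddings.items():
        if word in exclude:
            continue
        sim = (vec @ emb) / (np.linalg.norm(vec) * np.linalg.norm(emb))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# The classic analogy: king - man + woman ≈ queen
result = nearest(
    embeddings["king"] - embeddings["man"] + embeddings["woman"],
    exclude={"king", "man", "woman"},
)
print(result)  # → queen
```

With learned embeddings the same nearest-neighbor lookup works over a vocabulary of hundreds of thousands of words, which is what makes the 'King' - 'Man' + 'Woman' demonstration possible in practice.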