text-embedding-3-small
A cost-optimized embedding model that outperforms `text-embedding-ada-002` at roughly 80% lower cost.
The `text-embedding-3-small` model is the new baseline for high-efficiency vector generation (1536 dimensions by default). It delivers a massive cost reduction: $0.02 per million tokens, down from the predecessor's $0.10. Performance is substantially improved, showing a 13-point gain on the MIRACL multilingual benchmark. Crucially, it incorporates Matryoshka Representation Learning (MRL): this allows you to truncate the vector to smaller sizes (e.g., 512 dimensions) for faster search and reduced storage, all while retaining core semantic integrity. Deploy this model for high-volume semantic search, clustering, and RAG applications.
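The MRL truncation described above can be sketched as follows. This is a minimal illustration, not the official client code: it assumes an embedding arrives as a plain list of floats (the OpenAI API also accepts a `dimensions` parameter that performs the shortening server-side), and the helper name `truncate_embedding` is hypothetical. When truncating client-side, the shortened vector should be L2-renormalized before cosine-similarity search.

```python
import math

def truncate_embedding(vector: list[float], dims: int) -> list[float]:
    """Keep the first `dims` components, then L2-renormalize.

    MRL-trained embeddings concentrate the most important information in the
    leading dimensions, so truncation retains core semantics; renormalizing
    restores unit length for cosine-similarity search.
    """
    truncated = vector[:dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

# Example: shrink a dummy 1536-dim vector to 512 dims for cheaper storage.
# (A real vector would come from the embeddings API; this stand-in just
# exercises the arithmetic.)
full = [0.01 * ((i % 7) - 3) for i in range(1536)]
small = truncate_embedding(full, 512)
assert len(small) == 512
assert abs(sum(x * x for x in small) - 1.0) < 1e-9  # unit length again
```

The trade-off is tunable per workload: a 512-dimension index uses a third of the storage of the full 1536 dimensions, at a modest cost in retrieval quality.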