Bi-encoders

Bi-encoders encode two inputs independently into a shared vector space, enabling ultra-fast similarity search and large-scale document retrieval.

Bi-encoders (such as the models in the Sentence-Transformers/SBERT library) map text inputs into dense, fixed-size vectors using twin transformer networks, typically with shared weights. Because documents and queries are encoded independently, millions of document embeddings can be pre-computed offline and stored in a vector database (like Pinecone or Milvus). When a query arrives, the system only needs to encode the search string once and rank documents by cosine similarity. This architecture is the backbone of modern semantic search and RAG (Retrieval-Augmented Generation) pipelines, delivering sub-100ms latency on massive datasets where cross-encoders, which must jointly re-score the query against every candidate document, would be computationally prohibitive.
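The retrieval pattern described above can be sketched as follows. This is a minimal illustration, not a production implementation: the `embed` function below is a toy trigram-hashing encoder standing in for a real transformer model, and the documents and query are invented examples. The point it demonstrates is the bi-encoder workflow itself: document vectors are pre-computed once offline, while each incoming query is encoded a single time and scored against all documents with one matrix multiply.

```python
import zlib
import numpy as np

def embed(texts, dim=512):
    # Toy stand-in for a real bi-encoder (e.g. a Sentence-Transformers
    # model): hash character trigrams into a fixed-size count vector,
    # then L2-normalize so a dot product equals cosine similarity.
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        t = text.lower()
        for j in range(len(t) - 2):
            vecs[i, zlib.crc32(t[j:j + 3].encode()) % dim] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.clip(norms, 1e-9, None)

# Offline: pre-compute document embeddings once. In production these
# vectors would be stored in a vector database such as Pinecone or Milvus.
docs = [
    "Bi-encoders map text into dense vectors for semantic search.",
    "Cross-encoders score each query-document pair jointly.",
    "A vector database stores embeddings for nearest-neighbor lookup.",
]
doc_embs = embed(docs)  # shape: (num_docs, dim)

# Online: encode the incoming query ONCE, then rank all documents
# with a single matrix multiply of cosine similarities.
query_emb = embed(["how do bi-encoders enable semantic search?"])
scores = (query_emb @ doc_embs.T)[0]
best = int(np.argmax(scores))
print(docs[best])
```

With a real model, swapping in `sentence_transformers.SentenceTransformer.encode` for the toy `embed` function keeps the rest of the pipeline unchanged, which is exactly the decoupling that makes pre-computation and vector-database storage possible.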

https://www.sbert.net/docs/pretrained_models.html