Mistral AI
Mistral AI builds high-performance, open-weight language models that prioritize computational efficiency and developer autonomy.
Based in Paris and founded by veterans from Meta and DeepMind, Mistral AI delivers frontier-class LLMs like Mistral 7B and Mixtral 8x22B. Their sparse Mixture-of-Experts (MoE) architecture allows Mixtral 8x7B to outperform Llama 2 70B on most benchmarks while using fewer active parameters. The flagship Mistral Large model features a 128k context window and native multilingual support (French, German, Spanish, Italian, and English). Developers integrate these models through La Plateforme or deploy them locally to maintain full control over proprietary data.
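The "fewer active parameters" claim comes from sparse routing: each token is processed by only the top-2 of the model's experts rather than all of them. The sketch below illustrates that routing idea in miniature with NumPy; it is not Mistral's implementation, and the dimensions, gating scheme, and linear "experts" are purely illustrative stand-ins for the real feed-forward blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def top2_moe(x, experts, gate_w):
    """Sparse MoE sketch: route each token to its top-2 of n experts."""
    logits = x @ gate_w                         # (tokens, n_experts) gate scores
    top2 = np.argsort(logits, axis=-1)[:, -2:]  # indices of the 2 best experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        idx = top2[t]
        w = np.exp(logits[t, idx])
        w /= w.sum()                            # softmax over the 2 selected gates only
        for weight, e in zip(w, idx):
            out[t] += weight * experts[e](x[t])
    return out

d, n_experts = 16, 8
# Each "expert" is a simple linear map standing in for a feed-forward block.
experts = [
    (lambda W: (lambda v: v @ W))(rng.normal(size=(d, d)))
    for _ in range(n_experts)
]
gate_w = rng.normal(size=(d, n_experts))
tokens = rng.normal(size=(4, d))

y = top2_moe(tokens, experts, gate_w)
print(y.shape)  # each of the 4 tokens touched only 2 of the 8 experts
```

With 8 experts and top-2 routing, roughly a quarter of the expert parameters are active per token, which is how a Mixtral-style model can carry a large total parameter count while keeping per-token compute closer to a much smaller dense model.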