Ministral-3
Mistral AI’s premier sub-10B parameter models engineered for high-performance edge computing and low-latency local inference.
The Ministral-3 family (comprising the 3B and 8B models) sets a new benchmark for on-device intelligence. These models support a 128k-token context window and outperform Llama 3.2 3B on critical benchmarks, including MMLU and GSM8K. Built for efficiency, they handle complex reasoning and multilingual tasks directly on consumer hardware such as laptops and mobile devices. Deploy them for privacy-first local agents or as high-speed routers within larger AI orchestration pipelines.
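The router use case mentioned above can be sketched in a few lines: a small local model classifies each incoming query so an orchestration layer can keep easy requests on-device and escalate hard ones to a larger model. The sketch below stubs the model call with a keyword heuristic to stay self-contained; the function names (`classify_complexity`, `route_query`) and backend labels are illustrative assumptions, not part of any Mistral API.

```python
def classify_complexity(prompt: str) -> str:
    """Stand-in for a local Ministral-3 classification call.

    A real deployment would prompt the on-device model to label the
    query; a keyword heuristic keeps this sketch self-contained.
    """
    hard_markers = ("prove", "derive", "multi-step", "analyze")
    return "complex" if any(m in prompt.lower() for m in hard_markers) else "simple"


def route_query(prompt: str) -> str:
    """Return the backend a query should be dispatched to."""
    if classify_complexity(prompt) == "complex":
        return "large-cloud-model"   # escalate to a bigger remote model
    return "local-ministral"         # serve on-device for speed and privacy


print(route_query("What time is it in Paris?"))      # → local-ministral
print(route_query("Prove the triangle inequality."))  # → large-cloud-model
```

Because the router only needs a short classification from the small model, it adds minimal latency while keeping most traffic local.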