Technology
Foundation models
Liquid AI's 1.2B and 2.6B parameter models deliver high-density performance for edge devices using a memory-efficient, non-transformer architecture.
These Cactus models replace the traditional Transformer architecture with Liquid Foundation Models (LFMs) built on linear state-space designs. The 1.2B and 2.6B variants outperform larger models (including Llama 3.2-3B) on benchmarks while keeping a minimal memory footprint, and they support 32k-token context windows without the quadratic scaling cost of self-attention. This makes them well suited to on-device AI, robotics, and secure enterprise applications where hardware resources are limited.
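To see why linear state-space designs avoid self-attention's quadratic cost, consider a minimal sketch (not Liquid AI's actual implementation): the layer carries a fixed-size recurrent state through the sequence, so compute grows linearly with sequence length and the state memory is constant, whereas attention materializes an L×L score matrix. All matrix names and sizes below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space scan (illustrative sketch).

    x: (L, d_in) input sequence
    A: (d_state, d_state) state-transition matrix
    B: (d_state, d_in)    input projection
    C: (d_out, d_state)   readout projection
    """
    h = np.zeros(A.shape[0])   # fixed-size hidden state, independent of L
    ys = []
    for x_t in x:              # one constant-cost update per token: O(L) total
        h = A @ h + B @ x_t    # linear recurrence replaces pairwise attention
        ys.append(C @ h)       # per-token readout
    return np.stack(ys)

rng = np.random.default_rng(0)
L, d_in, d_state, d_out = 16, 4, 8, 4
y = ssm_scan(rng.normal(size=(L, d_in)),
             0.9 * np.eye(d_state),                   # stable toy dynamics
             0.1 * rng.normal(size=(d_state, d_in)),
             rng.normal(size=(d_out, d_state)))
print(y.shape)  # one output vector per input token
```

Doubling the context length here doubles the work; with self-attention it would roughly quadruple it, which is the scaling advantage the paragraph above refers to.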
8 projects · 8 cities
Recent Talks & Demos
Apple AI in Shortcuts · Seattle · Dec 18 · iOS Shortcuts, iOS 26
Benchmarking LLMs for Fraud Detection · Minneapolis Saint Paul · Sep 10 · AWS Bedrock, LangChain
Hyground: Multi-Agent Incident Resolution · Hamburg · Aug 14 · AWS Bedrock, Anthropic Claude
out.sg Rec Engine and kew · Singapore · Jan 10 · Foundation models, Kew
KolateAI: Clinical Events Prediction · New York City · Jul 24 · Foundation models, Co-pilot
Video Foundation Models: Video First · Denver · Nov 22 · Foundation models, GPT-4
Twelve Labs: Video Foundation Model · San Francisco · Aug 9 · Twelve Labs, Foundation models
autodistill: Auto-labeling Model Distillation · San Francisco · Aug 9 · autodistill, SAM