Segment Anything Model (SAM)

Meta AI's foundation model for promptable image segmentation: it delivers high-quality, zero-shot object masks from simple inputs like points or bounding boxes.

SAM (Segment Anything Model) is a computer vision foundation model from Meta AI, designed for the promptable segmentation task. Its architecture pairs a powerful image encoder with a lightweight prompt encoder and mask decoder: once an image embedding is computed, each prompt yields a mask in roughly 50 ms. The model was trained on the massive SA-1B dataset, which includes over 1 billion masks across 11 million licensed images. This unprecedented scale gives SAM impressive zero-shot transfer capabilities, allowing it to accurately segment novel objects and image distributions without task-specific fine-tuning.

https://github.com/facebookresearch/segment-anything
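A minimal sketch of the point-prompt workflow using the `segment_anything` package from the repository above. The checkpoint filename and the helper function `segment_with_point` are assumptions for illustration; a real checkpoint (e.g. one of the ViT variants) must be downloaded from the repo first.

```python
import numpy as np

# One foreground click as a prompt: (x, y) pixel coordinates,
# label 1 = foreground, label 0 = background.
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

def segment_with_point(image, checkpoint="sam_vit_h_4b8939.pth"):
    """Return candidate masks for `image` (HxWx3 uint8 RGB) from one point.

    Hypothetical helper; checkpoint path is an assumption.
    """
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # runs the heavy image encoder once per image
    masks, scores, _ = predictor.predict(  # fast decoder, ~50 ms per prompt
        point_coords=point_coords,
        point_labels=point_labels,
        multimask_output=True,  # return several candidate masks with scores
    )
    return masks, scores
```

Because the image embedding is cached by `set_image`, many prompts (points, boxes) can be decoded interactively against the same image without re-running the encoder.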