
Segment Anything Model

SAM is Meta AI's foundation model for promptable image segmentation, delivering robust zero-shot object mask generation in real time.

Segment Anything Model (SAM) is a foundation model from Meta AI, designed to democratize image segmentation. It is built around a promptable task: given a point, box, or rough mask as a prompt, the model rapidly outputs a high-quality segmentation mask (the expensive image encoding is computed once per image, so interactive prompting runs in amortized real time). The architecture pairs a powerful ViT image encoder with a lightweight prompt encoder and mask decoder. SAM was trained on the massive SA-1B dataset, featuring over 1.1 billion masks across 11 million licensed images, enabling strong zero-shot generalization across diverse visual domains and downstream tasks without further fine-tuning.
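The point-prompt workflow described above can be sketched with the repository's `SamPredictor` API. This is a minimal illustration, not an official recipe: the checkpoint filename and the `segment_with_point` helper are assumptions for the example, and running it requires PyTorch plus a downloaded SAM checkpoint (links are in the repository README).

```python
import numpy as np

def point_prompt(x, y, foreground=True):
    """Pack a single click into the (coords, labels) arrays that
    SamPredictor.predict expects: label 1 = foreground, 0 = background."""
    coords = np.array([[x, y]], dtype=np.float32)
    labels = np.array([1 if foreground else 0], dtype=np.int32)
    return coords, labels

def segment_with_point(image, x, y, checkpoint="sam_vit_b.pth"):
    # Hypothetical helper: checkpoint path is an assumption for this sketch.
    # Heavy imports are kept local so point_prompt works without torch.
    from segment_anything import SamPredictor, sam_model_registry
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # image: HxWx3 uint8 RGB array; encoded once
    coords, labels = point_prompt(x, y)
    # multimask_output=True returns three candidate masks with quality scores,
    # which helps when a single click is ambiguous (part vs. whole object).
    masks, scores, _ = predictor.predict(
        point_coords=coords, point_labels=labels, multimask_output=True
    )
    return masks[np.argmax(scores)]  # keep the highest-scoring candidate
```

Because `set_image` runs the image encoder once, subsequent prompts on the same image only pay the cost of the lightweight decoder.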

https://github.com/facebookresearch/segment-anything
