Segment Anything
Segment Anything Model (SAM): Meta's zero-shot foundation model for image segmentation, instantly generating high-quality masks from simple prompts.
Segment Anything (SA) is a groundbreaking computer vision project from Meta's FAIR lab, centered on the Segment Anything Model (SAM). SAM is a vision foundation model trained on the massive SA-1B dataset (over 1.1 billion masks across 11 million images), which gives it robust zero-shot generalization to segmentation tasks it was never explicitly trained on. It operates on a 'promptable segmentation' principle: a user supplies a prompt (a click, a bounding box, or, experimentally, text) and the model returns precise object masks. The architecture separates a heavyweight image encoder from a lightweight prompt encoder and mask decoder: the image is embedded once, after which each prompt is decoded in roughly 50 ms, enabling real-time interactive use and establishing a new benchmark for versatile, adaptable image analysis.
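
To make the encoder/decoder split concrete, here is a minimal sketch using Meta's open-source `segment-anything` package (`pip install segment-anything`) to segment an object from a single click. The checkpoint filename and image path are placeholders you would supply yourself; this is illustrative usage, not the only way to drive the model.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the ViT-H variant of SAM. The checkpoint path is a placeholder;
# download the weights from the segment-anything repository first.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# The heavy image encoder runs exactly once per image here, producing a
# cached embedding. SAM expects an RGB uint8 array.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click at pixel (x=500, y=375); label 1 = foreground.
point_coords = np.array([[500, 375]])
point_labels = np.array([1])

# Only the lightweight prompt encoder and mask decoder run per prompt,
# which is the ~50 ms step. An ambiguous click can yield several valid
# masks, so we request all candidates and keep the highest-scoring one.
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]  # boolean array, same H x W as image
```

Because `set_image` caches the image embedding, every subsequent prompt on the same image skips the expensive encoder entirely; that asymmetry is what makes interactive point-and-click segmentation feel instantaneous.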