AI at Meta
@AIatMeta
734K Followers · 2K Following · 1K Media · 3K Statuses
Together with the AI community, we are pushing the boundaries of what’s possible through open science to create a more connected world.
Joined August 2018
Today we’re excited to unveil a new generation of Segment Anything Models: 1️⃣ SAM 3 enables detection, segmentation and tracking of objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://t.co/tIwymSSD89 2️⃣ SAM 3D
We’re in San Diego this week for #NeurIPS2025! Stop by the Meta booth (#1223) to meet our team and check out: 🔎 Demos of our latest research including DINOv3 and UMA ⚡ Lightning talks from researchers behind SAM 3, Omnilingual ASR and more (see schedule below) 👓 Hands-on
SAM 3D is helping advance the future of rehabilitation. See how researchers at @CarnegieMellon are using SAM 3D to capture and analyze human movement in clinical settings, opening the doors to personalized, data-driven insights in the recovery process. 🔗 Learn more about SAM
We partnered with @ConservationX to build the SA-FARI dataset with 10,000+ annotated videos including over 100 species of animals. We’re sharing this dataset to help with conservation efforts around the globe. 🔗 Find it here:
conservationxlabs.com
Wildlife video dataset with segmentation masks, bounding boxes, and annotations. Made using Meta's SAM 3. 10,000+ meticulously annotated videos. 99 species.
SAM 3’s ability to precisely detect and track objects is helping @ConservationX measure the survival of animal species around the world and prevent their extinction. 🔗 Learn more about the work: https://t.co/cAvKP7bLCI
As a part of the SAM 3 release, we’ve partnered with @roboflow to accelerate applying SAM 3 for real-world use cases. This includes automating visual labeling tasks, fine-tuning for novel use cases and deploying an API. Highlights of what you can do with SAM 3 on Roboflow🧵
My prediction: we’re about to see hundreds of robotics papers built on SAM3D. Waiting for the first one to drop 👀 SAM3 & SAM3D from @metaai are just too good to be true!
Here are some of our favorite examples that we’ve seen so far ⬇️ https://t.co/u67MuRbH9g
Impressive accuracy! Meta has announced an AI technology that can instantly turn objects in an image into 3D and estimate human body shape! You can try it in your browser! SAM 3D https://t.co/IDChRVpb7G
#SAM3D
Here are a few tips to help you get started with SAM 3D in the Playground: 1️⃣ We encourage you to add multiple objects to build out a scene and apply effects to them to experience the full potential of SAM 3D. 2️⃣ When generating 3D models of multiple objects or people in a
The Segment Anything Playground is a new way to interact with media. Experiment with Meta’s most advanced segmentation models, including SAM 3 + SAM 3D, and discover how these capabilities can transform your creative projects and technical workflows. Check out some inspo and
We’re advancing on-device AI with ExecuTorch, now deployed across devices including Meta Quest 3, Ray-Ban Meta, Oakley Meta Vanguard and Meta Ray-Ban Display. By eliminating conversion steps and supporting pre-deployment validation in PyTorch, ExecuTorch accelerates the path
Collecting a high-quality dataset with 4M unique phrases and 52M corresponding object masks helped SAM 3 achieve 2x the performance of baseline models. Kate, a researcher on SAM 3, explains how the data engine made this leap possible. 🔗 Read the SAM 3 research paper:
SAM 3D enables accurate 3D reconstruction from a single image, supporting real-world applications in editing, robotics, and interactive scene generation. Matt, a SAM 3D researcher, explains how the two-model design makes this possible for both people and complex environments.
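Single-image 3D reconstruction of the kind SAM 3D performs ultimately rests on recovering 3D structure from 2D pixels. As a hedged illustration of the underlying geometry only (this is not SAM 3D's actual pipeline), the sketch below back-projects a pixel with an assumed depth through a standard pinhole camera model; the intrinsics are invented for the example.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with known depth to a 3D camera-space point
    using the standard pinhole camera model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 image (illustrative values only).
fx = fy = 500.0
cx, cy = 320.0, 240.0

# The image center at depth 2.0 lands on the camera's optical axis.
p = backproject(320.0, 240.0, 2.0, fx, fy, cx, cy)
print(p)  # [0. 0. 2.]
```

Real reconstruction systems must additionally estimate depth and camera parameters from the image itself, which is where learned models come in.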
SAM 3 tackles a challenging problem in vision: unifying a model architecture for detection and tracking. Christoph, a researcher on SAM 3, shares how the team made it possible. 🔗 Read the SAM 3 research paper: https://t.co/6b7VkmKr9k
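Unifying detection and tracking means detections must be linked into consistent object identities across frames. As a simplified, hedged sketch of that idea (a common tracking-by-detection baseline, not the SAM 3 architecture), the snippet below associates boxes between two frames by greedy IoU matching.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(prev_boxes, curr_boxes, thresh=0.3):
    """Greedily match current detections to previous tracks by best IoU."""
    matches, used = [], set()
    for i, p in enumerate(prev_boxes):
        best_j, best = None, thresh
        for j, c in enumerate(curr_boxes):
            if j in used:
                continue
            score = iou(p, c)
            if score > best:
                best_j, best = j, score
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches

prev = [(0, 0, 10, 10), (20, 20, 30, 30)]   # tracks from frame t-1
curr = [(21, 21, 31, 31), (1, 1, 11, 11)]   # detections in frame t
print(associate(prev, curr))  # [(0, 1), (1, 0)]
```

Production trackers replace the greedy loop with optimal assignment and motion models; the point here is only the frame-to-frame association step.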
We've partnered with @Roboflow to enable people to annotate data, fine-tune, and deploy SAM 3 for their particular needs. Try it here:
roboflow.com
Everything you need to build and deploy computer vision models, from automated annotation tools to high-performance deployment solutions.
We’re sharing SAM 3 under the SAM License so others can use it to build their own experiences. Alongside the model, we’re releasing a new evaluation benchmark, model checkpoint, and open-source code for inference and fine-tuning. These resources are designed to support advanced
github.com
The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading the trained model checkpoints, and example notebooks that sho...
Meet SAM 3, a unified model that enables detection, segmentation, and tracking of objects across images and videos. SAM 3 introduces some of our most highly requested features like text and exemplar prompts to segment all objects of a target category. Learnings from SAM 3 will
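The "segment all objects of a target category" behavior can be mimicked in a toy way: each detected instance carries a category label, and a short text phrase selects every matching instance. The sketch below is purely illustrative of that prompt-to-instances idea; it does not use the real SAM 3 API, and all data and names are invented.

```python
# Invented stand-in data: detected instances with category labels
# (in a real system these would come from the model, not a list).
detections = [
    {"id": 0, "category": "dog", "mask_area": 1540},
    {"id": 1, "category": "cat", "mask_area": 980},
    {"id": 2, "category": "dog", "mask_area": 2210},
]

def segment_by_phrase(dets, phrase):
    """Return every instance whose category matches the text phrase."""
    return [d for d in dets if d["category"] == phrase.strip().lower()]

dogs = segment_by_phrase(detections, "dog")
print([d["id"] for d in dogs])  # [0, 2]
```

The actual model resolves open-vocabulary phrases against image content rather than exact label strings; this sketch only shows the interface concept of one prompt returning all matching instances.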
We’re sharing model checkpoints, an evaluation benchmark, human body training data, and inference code with the community to support creative applications in fields like robotics, interactive media, science, sports medicine, and beyond. 🔗 SAM 3D Body:
github.com
The repository provides code for running inference with the SAM 3D Body Model (3DB), links for downloading the trained model checkpoints and datasets, and example notebooks that show how to use the...
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding of everyday images. SAM 3D includes two models: 🛋️ SAM 3D Objects for object and scene reconstruction 🧑🤝🧑 SAM 3D Body for human pose and shape estimation Both models achieve