Michelle Guo (@mshlguo)
Research Scientist @ Meta Superintelligence Labs | Stanford CS PhD
Followers: 670 · Following: 178 · Media: 4 · Statuses: 23 · Joined December 2015
Today we're announcing SAM 3D, a foundation model for visually grounded 3D reconstruction. Super excited to share what my team has been working on! Try it here: https://t.co/aKlYajRGta Blog: https://t.co/ljcfqjRCP5 Paper: https://t.co/6huglEiNqV Code:
github.com · SAM 3D Objects (facebookresearch/sam-3d-objects)
Today we’re excited to unveil a new generation of Segment Anything Models: 1️⃣ SAM 3 enables detecting, segmenting, and tracking objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://t.co/tIwymSSD89 2️⃣ SAM 3D
11 replies · 37 reposts · 336 likes
Re-creating the painting The Death of Socrates with SAM 3D. Try it out here: https://t.co/KthgAe7JNE
0 replies · 1 repost · 8 likes
We also dropped SAM 3D, two state-of-the-art models that enable 3D reconstruction of objects and humans from a single image https://t.co/FrW2aIq6jB
ai.meta.com
This release introduces two new state-of-the-art models: SAM 3D Objects for object and scene reconstruction, and SAM 3D Body for human body pose and shape estimation.
11 replies · 8 reposts · 87 likes
SAM 3D is such a big win for the computer vision and robotics communities! Congratulations to all my amazing colleagues @AIatMeta. Witnessing this model grow and interacting with it during development has been a highlight of my time at Meta so far.
1 reply · 8 reposts · 146 likes
SAM 3D enables accurate 3D reconstruction from a single image, supporting real-world applications in editing, robotics, and interactive scene generation. Matt, a SAM 3D researcher, explains how the two-model design makes this possible for both people and complex environments.
4 replies · 9 reposts · 96 likes
The Segment Anything Playground is a new way to interact with media. Experiment with Meta’s most advanced segmentation models, including SAM 3 + SAM 3D, and discover how these capabilities can transform your creative projects and technical workflows. Check out some inspo and
22 replies · 46 reposts · 254 likes
Meta just dropped SAM 3D, but more interestingly, they basically cracked the 3D data bottleneck that's been holding the field back for years. Manually creating or scanning 3D ground truth for the messy real world is impossible at scale. But what if you just have
25 replies · 172 reposts · 1K likes
We can now 3Dfy any object from a single real-world image. This has been a holy grail in computer vision for many decades. Try it here (you can upload your own images): https://t.co/SoxnIlNnRw and read the paper here: https://t.co/dzrja5FXri. Enjoy!
aidemos.meta.com
A playground for interactive media
3 replies · 65 reposts · 605 likes
3Dfy anything from a single image! Very thrilled to announce SAM 3D. From an input image, select any object you want and 3Dfy it! Blog: https://t.co/wtQLAqXTzW Demo: https://t.co/tt3YqJlnRB
32 replies · 196 reposts · 1K likes
See our project website https://t.co/VCMLqkHt5L for more details! This work would not have been possible without the incredible help and support from Matt, Igor, @sarafianosn, Hsiao-yu, @HalimiOshri, @BozicAljaz, @psyth91, @jiajunwu_cs, Karen, @TuurStuyck, @egorlarionov 🧵 4/4
0 replies · 0 reposts · 3 likes
After optimizing 3D splats and PBR appearance, we combine the best of both worlds: 🔸A high-pass of 3DGS (for pose-independent, volumetric details) 🔸A low-pass of PBR (for novel shading and illumination). 🧵 3/4
1 reply · 0 reposts · 3 likes
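The frequency-split idea in that tweet is easy to prototype. Here is a minimal sketch of it, not the paper's implementation: blend a Gaussian low-pass of the PBR render with the residual high-pass of the 3DGS render. The array names, the single global `sigma`, and the simple additive blend are assumptions for illustration.

```python
# Hedged sketch of a frequency-split blend of two renders (not the PGC code).
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_blend(gs_render: np.ndarray, pbr_render: np.ndarray,
                    sigma: float = 4.0) -> np.ndarray:
    """Combine HxWx3 float renders: PBR low frequencies + 3DGS high frequencies."""
    # Low-pass of the PBR render keeps its relightable shading and illumination.
    pbr_low = gaussian_filter(pbr_render, sigma=(sigma, sigma, 0))
    # High-pass (residual) of the 3DGS render keeps pose-independent volumetric detail.
    gs_high = gs_render - gaussian_filter(gs_render, sigma=(sigma, sigma, 0))
    return np.clip(pbr_low + gs_high, 0.0, 1.0)
```

In a sketch like this, the cutoff (here `sigma`) controls how much shading comes from PBR versus how much fine detail survives from the splats.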
👗 Reconstructing photorealistic, simulation-ready garments is important for AR/VR. However, many methods require multi-frame tracking, which is expensive and remains a challenging problem. 💡 Instead, PGC reconstructs garments from a single static frame. 🧵2/4
1 reply · 0 reposts · 2 likes
🎉 Our paper "PGC: Physics-Based Gaussian Cloth from a Single Pose" has been accepted to #CVPR2025! 👕 PGC uses a PBR + 3DGS representation to render simulation-ready garments under novel lighting and motion, all from a single static frame. ✨Web: https://t.co/VCMLqkHt5L 🧵1/4
2 replies · 17 reposts · 110 likes
2020 was the year in which *neural volume rendering* exploded onto the scene, triggered by the impressive NeRF paper by Mildenhall et al. I wrote a post as a way of getting up to speed in a fascinating and very young field, and to share my journey with you:
dellaert.github.io
12 replies · 210 reposts · 913 likes
We made NeRF compositional! By learning object-centric neural scattering functions (OSFs), we can now compose dynamic scenes from captured images of objects. Website: https://t.co/QfUwqyaAiW Joint work with @alirezafathi, @jiajunwu_cs, and Thomas Funkhouser
4 replies · 42 reposts · 249 likes
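The tweet doesn't include code, but "compositional" volume rendering has a standard form: if each object contributes its own density and color field, densities add along a ray and colors mix density-weighted before the usual volume-rendering integral. Below is a minimal NumPy sketch under those assumptions; the `(sigma, rgb)` field interface and function names are hypothetical, not the OSF API.

```python
# Hedged sketch of composing per-object fields along one ray (not the OSF code).
import numpy as np

def composite_ray(fields, points, deltas):
    """fields: callables mapping [N,3] points -> (sigma [N], rgb [N,3]);
    deltas: [N] distances between ray samples. Returns the pixel color [3]."""
    sigmas, rgbs = zip(*(f(points) for f in fields))
    sigma = np.sum(sigmas, axis=0)                                 # densities add
    rgb = np.sum([s[:, None] * c for s, c in zip(sigmas, rgbs)], axis=0)
    rgb = rgb / np.clip(sigma[:, None], 1e-8, None)                # density-weighted color mix
    alpha = 1.0 - np.exp(-sigma * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = alpha * trans                                        # standard volume rendering
    return (weights[:, None] * rgb).sum(axis=0)
```

Because each object's field can be queried in its own canonical frame (apply an object-to-world transform to `points` per object), a scene can be re-composed with objects moved or swapped.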
Another student of @drfeifei in the news: @Stanford's Michelle Guo is featured in BEST OF ECCV2018 on Computer Vision News, the magazine published by @RSIPVision, for her work on few-shot action recognition in 3D. https://t.co/XyJykJnuBW
@mshlguo #AI #ComputerVision #ECCV2018
0 replies · 3 reposts · 6 likes
Class of 2018 of @ai4allorg @Stanford!! 32 future leaders of human-centered AI technology and thought leadership!!
AI will change the world. Who will change AI? Thrilled for the first week of our @Stanford-@ai4allorg summer program on AI for high school students! Our daily blog covers our journey into the amazing world of AI for good! https://t.co/d8T4xEAKrr
1 reply · 15 reposts · 74 likes