
Noah Snavely
@Jimantha
9K Followers · 8K Following · 55 Media · 882 Statuses
3D vision fanatic. Professor @cornell_tech & Researcher @GoogleDeepmind. He or they. https://t.co/m7Rs5xUFfG
New York, NY
Joined June 2008
RT @joaocarreira: Scaling 4D Representations – new preprint and models now available
github.com/google-deepmind/representations4d
0 replies · 42 retweets · 0 likes
RT @MorrisAlper: 💥New preprint! WildCAT3D uses tourist photos in-the-wild as supervision to learn to generate novel, consistent views of sc…
0 replies · 4 retweets · 0 likes
RT @gene_ch0u: We've released all code and models for FlashDepth! It produces depth maps from a 2K streaming video in real-time. This was…
0 replies · 70 retweets · 0 likes
RT @holynski_: MegaSaM got an award! Big congrats to the team!!!!! 🥳🥳🎉🎉 @zhengqi_li, Richard, @forrestercole2, @jin_linyi, @QianqianWang5,…
0 replies · 6 retweets · 0 likes
RT @shiryginosar: Think LMMs can reason like a 3-year-old? Think again! Our Kid-Inspired Visual Analogies benchmark reveals where young c…
0 replies · 6 retweets · 0 likes
RT @Haian_Jin: Excited to attend #ICLR2025 in person this year! I'll be presenting two papers: 1. LVSM: A Large View Synthesis Model with…
0 replies · 3 retweets · 0 likes
RT @LuoRundong0122: 1/6 🔍➡️ How to transform standard videos into immersive 360° panoramas? We've designed a new AI system for video-to-360…
0 replies · 4 retweets · 0 likes
RT @jin_linyi: We have released the Stereo4D dataset! Explore the real-world dynamic 3D tracks:
github.com/Stereo4d/stereo4d-code — Stereo4D dataset and processing code
0 replies · 38 retweets · 0 likes
RT @boyang_deng: Curious about how cities have changed in the past decade? We use MLLMs to analyse 40 million Street View images to answer…
0 replies · 14 retweets · 0 likes
RT @Haian_Jin: Our paper LVSM has been accepted as an oral presentation at #ICLR2025! See you in Singapore! We've just released the code…
github.com/Haian-Jin/LVSM — [ICLR 2025 Oral] Official code for "LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias"
0 replies · 20 retweets · 0 likes
Really cool work from @ElorHadar and co!
Deciphering some people's writing can be a major challenge – especially when that writing is cuneiform characters imprinted into 3,000-year-old tablets. Now, researchers from @CornellCIS have developed an approach called ProtoSnap that "snaps" into place a prototype of a…
0 replies · 1 retweet · 17 likes
RT @Haian_Jin: I can't attend the #NeurIPS conference this year, but @ambie_kk will present Neural Gaffer in person. Drop by our poster at…
0 replies · 4 retweets · 0 likes
RT @jin_linyi: Introducing 👀Stereo4D👀. A method for mining 4D from internet stereo videos. It enables large-scale, high-quality, dynamic, *…
0 replies · 105 retweets · 0 likes
RT @zhengqi_li: Introducing MegaSaM! 🎥 Accurate, fast, & robust structure + camera estimation from casual monocular videos of dynamic scen…
0 replies · 91 retweets · 0 likes
RT @ducha_aiki: Extreme Rotation Estimation in the Wild. Hana Bezalel, Dotan Ankri, @ruojin8, @ElorHadar. tl;dr: MegaDepth/Scenes subset wit…
0 replies · 8 retweets · 0 likes
RT @3DVconf: #3DV2025AMA Third guest on the Ask Me Anything series: Noah Snavely @Jimantha from Cornell & Google DeepMind! 🌟 🕒 You have…
0 replies · 6 retweets · 0 likes
RT @gene_ch0u: We've released our paper "Generating 3D-Consistent Videos from Unposed Internet Photos"! Video models like Luma generate pre…
0 replies · 46 retweets · 0 likes