Christian Richardt

@c_richardt

Followers
2K
Following
13K
Media
124
Statuses
2K

Research Scientist at @RealityLabs. Working on novel-view synthesis etc. Previously @UniofBath, #IVCI @Saar_Uni, #MPI_Informatik, @Inria, @Cambridge_Uni.

Zurich, Switzerland
Joined January 2012
@c_richardt
Christian Richardt
2 months
Super excited to finally share what we've been working on: a universal feed-forward metric 3D reconstruction method we call MapAnything 🚀
@Nik__V__
Nikhil Keetha
2 months
Meet MapAnything – a transformer that directly regresses factored metric 3D scene geometry (from images, calibration, poses, or depth) in an end-to-end way. No pipelines, no extra stages. Just 3D geometry & cameras, straight from any type of input, delivering new state-of-the-art
2
4
68
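The "factored" representation the tweet describes can be illustrated with a toy sketch. This is not the MapAnything model itself, just a minimal NumPy illustration of the idea that per-pixel ray directions, an up-to-scale depth map, and a single metric scale factor are predicted as separate factors and composed into metric 3D points; all names here are made up for the example.

```python
import numpy as np

def compose_metric_points(rays, depth, metric_scale):
    """Compose factored geometry into metric 3D points.

    rays: (H, W, 3) unit ray directions in the camera frame
    depth: (H, W) up-to-scale depth along each ray
    metric_scale: scalar turning relative depth into metres
    """
    return rays * (depth * metric_scale)[..., None]

# A 2x2 toy "image": all rays point straight down the optical axis.
rays = np.zeros((2, 2, 3))
rays[..., 2] = 1.0
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])  # relative (up-to-scale) depth
points = compose_metric_points(rays, depth, metric_scale=0.5)
print(points[1, 1])  # → [0. 0. 2.] : metric point for the bottom-right pixel
```

Because the factors are explicit, the same composition works whether the network predicted all three or some were supplied as known inputs (e.g. calibrated rays).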
@mikeroberts3000
Mike Roberts
17 days
My group at @AdobeResearch is hiring PhD student interns. Please drop me a line if you're interested in spending some time at Adobe in SF in 2026. I'm especially interested in meeting you if you're currently using photorealistic synthetic data in your work 🤓
12
41
254
@c_richardt
Christian Richardt
22 days
@holynski_ @GoogleDeepMind @Columbia @jampani_varun @taiyasaki @SFU @UofT Our final speaker in this workshop is Zan Gojcic (@ZGojcic) from @nvidia who's connecting 3D Reconstruction with Novel-View Synthesis for Generative Scene Completion.
0
1
10
@c_richardt
Christian Richardt
22 days
@holynski_ @GoogleDeepMind @Columbia @jampani_varun Next up is Andrea Tagliasacchi (@taiyasaki) from @SFU and @UofT, who's telling us how to properly handle Uncertainty in Radiance Fields.
1
0
2
@c_richardt
Christian Richardt
23 days
@holynski_ @GoogleDeepMind @Columbia We're back after the coffee break with Varun Jampani (@jampani_varun) from Arcade AI, who presents his team's latest work on controllable video diffusion and beyond.
1
1
2
@c_richardt
Christian Richardt
23 days
Our next speaker is Aleksander Holynski (@holynski_) from @GoogleDeepMind and @Columbia, who's taking us from his favourite Black Mirror episode to Generative Nostalgia.
1
1
11
@ethanjohnweber
Ethan Weber
1 month
I'm especially excited about an interactive graph we're building for the community. Our goal is to spark discussion and encourage works in the upper right – where we have abundant input views but still need to generate significant missing content. 🙂 https://t.co/j7Ew79PVgy
1
6
45
@c_richardt
Christian Richardt
23 days
Angela Dai (@angelaqdai) is now talking about her work on Completing 3D Scene Geometry
1
0
2
@c_richardt
Christian Richardt
23 days
Our first speaker is Peter Kontschieder from Meta Reality Labs, who is talking about the Quest for the Photorealistic Metaverse.
1
0
2
@c_richardt
Christian Richardt
23 days
We're kicking off our @ICCVConference 2025 Workshop on Generative Scene Completion for Immersive Worlds 🌍 in Honolulu this morning! 📍 Room 301B, Hawai'i Convention Center 🌍 https://t.co/2L0iNzJ5KV
1
0
17
@JonathonLuiten
Jonathon Luiten
24 days
Tune in @ ICCV on Mon @ 10.30am where I talk about everything 3D + Realism:
- Hyperscape: Gaussian Splatting in VR
- FlowR: Flowing from Sparse-2-Dense 3D Recon
- BulletGen: Improving 4D Recon with Bullet-Time Gen
- MapAnything: Universal Feed-Forward Metric 3D Recon 🧵👇
2
16
175
@wjscheirer
Walter Scheirer
29 days
The #ICCV2025 main conference open access proceedings are up: https://t.co/hoqMwLPQZ1 Workshop papers will be posted shortly. Aloha!
0
10
52
@c_richardt
Christian Richardt
1 month
Join us at our @ICCVConference 2025 Workshop SceneComp – Generative Scene Completion for Immersive Worlds 🌍 – we have an exciting programme for you! 📅 Monday, 20 October 2025 (am) 🌍
scenecomp.github.io
Generative Scene Completion for Immersive Worlds
@ethanjohnweber
Ethan Weber
1 month
📢 SceneComp @ ICCV 2025 🏝️ 🌎 Generative Scene Completion for Immersive Worlds 🛠️ Reconstruct what you know AND 🪄 Generate what you don't! 🙌 Meet our speakers @angelaqdai, @holynski_, @jampani_varun, @ZGojcic @taiyasaki, Peter Kontschieder https://t.co/LvONYIK3dz #ICCV2025
0
0
13
@c_richardt
Christian Richardt
1 month
@CVPR Thanks for all the interest. The signup deadline has now passed.
0
0
0
@Heaney555
David Heaney
2 months
"I wish there was a modern Bell Labs" "WTF, why is Meta WASTING $100 billion on AR & VR?"
54
98
4K
@ethanjohnweber
Ethan Weber
2 months
This is from the team I joined in June! This is only the start of the immersive scenes vision we have. 😁 If you're interested in working with us, feel free to reach out! 🥽🙂 Some of us will be at #ICCV2025 as well for a relevant workshop! https://t.co/Ct8zl0adjW 🏝️
scenecomp.github.io
Generative Scene Completion for Immersive Worlds
@MetaHorizonDevs
Meta Horizon Developers
2 months
Turn any room into an immersive world 🌍✨ At #MetaConnect, we shared how Hyperscape Capture (Beta) lets you capture physical spaces on Meta Quest in minutes and transform them into photorealistic environments 🤯 See it in the Meta Horizon Store 👉 https://t.co/XElaYPJxNj
0
3
43
@JonathonLuiten
Jonathon Luiten
2 months
Introducing: Hyperscape Capture 📷 Last year we showed the world's highest quality Gaussian Splatting, and the first time GS was viewable in VR. Now, capture your own Hyperscapes, directly from your Quest headset in only 5 minutes of walking around. https://t.co/wlHmtRiANy
@JonathonLuiten
Jonathon Luiten
1 year
Hyperscape: The future of VR and the Metaverse Excited that Zuckerberg @finkd announced what I have been working on at Connect. Hyperscape enables people to create high fidelity replicas of physical spaces, and embody them in VR. Check out the demo app: https://t.co/TcRRUfymoc
41
284
2K
@c_richardt
Christian Richardt
2 months
Interested in reviewing for @CVPR 2026? We are looking for anyone with prior publication experience at venues like CVPR who is interested and may not yet be on our list of reviewers. If that is you, please reach out!
18
8
47
@RfLiang
Ruofan Liang
2 months
💡 Introducing LuxDiT: a diffusion transformer (DiT) that estimates realistic scene lighting from a single image or video. It produces accurate HDR environment maps, addressing a long-standing challenge in computer vision. 🔗 Paper: https://t.co/6cW6WlREBl
3
58
274
@AIatMeta
AI at Meta
3 months
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense
351
783
5K
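The "single frozen vision backbone" claim in the DINOv3 announcement refers to a common evaluation pattern: the pretrained backbone is kept frozen and only a light linear head is trained on its dense patch features. The toy sketch below illustrates that pattern only; the "backbone" is a fixed random projection standing in for DINOv3, and all shapes and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(image_patches):
    # A fixed, never-updated projection: patch vectors -> dense features.
    # Stands in for a frozen SSL backbone like DINOv3 (illustration only).
    W = np.random.default_rng(42).normal(size=(image_patches.shape[-1], 64))
    return image_patches @ W

def linear_probe(features, head_weights):
    # Only head_weights would be trained; the backbone stays frozen.
    return features @ head_weights

patches = rng.normal(size=(196, 768))   # e.g. 14x14 grid of 768-dim patches
feats = frozen_backbone(patches)        # (196, 64) dense per-patch features
logits = linear_probe(feats, rng.normal(size=(64, 10)))
print(logits.shape)  # (196, 10): per-patch class scores for dense prediction
```

The appeal of the pattern is that one set of frozen features can serve many dense tasks (segmentation, depth, correspondence) by swapping only the small head.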