Christian Richardt
@c_richardt
Followers: 2K · Following: 13K · Media: 124 · Statuses: 2K
Research Scientist at @RealityLabs. Working on novel-view synthesis etc. Previously @UniofBath, #IVCI @Saar_Uni, #MPI_Informatik, @Inria, @Cambridge_Uni.
Zurich, Switzerland
Joined January 2012
Super excited to finally share what we've been working on: a universal feed-forward metric 3D reconstruction method we call MapAnything!
Meet MapAnything: a transformer that directly regresses factored metric 3D scene geometry (from images, calibration, poses, or depth) in an end-to-end way. No pipelines, no extra stages. Just 3D geometry & cameras, straight from any type of input, delivering new state-of-the-art results.
My group at @AdobeResearch is hiring PhD student interns. Please drop me a line if you're interested in spending some time at Adobe in SF in 2026. I'm especially interested in meeting you if you're currently using photorealistic synthetic data in your work.
@holynski_ @GoogleDeepMind @Columbia @jampani_varun @taiyasaki @SFU @UofT Our final speaker in this workshop is Zan Gojcic (@ZGojcic) from @nvidia, who's connecting 3D Reconstruction with Novel-View Synthesis for Generative Scene Completion.
@holynski_ @GoogleDeepMind @Columbia @jampani_varun Next up is Andrea Tagliasacchi (@taiyasaki) from @SFU and @UofT, who's telling us how to properly handle Uncertainty in Radiance Fields.
@holynski_ @GoogleDeepMind @Columbia We're back after the coffee break with Varun Jampani (@jampani_varun) from Arcade AI, who presents his team's latest work on controllable video diffusion and beyond.
Our next speaker is Aleksander Holynski (@holynski_) from @GoogleDeepMind and @Columbia, who's taking us from his favourite Black Mirror episode to Generative Nostalgia.
I'm especially excited about an interactive graph we're building for the community. Our goal is to spark discussion and encourage works in the upper right, where we have abundant input views but still need to generate significant missing content. https://t.co/j7Ew79PVgy
Our first speaker is Peter Kontschieder from Meta Reality Labs, who is talking about the Quest for the Photorealistic Metaverse.
We're kicking off our @ICCVConference 2025 Workshop on Generative Scene Completion for Immersive Worlds in Honolulu this morning! Room 301B, Hawaiʻi Convention Center. https://t.co/2L0iNzJ5KV
Tune in @ ICCV on Mon @ 10.30am, where I talk about everything 3D + Realism:
- Hyperscape: Gaussian Splatting in VR
- FlowR: Flowing from Sparse-2-Dense 3D Recon
- BulletGen: Improving 4D Recon with Bullet-Time Gen
- MapAnything: Universal Feed-Forward Metric 3D Recon
The #ICCV2025 main conference open-access proceedings are up: https://t.co/hoqMwLPQZ1 Workshop papers will be posted shortly. Aloha!
Join us at our @ICCVConference 2025 Workshop SceneComp, Generative Scene Completion for Immersive Worlds: we have an exciting programme for you!
Monday, 20 October 2025 (am)
scenecomp.github.io
Generative Scene Completion for Immersive Worlds
SceneComp @ ICCV 2025: Generative Scene Completion for Immersive Worlds. Reconstruct what you know AND generate what you don't! Meet our speakers @angelaqdai, @holynski_, @jampani_varun, @ZGojcic, @taiyasaki, Peter Kontschieder. https://t.co/LvONYIK3dz
#ICCV2025
@CVPR Thanks for all the interest. The signup deadline has now passed.
"I wish there was a modern Bell Labs" "WTF, why is Meta WASTING $100 billion on AR & VR?"
This is from the team I joined in June! This is only the start of the immersive scenes vision we have. If you're interested in working with us, feel free to reach out! Some of us will be at #ICCV2025 as well for a relevant workshop! https://t.co/Ct8zl0adjW
Turn any room into an immersive world. At #MetaConnect, we shared how Hyperscape Capture (Beta) lets you capture physical spaces on Meta Quest in minutes and transform them into photorealistic environments. See it in the Meta Horizon Store: https://t.co/XElaYPJxNj
Introducing: Hyperscape Capture. Last year we showed the world's highest-quality Gaussian Splatting, and the first time GS was viewable in VR. Now, capture your own Hyperscapes, directly from your Quest headset, in only 5 minutes of walking around. https://t.co/wlHmtRiANy
Hyperscape: The future of VR and the Metaverse. Excited that Zuckerberg @finkd announced what I have been working on at Connect. Hyperscape enables people to create high-fidelity replicas of physical spaces, and embody them in VR. Check out the demo app: https://t.co/TcRRUfymoc
Interested in reviewing for @CVPR 2026? We are looking for anyone with prior publication experience at venues like CVPR who is interested and may not yet be on our list of reviewers. If that is you, please reach out!
Introducing LuxDiT: a diffusion transformer (DiT) that estimates realistic scene lighting from a single image or video. It produces accurate HDR environment maps, addressing a long-standing challenge in computer vision. Paper: https://t.co/6cW6WlREBl
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks.