Norman Müller
@Normanisation
Followers: 1K · Following: 3K · Media: 3 · Statuses: 225
AI researcher at Meta working on 3D Generative AI; former PhD student @ TU Munich w/ Matthias Nießner
Zurich
Joined December 2019
📣📣 PhD-Intern for Generative AI 📣📣 We are looking for an intern working on 3D Generative AI for summer 2025! Apply here: https://t.co/uIiR6Tv1gJ + feel free to reach out to @K_S_Schwarz or me via PM/mail if you have any questions!
Check out MultiDiff #CVPR2024! From a single RGB image, MultiDiff enables scene-level novel view synthesis with free camera control. https://t.co/oz7IVyV1dc
https://t.co/oxKUbXJmBQ Great work by @normanisation @K_S_Schwarz @barbara_roessle, L Porzi, S Rota Bulò, P Kontschieder
3 replies · 34 reposts · 161 likes
Interested in 3D Interactive Segmentation? 🚀 Don't miss Andrea's talk on Easy3D today at 1 PM (Kalākaua Ballroom)! The code was just released: 🔗:
github.com
Official implementation of the ICCV25 paper "Easy3D: A Simple Yet Effective Method for 3D Interactive Segmentation". - facebookresearch/easy3d
See you later at the #iccv25 Oral Session 6B (Kalākaua Ballroom) at 1PM and poster 356 from 2:30PM! We will present our paper “Easy3D: A Simple Yet Effective Method for 3D Interactive Segmentation” with @Normanisation Project + Code:
0 replies · 0 reposts · 20 likes
Check out our workshop on Generative Scene Completion at ICCV'25. We have an incredible speaker lineup and most certainly the coolest website (credits to @ethanjohnweber and @cursor_ai). 📅 Mon, Oct 20 (morning session) 🌐 https://t.co/IBZceahOsr
scenecomp.github.io
Generative Scene Completion for Immersive Worlds
📢 SceneComp @ ICCV 2025 🏝️ 🌎 Generative Scene Completion for Immersive Worlds 🛠️ Reconstruct what you know AND 🪄 Generate what you don’t! 🙌 Meet our speakers @angelaqdai, @holynski_, @jampani_varun, @ZGojcic @taiyasaki, Peter Kontschieder https://t.co/LvONYIK3dz
#ICCV2025
1 reply · 1 repost · 16 likes
Excited to share our Swiss Army Knife for Feed-forward Geometric Modeling: MapAnything is fast, accurate, robust, and highly versatile! Try it yourself: https://t.co/Khfv936IEw Learn more:
map-anything.github.io
MapAnything is a simple, end-to-end trained transformer model that directly regresses the factored metric 3D geometry of a scene given various types of inputs (images, calibration, poses, or depth)....
Meet MapAnything – a transformer that directly regresses factored metric 3D scene geometry (from images, calibration, poses, or depth) in an end-to-end way. No pipelines, no extra stages. Just 3D geometry & cameras, straight from any type of input, delivering new state-of-the-art
3 replies · 6 reposts · 29 likes
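MapAnything's pitch above is a *factored* representation: a metric pointmap decomposes into per-pixel rays (from calibration), per-pixel depth, a camera pose, and a global metric scale, and the model regresses these factors rather than raw points. A minimal numpy sketch of how such factors compose into a 3D pointmap (the function and toy camera are illustrative, not MapAnything's actual API):

```python
import numpy as np

def unproject_pointmap(depth, K, world_from_cam, metric_scale=1.0):
    """Compose factored geometry (rays, depth, pose, scale) into a
    world-space pointmap of shape (h, w, 3)."""
    h, w = depth.shape
    # Pixel-center grid in homogeneous pixel coordinates.
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)           # (h, w, 3)
    # Camera-frame ray directions from the calibration factor.
    rays_cam = pix @ np.linalg.inv(K).T                        # (h, w, 3)
    # Depth and global metric scale factors.
    pts_cam = rays_cam * (metric_scale * depth)[..., None]
    # Pose factor: map camera-frame points to world space.
    R, t = world_from_cam[:3, :3], world_from_cam[:3, 3]
    return pts_cam @ R.T + t

# Toy example: 2x2 image, identity pose, flat depth of 2 m.
K = np.array([[100.0, 0.0, 1.0], [0.0, 100.0, 1.0], [0.0, 0.0, 1.0]])
depth = np.full((2, 2), 2.0)
pts = unproject_pointmap(depth, K, np.eye(4))
```

With identity pose and constant depth, every point lands at z = 2, which makes the role of each factor easy to check in isolation.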
Make sure to stop by Sherwin's poster to learn more about camera control for video models!
📢Excited to be at #ICLR2025 for our paper: VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control Poster: Thu 3-5:30 PM (#134) Website: https://t.co/mQi8WDYLRp Code: https://t.co/DbGq102yY4 Also check out our #CVPR2025 follow-up AC3D: https://t.co/XWu78JWWMm
2 replies · 0 reposts · 14 likes
SOTA 3D Interactive Segmentation We had a lot of fun playing with the possibilities of our real-time 3D segmentation model. Blowing up furniture, rearranging interiors, and of course using the Dune thumpers to turn objects into sand.
Tired of staring at GS reconstructions? Check out our new method for 3D Interactive Segmentation💥 Easy3D: A Simple Yet Effective Method for 3D Interactive Segmentation Project: https://t.co/LiQD8uj8Tj Paper: https://t.co/HToqluFQpp 👇Real-time VR interaction on a GS scene👇
0 replies · 1 repost · 20 likes
Check out Tobias' great work leveraging generative priors to improve 3D reconstruction quality!
1/3 Introducing FlowR 🌸: Flowing from Sparse to Dense 3D Reconstructions We learn a direct mapping between incorrect renderings and their corresponding ground-truth images, augmenting scene captures with consistent novel, generated views to improve reconstruction quality.
0 replies · 5 reposts · 64 likes
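FlowR's "direct mapping between incorrect renderings and their corresponding ground-truth images" reads like a flow-matching setup: interpolate between the two, regress the velocity, then integrate from a rendering toward a corrected image. A generic numpy sketch under that assumption (the linear interpolant and the `flow_matching_target` / `euler_sample` names are illustrative, not FlowR's actual formulation):

```python
import numpy as np

def flow_matching_target(x_render, x_gt, t):
    """Linear interpolant between an imperfect rendering (t=0) and the
    ground-truth image (t=1); the regression target is the constant
    velocity pointing from rendering to ground truth."""
    x_t = (1.0 - t) * x_render + t * x_gt
    v_target = x_gt - x_render
    return x_t, v_target

def euler_sample(x_render, velocity_fn, steps=8):
    """Integrate a learned velocity field from the rendering toward a
    corrected image with plain Euler steps."""
    x, dt = x_render.copy(), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity_fn(x, i * dt)
    return x

# With the oracle (constant) velocity, Euler integration is exact.
x0 = np.zeros((2, 2))
x1 = np.ones((2, 2))
out = euler_sample(x0, lambda x, t: x1 - x0)
```

The toy run uses the oracle velocity so the integration provably reaches the target; in practice a network would replace `velocity_fn`.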
From image(s) to 3D scenes in SECONDS! Bolt3D ⚡️ uses a latent diffusion transformer to generate both image and geometry latents from which we can directly decode 3D Gaussians - no optimization needed.
⚡️ Introducing Bolt3D ⚡️ Bolt3D generates interactive 3D scenes in less than 7 seconds on a single GPU from one or more images. It features a latent diffusion model that *directly* generates 3D Gaussians of seen and unseen regions, without any test time optimization. 🧵👇 (1/9)
1 reply · 2 reposts · 33 likes
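The "directly decode 3D Gaussians, no optimization" step above amounts to reshaping a decoded feature map into per-pixel Gaussian parameters with suitable activations. A toy numpy sketch; the 14-channel layout and the activation choices are assumptions for illustration, not Bolt3D's actual decoder head:

```python
import numpy as np

def decode_gaussians(feat):
    """Split an (h, w, 14) feature map into per-pixel 3D Gaussian
    parameters: 3 mean + 3 log-scale + 4 quaternion + 1 opacity logit
    + 3 color channels (assumed layout)."""
    h, w, c = feat.shape
    assert c == 14
    means = feat[..., 0:3]
    scales = np.exp(feat[..., 3:6])                  # exp -> positive scales
    quats = feat[..., 6:10]
    quats = quats / np.linalg.norm(quats, axis=-1, keepdims=True)
    opacity = 1.0 / (1.0 + np.exp(-feat[..., 10]))   # sigmoid -> (0, 1)
    colors = feat[..., 11:14]
    return {"means": means, "scales": scales, "quats": quats,
            "opacity": opacity, "colors": colors}

# Decode a random feature map to valid (renderable-shaped) Gaussians.
rng = np.random.default_rng(0)
g = decode_gaussians(rng.normal(size=(4, 4, 14)))
```

The point of the sketch: once a latent diffusion model emits geometry features, turning them into Gaussians is a single cheap forward pass, which is why no test-time optimization is needed.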
Ever wondered how to integrate 3DGS into a pretrained video diffusion model? Check out our new approach, Generative Gaussian Splatting (GGS), that improves 3D-consistency in generated multi-view images. https://t.co/bQTZPegJhV
https://t.co/5ydd1vg0Qz
5 replies · 16 reposts · 75 likes
I'm excited to present "Fillerbuster: Multi-View Scene Completion for Casual Captures"! This is work with my amazing collaborators @Normanisation, @yash2kant, Vasu Agrawal, @MZollhoefer, @akanazawa, @c_richardt during my internship at Meta Reality Labs. https://t.co/rTr8UtA6tb
6 replies · 34 reposts · 140 likes
🚀 Introducing Pippo – our diffusion transformer pre-trained on 3B Human Images and post-trained with 400M high-res studio images! ✨Pippo can generate 1K resolution turnaround video from a single iPhone photo! 🧵👀 Full deep dive thread coming up next!
Meta presents: Pippo: High-Resolution Multi-View Humans from a Single Image. Generates 1K resolution, multi-view, studio-quality images from a single photo in one forward pass
5 replies · 36 reposts · 163 likes
Fillerbuster: Multi-View Scene Completion for Casual Captures @ethanjohnweber, @Normanisation, @yash2kant, Vasu Agrawal, @MZollhoefer, @akanazawa, @c_richardt tl;dr: latent DiT conditioned on known images and poses (raymaps)->recover unknown content https://t.co/c8l263FNjW
1 reply · 16 reposts · 90 likes
Check out Manuel's great paper on 3D scene generation from a single image by joint shape and pose diffusion!
Super happy to present our #NeurIPS paper "Coherent 3D Scene Diffusion From a Single RGB Image" in Vancouver. Come to our poster #2804 on Wednesday 11am - 2pm in East Exhibit Hall A-C and say hi if you want to learn more about 3D Scene
0 replies · 1 repost · 21 likes
📢Now Hiring: PhD Interns at @Meta! 🔍 We're on the hunt for exceptional candidates to intern with our team and work on cutting-edge #DiffusionModels at Meta - GenAI. 💡 Apply Now:
2 replies · 22 reposts · 152 likes
Article: https://t.co/u93jgiLHkU Project page: https://t.co/wuWaI3SlIs Post: https://t.co/zkbiIlRmOA with @yawarnihal, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder from @RealityLabs and @MattNiessner
Excited to share @Normanisation's DiffRF: Rendering-guided 3D Radiance Field Diffusion #CVPR2023 highlight! 2D diffusion is great, but what about 3D? We show radiance field diffusion with rendering guidance for consistent and editable 3D synthesis. Vid: https://t.co/pST8s89RAo
0 replies · 0 reposts · 5 likes
Very honored to receive the MSDI Best Paper Award 2024 for DiffRF! We show for the first time that diffusion models can effectively synthesize 3D radiance fields, accelerating today's 3D asset generation. Grateful for my amazing collaborators and @MattNiessner as my supervisor!
2 replies · 2 reposts · 31 likes
In L3DG, we introduce a novel 3D-VQ-VAE to encode 3D Gaussian Splats. This enables efficient diffusion training, yielding high-quality synthesis of entire rooms and intricate objects! Amazing work led by @barbara_roessle!
📢 L3DG: Latent 3D Gaussian Diffusion 📢 #SIGGRAPHAsia We propose a generative diffusion model for 3D Gaussians. Key is a learnt latent space which substantially reduces the complexity of the diffusion process, thus facilitating room-scale scene
0 replies · 3 reposts · 56 likes
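The 3D-VQ-VAE bottleneck mentioned above relies on the standard vector-quantization step: replace each latent vector with its nearest codebook entry and keep the discrete index, which is what makes the latent space compact enough for efficient diffusion. A generic numpy sketch of that step (toy codebook; not L3DG's 3D architecture):

```python
import numpy as np

def vq(z, codebook):
    """Nearest-neighbor vector quantization as in a VQ-VAE bottleneck.
    z: (n, d) latent vectors; codebook: (k, d) learned code vectors.
    Returns the quantized latents and their discrete indices."""
    # Pairwise squared distances between each latent and each code.
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
    idx = d2.argmin(axis=1)        # discrete codes the prior can model
    return codebook[idx], idx

# Toy 2-D example: each latent snaps to its closest code.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 0.0]])
z = np.array([[0.1, -0.1], [0.9, 1.2]])
zq, idx = vq(z, codebook)
```

In training, a straight-through estimator and commitment loss would back-propagate through this non-differentiable lookup; the sketch shows only the forward quantization.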
A major step towards generalizable 3D reconstruction! Is this another proof that by leveraging tricks from large-context LLMs, we can move away from pure optimization once and for all?
Our Long-LRM enables large-scale feed-forward 3DGS reconstruction within 1.3 seconds! 🚀🚀 This is great work done by our intern @chenziwee and collaborators @HaoTan5 @KaiZhang9546 @Sai__Bi @fujun_luan @YicongHong @fuxinli2 More results:
0 replies · 0 reposts · 10 likes
Hyperscape: Photorealistic Replicas in VR Mark Zuckerberg just unveiled our team's latest project: https://t.co/r7NWaihk9l ✨ Explore digital replicas in VR - captured with a phone 👉 Try it on your Quest (US only): https://t.co/NmiZzudr0D
10 replies · 47 reposts · 281 likes