Quan Meng (@QTDSMQ)
Followers: 89 · Following: 212 · Media: 0 · Statuses: 75
Joined February 2016
Excited to share our latest #ICCV2025 work DiffuMatch: learning spectral diffusion priors for robust non-rigid shape matching! Great work led by Emery Pierson, w/ @angelaqdai and Maks Ovsjanikov. (1/n)
3
14
63
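For intuition only: a toy sketch of what a learned spectral prior for shape matching could look like, assuming (as is common in functional-map pipelines) that the object being denoised is a small k x k functional-map matrix. The network, dimensions, and update step below are illustrative placeholders, not DiffuMatch's actual model.

```python
# Hypothetical sketch: a tiny denoiser over k x k functional-map matrices,
# illustrating the general idea of a learned spectral prior for shape matching.
import torch
import torch.nn as nn

K = 30  # number of Laplace-Beltrami eigenfunctions per shape (assumption)

class FmapDenoiser(nn.Module):
    """Predicts the noise added to a functional-map matrix C at timestep t."""
    def __init__(self, k=K, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(k * k + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k * k),
        )

    def forward(self, C_noisy, t):
        x = torch.cat([C_noisy.flatten(1), t[:, None].float()], dim=1)
        return self.net(x).view(-1, K, K)

# One reverse-diffusion-style refinement step on a noisy map estimate.
model = FmapDenoiser()
C_noisy = torch.randn(1, K, K)          # e.g. a map initialized from descriptors
t = torch.tensor([10])
eps_hat = model(C_noisy, t)
C_refined = C_noisy - 0.1 * eps_hat     # toy update; real samplers follow DDPM/DDIM schedules
```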
We've released the ScanNet++ Novel View Synthesis Benchmark for iPhone data! Test your models on RGBD video featuring real-world challenges like exposure changes & motion blur! Download the newest iPhone NVS test split and submit your results! https://t.co/hLnFwifvTL
1
42
191
We have PhD openings in my lab at TU Munich! Explore 3D/4D reconstruction & generation, semantic & functional understanding, and more - at the intersection of graphics, vision, and machine learning. PhDs are 100% E13 positions. Apply: https://t.co/A0KhPKmSbD or via ELLIS!
1
42
307
All six of our submissions were accepted to #NeurIPS2025! Awesome works on Gaussian Splatting Primitives, Lighting Estimation, Texturing, and much more GenAI :) Great work by @Peter4AI, @YujinChen_cv, @ZheningHuang, @jiapeng_tang, @nicolasvluetzow, @jnthnschmdt
7
25
252
Can we use video diffusion to generate 3D scenes? WorldExplorer (#SIGGRAPHAsia25) creates fully-navigable scenes via autoregressive video generation. Text input -> 3DGS scene output & interactive rendering! https://t.co/HBdrmU4Oqq https://t.co/AQr0p4uWBZ
7
74
374
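A hedged sketch of the autoregressive loop the tweet describes: generate video chunks along camera trajectories, condition each chunk on the latest frames, then fit a 3DGS scene to all generated views. Every function here is a placeholder stub, not the WorldExplorer API.

```python
# Hypothetical control loop: build a navigable scene by autoregressively generating
# video along camera trajectories, then fitting 3D Gaussian Splatting to the frames.
import torch

def generate_video_chunk(context_frames, camera_path):
    # stand-in for a video diffusion model conditioned on past frames + cameras
    return torch.rand(len(camera_path), 3, 256, 256)

def fit_gaussians(frames, cameras):
    # stand-in for 3DGS optimization on the generated frames
    return {"means": torch.randn(10_000, 3), "colors": torch.rand(10_000, 3)}

context = torch.rand(8, 3, 256, 256)          # frames seeded from the text prompt
all_frames, all_cams = [], []
for camera_path in [torch.eye(4).repeat(16, 1, 1) for _ in range(4)]:  # 4 exploration trajectories
    chunk = generate_video_chunk(context, camera_path)
    all_frames.append(chunk)
    all_cams.append(camera_path)
    context = chunk[-8:]                       # autoregressive conditioning on the newest frames

scene = fit_gaussians(torch.cat(all_frames), torch.cat(all_cams))
```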
Excited to join @uvadatascience as an Assistant Professor! Deeply grateful to my advisors @angelaqdai, Maks Ovsjanikov, Hongbo Fu, and Chiew-Lan Tai for their unwavering support. We are recruiting PhD students and postdocs to work on #SpatialAI. Flyer below with details!
5
18
136
ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions (#SIGGRAPH). We reconstruct ultra-high-fidelity, photorealistic 3D avatars capable of generating realistic and high-quality animations, including freckles and other fine facial details. We operate on patch-based
2
37
181
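A minimal illustration of patch-level control of a Gaussian avatar, under the assumption that localized expression codes are decoded into per-patch Gaussian parameters. Patch count, code dimension, and the decoder below are hypothetical, not the paper's architecture.

```python
# Minimal sketch (not ScaffoldAvatar's model): per-patch expression codes decoded
# into Gaussian parameters, illustrating the idea of patch-level control.
import torch
import torch.nn as nn

NUM_PATCHES, CODE_DIM, GAUSS_PER_PATCH = 128, 32, 64   # assumed sizes

class PatchDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(CODE_DIM, 256), nn.ReLU(),
            nn.Linear(256, GAUSS_PER_PATCH * (3 + 3)),  # per Gaussian: xyz offset + RGB
        )

    def forward(self, patch_codes):                      # (NUM_PATCHES, CODE_DIM)
        out = self.mlp(patch_codes).view(NUM_PATCHES, GAUSS_PER_PATCH, 6)
        return out[..., :3], out[..., 3:].sigmoid()      # offsets, colors

decoder = PatchDecoder()
patch_codes = torch.randn(NUM_PATCHES, CODE_DIM)         # localized expression codes
offsets, colors = decoder(patch_codes)
```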
LiteReality: Graphics-Ready 3D Scene Reconstruction from RGB-D Scans -> converts RGB-D scans into compact, realistic, and interactive 3D scenes - featuring high-quality meshes, PBR materials, and articulated objects. https://t.co/w8hixxH0m2 https://t.co/e7gbHJAPMD
5
66
321
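To make "graphics-ready" concrete, here is a small, hypothetical data structure of the kind the tweet describes: objects with meshes, PBR materials, and articulated parts. Field names, paths, and values are illustrative assumptions, not LiteReality's actual output format.

```python
# Illustrative only: a minimal "graphics-ready scene" representation
# (meshes + PBR materials + articulation), not LiteReality's real format.
from dataclasses import dataclass, field

@dataclass
class PBRMaterial:
    albedo: tuple = (0.8, 0.8, 0.8)
    roughness: float = 0.5
    metallic: float = 0.0

@dataclass
class ArticulatedPart:
    mesh_path: str
    joint_type: str                       # e.g. "revolute" (door) or "prismatic" (drawer)
    joint_axis: tuple = (0.0, 0.0, 1.0)
    joint_limits: tuple = (0.0, 1.57)

@dataclass
class SceneObject:
    mesh_path: str
    material: PBRMaterial
    parts: list = field(default_factory=list)   # articulated sub-parts

# A reconstructed cabinet with an openable door (hypothetical paths/values).
cabinet = SceneObject(
    mesh_path="cabinet_body.obj",
    material=PBRMaterial(albedo=(0.6, 0.4, 0.3), roughness=0.7),
    parts=[ArticulatedPart("cabinet_door.obj", joint_type="revolute")],
)
```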
Check out our #ICCV2025 work on functional 3D scan editing, learning to optimize, multi-level 3D captioning, interactive mesh editing, audio-driven avatars, & shape matching! Congrats @ElBoudjogh24002, @liuyuehcheng, @chandan__yes, @hcxrli, @shivangi2201, Emery for amazing work!
2
26
122
Seven papers accepted at #ICCV2025! Exciting topics: lots of generative AI using transformers, diffusion, 3DGS, etc. focusing on image synthesis, geometry generation, avatars, and much more - check it out! So proud of everyone involved - let's go! https://t.co/Rd4vDGiG5p
3
30
200
Want to work on cutting-edge #AI? We have several fully-funded PhD & PostDoc openings in our Visual Computing & AI Lab in Munich! Apply here: https://t.co/HWQhI69CZ0 Topics have a strong focus on Generative AI, 3DGS, NeRFs, Diffusion, LLMs, etc.
3
43
266
Presenting PrEditor3D at #CVPR2025! If you'd like to learn more about our work and discuss 3D generation/editing, come visit our poster on Friday, June 13th, in ExHall D between 10:30-12:30 (Poster #44). Project Page: https://t.co/PhasNl5EV8
0
10
33
We'll be presenting MeshArt tomorrow morning (Friday 13.06) in the poster session at ExHall D, Poster #42, from 10:30-12:30. Come chat about articulated 3D mesh generation or any 3D generative stuff! Project page: https://t.co/yHqazRNydx
3
27
182
Join us tomorrow for a chat about 3D Scene Generation with Diffusion Models and more! Stop by our LT3SD poster and say hello! Drop me a DM if you'd like to meet! #CVPR2025 Fri 13 Jun, 11:30-13:30, ExHall D Poster #45. Check more details in
How can we generate high-fidelity, complex 3D scenes? @QTDSMQ's LT3SD decomposes 3D scenes into latent tree representations, with diffusion on the latent trees enabling seamless infinite 3D scene synthesis! w/ @craigleili, @MattNiessner
https://t.co/wv9bIhkkYi
1
7
23
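A toy sketch of the latent-tree idea from the quoted LT3SD tweet: decompose a scene volume into a coarse-to-fine hierarchy and generate it level by level, each level conditioned on the coarser one. The pooling-based decomposition and the "denoiser" below are placeholders for the paper's learned components.

```python
# Toy sketch of the latent-tree idea: a 3D scene volume as a coarse-to-fine
# hierarchy of latents, generated level by level. Shapes and models are placeholders.
import torch
import torch.nn.functional as F

def build_latent_tree(scene_grid, levels=3):
    """Decompose a dense scene volume into a coarse-to-fine pyramid of latents."""
    tree = [scene_grid]
    for _ in range(levels - 1):
        tree.append(F.avg_pool3d(tree[-1], kernel_size=2))
    return tree[::-1]                      # coarsest level first

def denoise_level(noisy, coarser):
    # stand-in for a diffusion model conditioned on the coarser level
    cond = F.interpolate(coarser, size=noisy.shape[-3:], mode="trilinear")
    return 0.5 * noisy + 0.5 * cond        # toy "denoising" step

# Training side: encode a scene into its latent pyramid.
tree = build_latent_tree(torch.rand(1, 1, 32, 32, 32))

# Generation side: start from noise at the coarsest level, refine level by level.
scene = torch.randn(1, 1, 8, 8, 8)
for res in (16, 32):
    noisy = torch.randn(1, 1, res, res, res)
    scene = denoise_level(noisy, scene)
```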
BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading. We propose a hybrid neural shading scheme for creating intrinsically decomposed 3DGS head avatars that allow real-time relighting and animation. https://t.co/OLBbYaTO3F https://t.co/8JnjXmlVhA
3
76
385
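A minimal sketch of what hybrid shading of intrinsically decomposed Gaussians could look like: an analytic Lambertian term from per-Gaussian albedo and normals, plus a small neural residual for everything the analytic term misses. This is an illustration under assumed per-Gaussian attributes, not the paper's shading model.

```python
# Minimal "hybrid shading" sketch: analytic diffuse term + neural residual,
# applied per Gaussian. Illustration only, not BecomingLit's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

N = 100_000                                   # number of Gaussians
albedo  = torch.rand(N, 3)                    # intrinsic per-Gaussian albedo
normals = F.normalize(torch.randn(N, 3), dim=-1)
light_dir = torch.tensor([0.0, 0.0, 1.0])

# Analytic Lambertian term.
diffuse = albedo * (normals @ light_dir).clamp(min=0.0)[:, None]

# Neural residual for effects the analytic term misses (specular, subsurface, ...).
residual_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))
residual = residual_net(torch.cat([normals, light_dir.expand(N, 3)], dim=-1))

radiance = (diffuse + residual).clamp(0.0, 1.0)   # per-Gaussian shaded color
```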
BIG NEWS: Super Excited to Announce Spatial AI! We're building Spatial Foundation Models - a new paradigm of generative AI that reasons about space and time! Really stoked about our world-class team - it's gonna be mind-boggling!
Announcing our $13M funding round to build the next generation of AI: Spatial Foundation Models that can generate entire 3D environments anchored in space & time. Interested? Join our world-class team: https://t.co/U0JNkNwp3s
#GenAI #3DAI
29
71
535
QuickSplat: Fast 3D Surface Reconstruction via Learned Gaussian Initialization. @liuyuehcheng learns 2DGS initialization, densification, and optimization priors from ScanNet++ => fast & accurate reconstruction! Project: https://t.co/mDgQxmhqkF
3
57
239
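A hedged sketch of the learned-optimization idea: instead of thousands of per-scene gradient-descent iterations, a network predicts splat parameter updates directly. The splat parameterization and update network below are assumptions for illustration, not QuickSplat's actual design.

```python
# Illustrative sketch: replace per-scene gradient descent with a learned update
# network that predicts Gaussian/splat parameter changes. Shapes are assumptions.
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """Predicts parameter updates for a set of splats from their current state."""
    def __init__(self, splat_dim=14):           # e.g. position, scale, rotation, opacity, color
        super().__init__()
        self.net = nn.Sequential(nn.Linear(splat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, splat_dim))

    def forward(self, splats):
        return splats + self.net(splats)         # one learned "optimization" step

splats = torch.randn(50_000, 14)                 # initialization (e.g. predicted from scan points)
opt = LearnedOptimizer()
for _ in range(3):                               # a few learned steps instead of thousands of SGD iters
    splats = opt(splats)
```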
ScanEdit: Hierarchically-Guided Functional 3D Scan Editing. Edit complex, real-world 3D scans with text -- @ElBoudjogh24002 combines LLM reasoning with geometric optimization to produce physically plausible, instruction-aligned scene edits. Check it out: https://t.co/oKssfeKQYd
0
39
186
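A conceptual sketch of the two-stage recipe the tweet describes: an LLM turns the text instruction into a structured edit plan, and a geometric optimizer realizes the plan as object transformations. The plan format and the single distance energy below are hypothetical simplifications, not ScanEdit's actual pipeline.

```python
# Conceptual two-stage sketch: LLM-produced edit plan + geometric optimization.
# The plan schema and energy term are hypothetical.
import torch

# Stage 1 (assumed LLM output): a structured plan for "move the chair next to the desk".
plan = [{"object": "chair_03", "relation": "next_to", "anchor": "desk_01", "distance": 0.6}]

# Stage 2: simple geometric optimization satisfying the planned relation.
chair_pos = torch.tensor([2.0, 3.0, 0.0], requires_grad=True)
desk_pos  = torch.tensor([0.0, 0.0, 0.0])
optimizer = torch.optim.Adam([chair_pos], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    dist = (chair_pos - desk_pos).norm()
    loss = (dist - plan[0]["distance"]) ** 2      # "next_to" as a target-distance energy
    loss.backward()
    optimizer.step()
```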
How much 3D do visual foundation models (VFMs) know? Previous work requires 3D data for probing - expensive to collect! #Feat2GS @CVPR 2025 - our idea is to read out 3D Gaussians from VFM features, thus probing 3D with novel view synthesis. Page: https://t.co/ArpAbYKn33
4
33
261
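A minimal sketch of the probing idea: a lightweight readout maps frozen VFM features to per-token Gaussian parameters, and novel-view synthesis quality then scores how much 3D the features encode. Feature extraction and rendering are stubbed out below; the dimensions and the readout head are assumptions, not Feat2GS's exact setup.

```python
# Probing sketch: frozen VFM features -> linear readout -> Gaussians -> NVS quality.
# Only the readout is concrete; feature extraction and rendering are stubs.
import torch
import torch.nn as nn

FEAT_DIM, GAUSS_DIM = 768, 14        # e.g. ViT feature dim; Gaussian = xyz, scale, rot, opacity, rgb

readout = nn.Linear(FEAT_DIM, GAUSS_DIM)        # the only trainable part of the probe

features = torch.randn(1, 32 * 32, FEAT_DIM)    # frozen VFM features for one image (stub)
gaussians = readout(features)                   # one Gaussian per feature token

def render(gaussians, camera):                  # stand-in for a differentiable 3DGS renderer
    return torch.rand(3, 256, 256)

novel_view = render(gaussians, camera=torch.eye(4))
target     = torch.rand(3, 256, 256)            # held-out ground-truth view
psnr = 10 * torch.log10(1.0 / ((novel_view - target) ** 2).mean())   # higher = more 3D-aware
```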