Yehonathan Litman (@yehonation)
Ph.D. @CMU_Robotics | @NSF GRFP | Prev @AIatMeta, @RealityLabs
Pittsburgh, PA · Joined June 2023
145 Followers · 500 Following · 6 Media · 69 Statuses
#ICCV2025 Introducing💡LightSwitch💡- A multi-view material-relighting diffusion pipeline that directly and efficiently relights any number of input images to a target lighting & does 3D asset relighting with gaussian splatting! 🧵
💬 1 · 🔁 32 · ❤️ 146
Excited to release our new preprint - we introduce Adaptive Patch Transformers (APT), a method to speed up vision transformers by using multiple patch sizes within the same image!
💬 10 · 🔁 29 · ❤️ 228
🌲Introducing our paper “A Generalizable Light Transport 3D Embedding for Global Illumination” https://t.co/qW04w1NdLU. Just as Transformers learn long-range relationships between words or pixels, our new paper shows they can also learn how light interacts and bounces around a 3D scene.
💬 4 · 🔁 33 · ❤️ 126
I’ll be presenting our ICCV poster tomorrow from 2:45–4:45 at board #293! Come see a fast and effective relighting diffusion method that outperforms SOTA inverse renderers for any # of views 🤩
↪ Quoting the #ICCV2025 LightSwitch announcement above.
💬 0 · 🔁 0 · ❤️ 12
I’m at ICCV! If you’re around and wanna talk about research hmu 🤙
💬 0 · 🔁 0 · ❤️ 8
Introducing Kaleido💮 from @AIatMeta — a universal generative neural rendering engine for photorealistic, unified object and scene view synthesis. Kaleido is built on a simple but powerful design philosophy: 3D perception is a form of visual common sense. Following this idea, …
💬 4 · 🔁 32 · ❤️ 223
Today was my last day interning @RealityLabs. I had an amazing time in Redmond working on generative models for augmented reality with super cool people! I’ll have a 3-day break before coming back to Meta as a visiting researcher lol. Onwards to the next adventure 🌟
💬 1 · 🔁 0 · ❤️ 15
Introducing: Hyperscape Capture 📷 Last year we showed the world's highest-quality Gaussian Splatting and made GS viewable in VR for the first time. Now, capture your own Hyperscapes directly from your Quest headset in only 5 minutes of walking around. https://t.co/wlHmtRiANy
↪ Quoting: Hyperscape: The future of VR and the Metaverse. Excited that Zuckerberg @finkd announced what I have been working on at Connect. Hyperscape enables people to create high-fidelity replicas of physical spaces, and embody them in VR. Check out the demo app: https://t.co/TcRRUfymoc
💬 41 · 🔁 283 · ❤️ 2K
SGS-1 is akin to nano banana or runway alpha but for 3D - it targets precise 3D creation and editing via CAD. IMO this is very promising in comparison to concurrent 3D generative creation and editing applications that haven’t seen wide adoption (remember DreamFusion?)
↪ Quoting: 1/ We are excited to announce SGS-1, a SOTA foundation model for physical engineering design. SGS-1 enables the creation of manufacturable CAD geometry for real engineering workflows. This example shows SGS-1 in Fusion360 CAD software creating a bracket for a roller assembly.
💬 2 · 🔁 0 · ❤️ 7
💡 Introducing LuxDiT: a diffusion transformer (DiT) that estimates realistic scene lighting from a single image or video. It produces accurate HDR environment maps, addressing a long-standing challenge in computer vision. 🔗Paper: https://t.co/6cW6WlREBl
💬 3 · 🔁 58 · ❤️ 275
DINOv3 seems very good at matching objects across environments as well
↪ Quoting: Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks.
💬 14 · 🔁 72 · ❤️ 845
Wow, this makes Gradio demos so much easier to build 💯
💬 3 · 🔁 1 · ❤️ 13
Introducing DINOv3 🦕🦕🦕 A SotA-enabling vision foundation model, trained with pure self-supervised learning (SSL) at scale. High quality dense features, combining unprecedented semantic and geometric scene understanding. Three reasons why this matters…
💬 12 · 🔁 141 · ❤️ 1K
Thanks @_akhaliq for sharing our work 😊 For more details check out our website https://t.co/CTDjP41d6c Code at github.com/yehonathanlitman/LightSwitch ([ICCV 2025] LightSwitch: Multi-view Relighting with Material-guided Diffusion)
💬 0 · 🔁 0 · ❤️ 1
Check out our website for more results and visualizations! This work was advised by Fernando & @shubhtuls :) 🔗Website: https://t.co/CTDjP41d6c 📄Paper: https://t.co/v66tyAnqjm 💻Code & Weights: https://t.co/y9mpyYZhNn 🤗HuggingFace: huggingface.co
💬 0 · 🔁 0 · ❤️ 9
As a result, LightSwitch can relight hundreds of real or synthetic images at up to 2K resolution in minutes or even seconds, beating or matching inverse-rendering and direct relighting baselines that either require 10+ hours per scene or produce inconsistent, inaccurate relightings.
💬 1 · 🔁 0 · ❤️ 4
This enables 3D asset relighting with a 3DGS optimized on the original images, by either: (1) adding novel synthesized views into the input image set before diffusion, or (2) freezing the 3DGS positions and optimizing only its appearance on the relit inputs.
💬 1 · 🔁 0 · ❤️ 3
We extend the model to relight an arbitrary number of images and scale with inference-time compute by shuffling latent sets at each denoising step—approximating full relighting attention without attending to all views at once and exhausting memory!
💬 1 · 🔁 0 · ❤️ 3
Inferring intrinsic properties provides valuable context for relighting data of diverse material composition, and doing so in a multi-view setting enables high-quality, consistent relighting.
💬 1 · 🔁 0 · ❤️ 4