Yehonathan Litman Profile

Yehonathan Litman
@yehonation

Followers: 145 · Following: 500 · Media: 6 · Statuses: 69

Ph.D. @CMU_Robotics | @NSF GRFP | Prev @AIatMeta, @RealityLabs

Pittsburgh, PA
Joined June 2023
@rchoudhury997
Rohan Choudhury
24 days
Excited to release our new preprint - we introduce Adaptive Patch Transformers (APT), a method to speed up vision transformers by using multiple patch sizes within the same image!
10
29
228
@_bingxu
Bing Xu
25 days
🌲Introducing our paper A generalizable light transport 3D embedding for Global Illumination https://t.co/qW04w1NdLU. Just as Transformers learn long-range relationships between words or pixels, our new paper shows they can also learn how light interacts and bounces around a 3D…
4
33
126
@yehonation
Yehonathan Litman
25 days
I’ll be presenting our ICCV poster tomorrow from 2:45-4:45 #293! Come see a fast and effective relighting diffusion method that outperforms SOTA inverse renderers for any # of views 🤩
@yehonation
Yehonathan Litman
3 months
#ICCV2025 Introducing💡LightSwitch💡- A multi-view material-relighting diffusion pipeline that directly and efficiently relights any number of input images to a target lighting & does 3D asset relighting with Gaussian splatting! 🧵
0
0
12
@yehonation
Yehonathan Litman
28 days
I’m at ICCV! If you’re around and wanna talk about research, hmu 🤙
0
0
8
@liu_shikun
Shikun Liu
1 month
Introducing Kaleido💮 from @AIatMeta — a universal generative neural rendering engine for photorealistic, unified object and scene view synthesis. Kaleido is built on a simple but powerful design philosophy: 3D perception is a form of visual common sense. Following this idea…
4
32
223
@yehonation
Yehonathan Litman
2 months
Today was my last day interning @RealityLabs; I had an amazing time in Redmond working on generative models for augmented reality with super cool people! I’ll have a 3-day break before coming back to Meta as a visiting researcher lol. Onwards to the next adventure 🌟
1
0
15
@JonathonLuiten
Jonathon Luiten
2 months
Introducing: Hyperscape Capture 📷 Last year we showed the world's highest quality Gaussian Splatting, and the first time GS was viewable in VR. Now, capture your own Hyperscapes, directly from your Quest headset in only 5 minutes of walking around. https://t.co/wlHmtRiANy
@JonathonLuiten
Jonathon Luiten
1 year
Hyperscape: The future of VR and the Metaverse Excited that Zuckerberg @finkd announced what I have been working on at Connect. Hyperscape enables people to create high fidelity replicas of physical spaces, and embody them in VR. Check out the demo app: https://t.co/TcRRUfymoc
41
283
2K
@yehonation
Yehonathan Litman
2 months
SGS-1 is akin to nano banana or runway alpha but for 3D: it targets precise 3D creation and editing via CAD. IMO this is very promising compared to concurrent 3D generative creation and editing applications that haven’t seen wide adoption (remember DreamFusion?)
@spectral_hq
Spectral Labs
2 months
1/ We are excited to announce SGS-1, a SOTA foundation model for physical engineering design. SGS-1 enables the creation of manufacturable CAD geometry for real engineering workflows. This example shows SGS-1 in Fusion360 CAD software creating a bracket for a roller assembly.
2
0
7
@RfLiang
Ruofan Liang
2 months
💡 Introducing LuxDiT: a diffusion transformer (DiT) that estimates realistic scene lighting from a single image or video. It produces accurate HDR environment maps, addressing a long-standing challenge in computer vision. 🔗Paper: https://t.co/6cW6WlREBl
3
58
275
@YuXiang_IRVL
Yu Xiang
3 months
DINOv3 seems very good at matching objects across environments as well
@AIatMeta
AI at Meta
3 months
Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense…
14
72
845
@parastooabtahi
Parastoo Abtahi
3 months
In VR, users can experience “magical” interactions, such as moving distant virtual objects with the Go-Go technique. How might we similarly extend people’s abilities in the physical world? 🪄 Excited to share Reality Promises, our #UIST2025 paper, led by the amazing @m0k4r1
4
19
115
@yehonation
Yehonathan Litman
3 months
Wow, this makes gradio demos so much easier to build 💯
@_akhaliq
AK
3 months
Anycoder one shotted a gradio lite + transformers.js app for gemma-3-270m-it
3
1
13
@maxseitzer
Max Seitzer
3 months
Introducing DINOv3 🦕🦕🦕 A SotA-enabling vision foundation model, trained with pure self-supervised learning (SSL) at scale. High quality dense features, combining unprecedented semantic and geometric scene understanding. Three reasons why this matters…
12
141
1K
@yehonation
Yehonathan Litman
3 months
Thanks @_akhaliq for sharing our work 😊 For more details check out our website https://t.co/CTDjP41d6c Code at github.com/yehonathanlitman/LightSwitch ([ICCV 2025] LightSwitch: Multi-view Relighting with Material-guided Diffusion)
@_akhaliq
AK
3 months
LightSwitch Multi-view Relighting with Material-guided Diffusion
0
0
1
@yehonation
Yehonathan Litman
3 months
#ICCV2025 Introducing💡LightSwitch💡- A multi-view material-relighting diffusion pipeline that directly and efficiently relights any number of input images to a target lighting & does 3D asset relighting with Gaussian splatting! 🧵
1
32
146
@yehonation
Yehonathan Litman
3 months
Check out our website for more results and visualizations! This work was done under my advisors Fernando & @shubhtuls :) 🔗Website: https://t.co/CTDjP41d6c 📄Paper: https://t.co/v66tyAnqjm 💻Code & Weights: https://t.co/y9mpyYZhNn 🤗HuggingFace: huggingface.co
0
0
9
@yehonation
Yehonathan Litman
3 months
As a result, LightSwitch can relight hundreds of real or synthetic images at up to 2K resolution in minutes or even seconds, beating or matching inverse-rendering and direct baselines that either require 10+ hours per scene or produce inconsistent, inaccurate relightings.
1
0
4
@yehonation
Yehonathan Litman
3 months
This enables 3D asset relighting with a 3DGS optimized on the original images, by either: (1) adding synthesized novel views to the input image set before diffusion, or (2) freezing the 3DGS positions and optimizing appearance only on the relit inputs (sketched below).
1
0
3
@yehonation
Yehonathan Litman
3 months
We extend the model to relight an arbitrary number of images and scale with inference-time compute by shuffling latent sets at each denoising step—approximating full relighting attention without attending to all views at once and exhausting memory!
1
0
3
@yehonation
Yehonathan Litman
3 months
Inferring intrinsic properties provides valuable context for relighting data of diverse material composition, and doing so in a multi-view setting enables high-quality, consistent relighting.
1
0
4