Ruofan Liang Profile
Ruofan Liang

@RfLiang

Followers: 105
Following: 242
Media: 9
Statuses: 33

Ruofan Liang, PhD Student @UofT

Joined August 2013
@RfLiang
Ruofan Liang
2 months
RT @twominutepapers: NVIDIA’s AI watched 150,000 videos… and learned to relight scenes incredibly well! No game engine. No 3D software. And….
0
12
0
@RfLiang
Ruofan Liang
2 months
Following DiffusionRenderer, we made another attempt at general-purpose relighting ✨. It can address some challenging rendering effects beyond DiffusionRenderer's capabilities. Please check @Kai__He's thread and webpage for more examples 🚀🚀🚀.
@Kai__He
Kai He
2 months
🚀 Introducing UniRelight, a general-purpose relighting framework powered by video diffusion models. 🌟 UniRelight jointly models the distribution of scene intrinsics and illumination, enabling high-quality relighting and intrinsic decomposition from a single image or video.
0
0
12
@RfLiang
Ruofan Liang
2 months
RT @sopharicks: Excited to host @ZGojcic talk next week on @nvidia DiffusionRenderer, a new technique for neural rendering. It approximate….
0
11
0
@RfLiang
Ruofan Liang
2 months
RT @zianwang97: 🚀 We just open-sourced Cosmos DiffusionRenderer! This major upgrade brings significantly improved video de-lighting and re….
0
122
0
@RfLiang
Ruofan Liang
2 months
Thanks for the invitation from @anagh_malik, DiffusionRenderer will also be presented in the poster session of the Pi3DVI Workshop ( tomorrow afternoon.
@RfLiang
Ruofan Liang
2 months
Excited to announce our code release for DiffusionRenderer! Hope you have fun with it 😎. Catch our oral and poster presentations this Saturday afternoon at #CVPR25. Please feel free to visit!
0
0
7
@RfLiang
Ruofan Liang
2 months
Excited to announce our code release for DiffusionRenderer! Hope you have fun with it 😎. Catch our oral and poster presentations this Saturday afternoon at #CVPR25. Please feel free to visit!
@zianwang97
Zian Wang
2 months
🚀 DiffusionRenderer is now open-source! Check out the code and model: . We will present at #CVPR2025 this Sunday, June 15:
🗣️ Oral Session 6A: 1:00–2:30 PM CDT, Karl Dean Grand Ballroom
🖼️ Poster: 4:00–6:00 PM CDT, ExHall D (Poster #29)
0
1
13
@RfLiang
Ruofan Liang
2 months
RT @JunGao33210520: This year, we have 3 papers in CVPR, discussing the connection between 3D and video models: GEN3C [Highlight] 3D groun….
0
13
0
@RfLiang
Ruofan Liang
3 months
RT @Dazitu_616: 📢 Introducing DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models. Compared to vanilla DPO,….
0
35
0
@RfLiang
Ruofan Liang
4 months
Please check @ZhiHaoLin16's weather magic 🤓.
@ZhiHaoLin16
Chih-Hao Lin
4 months
What if you could control the weather in any video — just like applying a filter? Meet WeatherWeaver, a video model for controllable synthesis and removal of diverse weather effects — such as 🌧️ rain, ☃️ snow, 🌁 fog, and ☁️ clouds — for any input video.
0
1
3
@RfLiang
Ruofan Liang
6 months
RT @xuanchi13: Thanks @_akhaliq for sharing our GEN3C! GEN3C can easily be applied to creating video/scene from a single image, sparse-view….
0
8
0
@RfLiang
Ruofan Liang
7 months
DiffusionRenderer reformulates the traditional rendering process as a generative process. I am proud to be one of the main contributors to this exciting project! Hope one day this technique can be applied to game and movie production 🤓.
@zianwang97
Zian Wang
7 months
🚀 Introducing DiffusionRenderer, a neural rendering engine powered by video diffusion models. 🎥 Estimates high-quality geometry and materials from videos, synthesizes photorealistic light transport, and enables relighting and material editing with realistic shadows and reflections.
0
7
32
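The thread above splits rendering into two learned stages: an inverse-rendering model that predicts per-frame G-buffers (normals, albedo, roughness, depth) from a video, and a forward-rendering model that synthesizes a relit video from those buffers plus a target environment map. Below is a minimal sketch of that data flow under assumed names; inverse_renderer, forward_renderer, and the placeholder predictions are hypothetical stand-ins, not the released DiffusionRenderer API.

# Hypothetical sketch of the two-stage idea described above (not the released code):
# stage 1 (inverse rendering) predicts per-frame G-buffers from a video clip,
# stage 2 (forward rendering) synthesizes a relit clip from the G-buffers plus
# a target environment map.
import torch

def inverse_renderer(video: torch.Tensor) -> dict:
    """Stand-in for the video-diffusion de-lighting model:
    (T, 3, H, W) video -> per-frame intrinsic buffers."""
    T, _, H, W = video.shape
    return {
        "normals":   torch.randn(T, 3, H, W),   # placeholder predictions
        "albedo":    torch.rand(T, 3, H, W),
        "roughness": torch.rand(T, 1, H, W),
        "depth":     torch.rand(T, 1, H, W),
    }

def forward_renderer(gbuffers: dict, env_map: torch.Tensor) -> torch.Tensor:
    """Stand-in for the video-diffusion relighting model:
    G-buffers + environment map -> relit video."""
    mean_light = env_map.mean()                 # crude stand-in for light transport
    return (gbuffers["albedo"] * mean_light).clamp(0.0, 1.0)

video = torch.rand(16, 3, 128, 128)             # input clip (T frames)
gbuffers = inverse_renderer(video)              # de-lighting / intrinsic decomposition
target_env = torch.rand(3, 64, 128)             # toy HDR environment map
relit = forward_renderer(gbuffers, target_env)  # clip re-rendered under the new light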
@RfLiang
Ruofan Liang
1 year
This project was done with an amazing team at @NVIDIAAI @UofT by @RfLiang @ZGojcic @merlin_ND @davidjesusacu @nanditav17 @FidlerSanja @zianwang97 . Find more details on our project page:
0
1
6
@RfLiang
Ruofan Liang
1 year
The application of DiPIR extends beyond lighting estimation – we show that the diffusion model can also propagate text conditioning to material editing (e.g., base color, roughness, and emission) or adjust tone-mapping curves.
1
1
3
@RfLiang
Ruofan Liang
1 year
Benefiting from the physically-based scene representation, we can consistently insert virtual objects across multiple viewpoints.
1
1
2
@RfLiang
Ruofan Liang
1 year
The optimized results can be used to create photorealistic editing, such as animating inserted objects with either a moving background or a moving object.
1
1
2
@RfLiang
Ruofan Liang
1 year
DiPIR uses a differentiable path tracer to simulate light interactions with inserted objects. The diffusion model is personalized based on input image and inserted asset. An adapted SDS provides gradient guidance, which is then backpropagated through the differentiable renderer.
1
1
3
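The tweet above describes DiPIR's optimization loop: render the edited image with a differentiable renderer, score it with a diffusion model through an adapted Score Distillation Sampling (SDS) signal, and backpropagate into physically-based scene attributes. Here is a minimal PyTorch sketch of that gradient flow; render_scene, noise_predictor, the toy noise schedule, and the choice of optimizing only an RGB light intensity are assumptions for illustration, not DiPIR's actual implementation.

# Hypothetical sketch of a DiPIR-style optimization loop (not the official code).
import torch

H = W = 64  # tiny resolution for the sketch

def render_scene(light_params: torch.Tensor) -> torch.Tensor:
    """Stand-in for a differentiable path tracer: the edited image is a
    differentiable function of the lighting parameters."""
    base = torch.rand(3, H, W)                  # placeholder background content
    return torch.sigmoid(light_params).view(3, 1, 1) * base

def noise_predictor(x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Stand-in for the personalized diffusion model's epsilon prediction."""
    return 0.1 * x_t                            # dummy; a real model is image/text conditioned

light_params = torch.zeros(3, requires_grad=True)    # e.g. RGB light intensity
optimizer = torch.optim.Adam([light_params], lr=1e-2)

for step in range(100):
    image = render_scene(light_params)

    # SDS: add noise at a random timestep, query the diffusion model,
    # and use (eps_pred - eps) as the gradient signal on the rendered image.
    t = torch.randint(1, 1000, (1,))
    alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2   # toy noise schedule
    eps = torch.randn_like(image)
    x_t = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * eps
    with torch.no_grad():
        eps_pred = noise_predictor(x_t, t)
    grad = eps_pred - eps                       # timestep weighting w(t) omitted

    optimizer.zero_grad()
    image.backward(gradient=grad)               # chain rule through the differentiable renderer
    optimizer.step()

In the paper the optimized attributes include environment lighting and tone-mapping, and the diffusion model is first personalized to the input image and inserted asset; this sketch only illustrates how the guidance signal reaches the scene parameters.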
@RfLiang
Ruofan Liang
1 year
🔍 In DiPIR, the diffusion model acts like a human evaluator. It takes the edited image as input and propagates the feedback signal to physically-based scene attributes via differentiable rendering, enabling end-to-end optimization for attributes such as light and tone-mapping.
1
1
4
@RfLiang
Ruofan Liang
1 year
🌟 Seamlessly insert 3D objects into any scene! 🚀 Introducing our #ECCV2024 work “DiPIR”, a method to recover lighting from a single image, enabling photorealistic virtual object compositing into indoor and outdoor scenes. Project page:
5
32
103
@RfLiang
Ruofan Liang
2 years
Our paper and code are available here: . I will also have an oral presentation and a poster session on Wednesday morning. I'm looking forward to exciting discussions in Paris! (4/n)
0
0
1
@RfLiang
Ruofan Liang
2 years
We also perform a second round of ray marching along surface-reflected camera rays to synthesize interreflections for glossy surfaces. (3/n)
1
0
1
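As a rough illustration of the second-round ray marching mentioned above: after the primary march finds the surface point and normal, the camera ray is reflected about the normal and marched again to gather the radiance that a glossy surface would reflect. The sketch below uses hypothetical names (march, scene_query) and a toy scene; it is not the project's code, only the general idea.

# Hypothetical sketch: second ray-marching pass along surface-reflected camera rays.
import numpy as np

def reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Reflect direction d about unit normal n: r = d - 2 (d.n) n."""
    return d - 2.0 * np.dot(d, n) * n

def scene_query(p: np.ndarray):
    """Stand-in for a neural field query returning (density, rgb) at point p."""
    density = np.exp(-np.linalg.norm(p) ** 2)   # toy blob of density
    rgb = 0.5 + 0.5 * np.tanh(p)                # toy view-independent color
    return density, rgb

def march(origin, direction, n_steps=64, step=0.05):
    """Simple volume ray marching with alpha compositing."""
    color = np.zeros(3)
    transmittance = 1.0
    for i in range(n_steps):
        p = origin + (i + 0.5) * step * direction
        density, rgb = scene_query(p)
        alpha = 1.0 - np.exp(-density * step)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color, transmittance

# First round: march the camera ray to get the primary surface color.
cam_origin = np.array([0.0, 0.0, -3.0])
cam_dir = np.array([0.0, 0.0, 1.0])
primary_color, _ = march(cam_origin, cam_dir)
hit_point = np.array([0.0, 0.0, -1.0])          # placeholder surface point for the toy scene
normal = np.array([0.0, 0.0, -1.0])             # placeholder surface normal

# Second round: march again along the surface-reflected camera ray so glossy
# surfaces pick up interreflections from the rest of the scene.
reflected_dir = reflect(cam_dir, normal)
reflection_color, _ = march(hit_point + 1e-3 * normal, reflected_dir)

glossiness = 0.3                                # placeholder material parameter
final_color = (1 - glossiness) * primary_color + glossiness * reflection_color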