Ning Yu

@realNingYu

890 Followers · 15 Following · 74 Media · 111 Statuses

Lead Research Scientist at Netflix Eyeline Studios. Ex-Salesforce. Ex-NVIDIA. Ex-Adobe. Joint PhD at UMD & MPI. Leading efforts in visual and multimodal GenAI.

Los Angeles, CA
Joined December 2019
@realNingYu
Ning Yu
26 days
🔊 #ICCV2025 acceptance: ⚡FlashDepth⚡ estimates accurate and consistent depth for 2K-resolution videos in a real-time (24 FPS) streaming fashion on a single A100 GPU. ✊ Kudos to the team effort led by our intern @gene_ch0u at @eyelinestudios. 👉 Join us to be the next one
@eyelinestudios
Eyeline Studios
26 days
The latest research paper from @eyelinestudios, FlashDepth, has been accepted to the International Conference on Computer Vision (#ICCV2025). Our model produces accurate and high-resolution depth maps from streaming videos in real time and is completely built on open-source…
1 reply · 14 reposts · 101 likes
@realNingYu
Ning Yu
1 month
Grateful to everyone who stopped by our oral presentation and posters during the whirlwind of #CVPR2025 — we know you had plenty of options! I'm at the @eyelinestudios Booth (1209) from now until 3pm today. Come say hi — I’d love to chat about our research philosophy and how it ties…
0 replies · 4 reposts · 51 likes
@realNingYu
Ning Yu
1 month
🌊Go-with-the-Flow🌊, our motion-controllable video diffusion model, has been selected as an Oral presentation at #CVPR2025 (top 0.8%). Come watch @RyanBurgert's presentation (happening in 5 min) and stop by our poster for live demos and Q&A. Our live UI is also online (done by…
@realNingYu
Ning Yu
4 months
The first project I led at Netflix Eyeline Studios is headed to #CVPR2025 with 5,5,4 review scores: 🌊Go-with-the-Flow🌊 warps noise for effortless motion control in video diffusion — no pipeline changes, same compute. Direct camera/object motion, transfer movement between…
0 replies · 2 reposts · 6 likes
@realNingYu
Ning Yu
1 month
Attending #CVPR2025 now in #Nashville — couldn’t wait to share what we’ve been working on at @eyelinestudios, and catch up with the research crowd! ⭐ My slots @ Eyeline booth #1209 with "Go-with-the-Flow" live demos with @pablosalamancal: 🗓️ June 13th Fri 2-4pm CDT. 🗓️ June 15th…
0 replies · 2 reposts · 8 likes
@realNingYu
Ning Yu
2 months
📚New survey out📚: “Survey of Video Diffusion Models: Foundations, Implementations, and Applications”. 85 pages, 25K words, 500 references. The most comprehensive, up-to-date, and fine-grained survey of this fast-evolving field to date, from core techniques to…
0 replies · 5 reposts · 13 likes
@realNingYu
Ning Yu
4 months
If this kind of work excites you, we’re hiring at @Scanline_VFX @eyelinestudios! Let’s build AI tools together for filmmaking.
Research intern (open to remote, not limited to summer):
Research scientist (open to remote):
jobs.lever.co
As a Research Scientist, you will develop new technologies to revolutionize live-action content creation and storytelling. You will conduct applied research in computer vision and computer graphics...
@realNingYu
Ning Yu
4 months
The first project I led at Netflix Eyeline Studios is headed to #CVPR2025 with 5,5,4 review scores: 🌊Go-with-the-Flow🌊 warps noise for effortless motion control in video diffusion — no pipeline changes, same compute. Direct camera/object motion, transfer movement between…
@Scanline_VFX
Scanline VFX - Powered by Netflix
4 months
Kudos to the research team at our sister company @eyelinestudios. Their latest research paper, 🌊Go-with-the-Flow🌊, will be presented at #CVPR2025! Based on their research, we believe this could allow artists in the future to leverage these new techniques to direct the motion…
1 reply · 22 reposts · 115 likes
@realNingYu
Ning Yu
4 months
Excited to share our recent research 💡Lux Post Facto💡 will be presented at #CVPR2025! We’ve developed a new method to relight portrait videos in the wild with cinematic quality and temporal consistency — all in post-production. Big shoutout to @myq_1997 and the amazing team at…
@Scanline_VFX
Scanline VFX - Powered by Netflix
4 months
Congratulations to the research team at our sister company @eyelinestudios on their latest research paper - “Lux Post Facto: Learning Portrait Performance Relighting with Conditional Video Diffusion and a Hybrid Dataset” - which will be presented at #CVPR2025 in Nashville.
0 replies · 2 reposts · 17 likes
@realNingYu
Ning Yu
4 months
Thrilled to present at #CVPR2025 the first reference-based 3D-aware image editing method, 🔥Triplane Editing🔥, enabling high-quality human/animal face editing, full-body virtual try-on, and more. Our method achieves state-of-the-art results over latent-, text-, and image-guided 2D/3D-aware GAN &…
0 replies · 0 reposts · 7 likes
@realNingYu
Ning Yu
7 months
🗓️Happening today in 11h! We are presenting 🚀T2Vs Meet VLMs🚀 at #NeurIPS2024 #vancouver, West Ballroom A-D #5101, Dec 12th 11am-2pm PST. Open-source data & benchmarks drive AI innovation. We introduce a compressive image+video dataset spanning 10 harmfulness concepts, powered by…
0 replies · 3 reposts · 7 likes
@realNingYu
Ning Yu
7 months
Happening today in 8h! We are presenting 🥷Shadowcast🥷 at #NeurIPS2024 #vancouver, East Exhibit Hall A-C #4401, Dec 11th 11am-2pm PST. Shadowcast is the first poisoning attack against #VLMs that generates coherent but mind-bending misinformation, underscoring the necessity for…
@Yuancheng_Xu0
Yuancheng Xu
8 months
Here for #NeurIPS2024! Excited to present Shadowcast, the first data poisoning attack on multi-modal LLMs. Let’s chat about AI safety, scaling inference-time compute, and the latest trends in video generation. Catch me at the conference or DM to connect! More info in the thread.
0 replies · 3 reposts · 15 likes
@realNingYu
Ning Yu
9 months
Open-source data & benchmarks drive AI innovation. We are thrilled to present 🚀T2Vs Meet VLMs🚀 at #neurips2024 D&B Track. We introduce a compressive image+video dataset spanning 10 harmfulness concepts, powered by advanced image/video diffusion models and vision-language agents
0 replies · 1 repost · 9 likes
@realNingYu
Ning Yu
9 months
Thrilled to present 🥷Shadowcast🥷 at #NeurIPS2024 #vancouver, the first poisoning attack against #VLMs that generates coherent but mind-bending misinformation, underscoring the necessity for responsible and trustworthy #VLM. With @Yuancheng_Xu0, Jiarui Yao, @ManliShu, …
@furongh
Furong Huang
1 year
🚨 Breaking Research Discovery! 🚨 Large Vision Language Models (#VLMs) amaze with coherence but hide risks. 🤯🗡️ 🔍 Meet "Shadowcast" 🥷: a stealthy, mind-bending AI data poisoning method. 🕵️‍♂️💻 Project page 🔗: #LLMs #DataSecurity. A 🧵 👇
0 replies · 3 reposts · 14 likes
@realNingYu
Ning Yu
9 months
Thrilled to present 💡DifFRelight💡 at #SIGGRAPHAsia2024, a novel-view facial dynamic relighting framework based on image diffusion editing, dynamic Gaussian Splatting conditioning, integrated lighting control, and our in-house light stage hardware. Kudos to the team: Mingming…
@maleewahaha
LiMa
10 months
We present a high-fidelity, personalized diffusion-based relighting model trained on OLAT data. With the help of GS, we can render photorealistic facial images under any lighting condition, from any view, and at any time.
1 reply · 20 reposts · 56 likes