Yash Kant
@yash2kant
Followers
919
Following
8K
Media
68
Statuses
1K
research @netflix-@eyelinestudios and phd @uoftcompsci // prev @meta @snap @georgiatech
Toronto, Ontario
Joined August 2011
Eyeline is thrilled to be back in Hawkins for the fifth and final season of @Stranger_Things. Watch the new trailer and get a first look at what's coming when Volume One arrives November 25 on @netflix.
0
2
2
We have internship openings at @eyelinestudios! If you work on video models / 3dv / human motion modeling, consider applying! (link in quote tweet or shoot us an email)
Happening now! @gene_ch0u, @wenqi_xian, and @realNingYu are presenting FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution, at #ICCV2025. [Poster] Today (Tue, Oct 21) | 3-5pm HST | Hawaii Convention Center ExHall II + Poster 433. To continue our
0
0
2
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!
87
112
859
Starting today, @Scanline_VFX and Eyeline Studios officially unite under a single name: Eyeline. Together, we're continuing our legacy of craft, innovation, and storytelling, turning "What if?" into "Why not?" Learn more about this exciting new era for our team:
0
2
4
Introducing MVP4D (SIGGRAPH Asia 2025)! MVP4D turns a single reference image into animatable 360-degree avatars! Project: https://t.co/6oTPEGqI5B Paper: https://t.co/lOSSjjUp8h Code releases Nov 15th
6
27
140
SceneComp @ ICCV 2025: Generative Scene Completion for Immersive Worlds. Reconstruct what you know AND generate what you don't! Meet our speakers @angelaqdai, @holynski_, @jampani_varun, @ZGojcic, @taiyasaki, Peter Kontschieder https://t.co/LvONYIK3dz
#ICCV2025
2
17
52
Ray 3 is finally out!!! So many months of hard work and dedication have gone into this launch!! So excited and proud of the @LumaLabsAI team ❤️❤️❤️
This is Ray3. The world's first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state-of-the-art physics and consistency. Available now for free in Dream Machine.
4
3
46
Generate persistent 3D worlds from a single image, bigger and better than ever! We're excited to share our latest results and invite you to try out our world generation model in a limited beta preview.
209
527
4K
Video diffusion models struggle beyond their training resolution: artifacts and repetition. CineScale solves this with a novel inference paradigm: dedicated variants for video architectures; extends T2I to T2V, I2V, and V2V; 8K images and 4K video, tuning-free or with minimal tuning.
@eyelinestudios and @NTUsg's latest research paper, CineScale, showcases a new method for creating higher-resolution image and video content with novel adaptations for a variety of visual generative model architectures. Unleash the resolution of text-to-video, image-to-video,
0
14
47
Suno 4.5 is quite impressive. Previously AI music was only ever interesting for the novelty. Now, I wouldn't blink if I heard one of these songs on a playlist. First generation I tried: Prompt: "Pop song about optimizing CUDA kernels for LLM training" https://t.co/p2ehQlpacr
8
9
213
My favorite #SIGGRAPH2025 session today: "Eigenanalysis in Computer Graphics" by Adam Bargteil & Marc Olano!
1
0
3
I am coming to #SIGGRAPH2025!! If you're around, let's meet!
1
0
6
Meet Qwen-Image: a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source. Key Highlights: SOTA text rendering, rivals GPT-4o in English, best-in-class for Chinese; In-pixel
188
673
4K
Introducing Modify with Instructions in Dream Machine. Use natural language to direct changes across VFX, advertising, film, and design workflows. Native object removal, swapping, virtual sets, character refinements, and restyle will roll out soon to all subscribers.
80
161
3K
Introducing Wan2.2: The World's First Open-Source MoE-Architecture Video Generation Model with Cinematic Control! Key Innovations: World's First Open-Source MoE Video Model: Our Mixture-of-Experts architecture scales model capacity without increasing computational
84
311
2K
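To unpack the capacity-vs-compute claim in the Wan2.2 tweet above, here is a minimal, hypothetical top-1 mixture-of-experts layer in PyTorch. This is an illustrative sketch of the general MoE idea, not Wan2.2's actual architecture: total parameters grow with the number of experts, but each token is routed to a single expert, so per-token compute stays roughly that of one FFN.

    import torch
    import torch.nn as nn

    class Top1MoE(nn.Module):
        """Toy MoE layer: n_experts x the parameters, ~1x the per-token FLOPs."""
        def __init__(self, dim: int, hidden: int, n_experts: int):
            super().__init__()
            self.router = nn.Linear(dim, n_experts)
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
                for _ in range(n_experts)
            )

        def forward(self, x):                  # x: (tokens, dim)
            scores = self.router(x)            # (tokens, n_experts)
            choice = scores.argmax(dim=-1)     # hard top-1 routing (real MoEs
                                               # weight by the gate score so the
                                               # router stays differentiable)
            out = torch.zeros_like(x)
            for i, expert in enumerate(self.experts):
                mask = choice == i
                if mask.any():                 # each token runs through ONE expert
                    out[mask] = expert(x[mask])
            return out

    moe = Top1MoE(dim=64, hidden=256, n_experts=8)
    print(moe(torch.randn(10, 64)).shape)      # torch.Size([10, 64])

Doubling n_experts here doubles the layer's parameter count while leaving the work done per token unchanged, which is the scaling property the tweet refers to.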
Very interesting: standard attention suffers vanishing gradients because most attention probabilities become very small after some training. LASER tackles this by pushing the attention operation into exponential space, i.e., exp_output = softmax(QK^T) exp(V). They don't seem to exaggerate on the performance
6
45
296
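To make the LASER formula above concrete, here is a minimal PyTorch sketch of single-head attention in exponential space: out = log(softmax(QK^T / sqrt(d)) @ exp(V)). This is my own illustrative code under assumed (batch, seq, dim) shapes, not the paper's reference implementation; the max-shift is the standard log-sum-exp trick that keeps exp(V) from overflowing.

    import torch
    import torch.nn.functional as F

    def laser_attention(q, k, v):
        """LASER-style attention sketch: attend over exp(V), then take the log."""
        d = q.shape[-1]
        # Standard attention weights; rows sum to 1.
        attn = F.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)   # (B, n, n)

        # log(A @ exp(V)) = m + log(A @ exp(V - m)), with m the per-column max of V,
        # so exp() is only ever applied to non-positive values.
        m = v.max(dim=-2, keepdim=True).values                       # (B, 1, dim)
        return m + torch.log(attn @ torch.exp(v - m))                # (B, n, dim)

    # Tiny usage example with random tensors (hypothetical shapes).
    q, k, v = (torch.randn(2, 8, 16) for _ in range(3))
    print(laser_attention(q, k, v).shape)                            # torch.Size([2, 8, 16])

Because the mixing happens over exp(V), gradients flowing back to V are reweighted by exp(V - m) relative to standard attention, which is the mechanism the tweet credits for avoiding the vanishing-gradient issue.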
Some samples of (positive) feedback from Scouts users this past week! We're continuing to let people in every day. Join the waitlist at yutori.com.
0
6
21