Yash Kant

@yash2kant

Followers 919 · Following 8K · Media 68 · Statuses 1K

research @netflix-@eyelinestudios and phd @uoftcompsci // prev @meta @snap @georgiatech

Toronto, Ontario
Joined August 2011
@eyelinestudios
Eyeline
4 days
Eyeline is thrilled to be back in Hawkins for the fifth and final season of @Stranger_Things. Watch the new trailer and get a first look at what's coming when Volume One arrives November 25 on @netflix.
0 replies · 2 reposts · 2 likes
@yash2kant
Yash Kant
17 days
We have internship openings at @eyelinestudios! If you work on video models / 3dv / human motion modeling, consider applying! (link in quote tweet or shoot us an email)
@realNingYu
Ning Yu (hiring interns)
17 days
📢 Happening now! @gene_ch0u, @wenqi_xian, and @realNingYu are presenting FlashDepth: Real-time Streaming Video Depth Estimation at 2K Resolution, at #ICCV2025. [Poster] 🗓️ Today (Tue, Oct 21) | 🕑 3-5pm HST | 📍 Hawaii Convention Center ExHall II + Poster 433 👋 To continue our
0 replies · 0 reposts · 2 likes
@mangahomanga
Homanga Bharadhwaj
18 days
I'll be joining the faculty @JohnsHopkins late next year as a tenure-track assistant professor in @JHUCompSci. Looking for PhD students to join me in tackling fun problems in robot manipulation, learning from human data, understanding+predicting physical interactions, and beyond!
87 replies · 112 reposts · 859 likes
@eyelinestudios
Eyeline
23 days
Starting today, @Scanline_VFX and Eyeline Studios officially unite under a single name: Eyeline. Together, we're continuing our legacy of craft, innovation, and storytelling, turning "What if?" into "Why not?" Learn more about this exciting new era for our team:
0 replies · 2 reposts · 4 likes
@taubnerfelix
Felix Taubner
23 days
๐ˆ๐ง๐ญ๐ซ๐จ๐๐ฎ๐œ๐ข๐ง๐  ๐Ÿ†๐Œ๐•๐๐Ÿ’๐ƒ๐Ÿ† (๐’๐ˆ๐†๐†๐‘๐€๐๐‡ ๐€๐ฌ๐ข๐š ๐Ÿ๐ŸŽ๐Ÿ๐Ÿ“) MVP4D turns a single reference image into animatable 360-degree avatars! Project: https://t.co/6oTPEGqI5B Paper: https://t.co/lOSSjjUp8h ๐Ÿ””Code releases Nov 15th
6 replies · 27 reposts · 140 likes
@ethanjohnweber
Ethan Weber
1 month
📢 SceneComp @ ICCV 2025 🏝️ 🌎 Generative Scene Completion for Immersive Worlds 🛠️ Reconstruct what you know AND 🪄 Generate what you don't! 🙌 Meet our speakers @angelaqdai, @holynski_, @jampani_varun, @ZGojcic, @taiyasaki, Peter Kontschieder https://t.co/LvONYIK3dz #ICCV2025
2 replies · 17 reposts · 52 likes
@_sam_sinha_
Samarth Sinha
2 months
Ray 3 is finally out!!! So many months of hard work and dedication have gone into this launch!! So excited and proud of the @LumaLabsAI team ❤️❤️❤️
@LumaLabsAI
Luma AI
2 months
This is Ray3. The world's first reasoning video model, and the first to generate studio-grade HDR. Now with an all-new Draft Mode for rapid iteration in creative workflows, and state-of-the-art physics and consistency. Available now for free in Dream Machine.
4 replies · 3 reposts · 46 likes
@theworldlabs
World Labs
2 months
Generate persistent 3D worlds from a single image, bigger and better than ever! We're excited to share our latest results and invite you to try out our world generation model in a limited beta preview.
209 replies · 527 reposts · 4K likes
@realNingYu
Ning Yu (hiring interns)
2 months
Video diffusion models struggle beyond training resolution → artifacts & repetition. 🎥CineScale🎥 solves this with a novel inference paradigm: ⚡ Dedicated variants for video architectures ⚡ Extends T2I to T2V & I2V & V2V ⚡ 8K images & 4K video, tuning-free/minimal tuning
@eyelinestudios
Eyeline
2 months
@eyelinestudios and @NTUsg's latest research paper, CineScale, showcases a new method for creating higher-resolution image and video content with novel adaptations for a variety of visual generative model architectures. Unleash the resolution of text-to-video, image-to-video,
0 replies · 14 reposts · 47 likes
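For context on the tuning-free angle: the tweets above don't spell out CineScale's actual algorithm, so the sketch below only illustrates the generic upsample-renoise-refine baseline that tuning-free high-resolution diffusion methods commonly build on. The `denoise` stub, the linear noising schedule, and all tensor shapes are illustrative assumptions, not CineScale's method.

```python
import torch
import torch.nn.functional as F

def upscale_and_refine(denoise, low_res, scale=2, strength=0.5, steps=30):
    """Generic upsample-renoise-refine loop (NOT CineScale's actual method).

    A common tuning-free baseline for sampling past a diffusion model's
    training resolution: generate at the native resolution first, upsample,
    partially re-noise, then run only the remaining denoising steps so the
    model adds high-frequency detail without re-deciding the global layout
    (re-deciding layout at unseen resolutions is where the repetition
    artifacts mentioned in the tweet tend to come from).

    `denoise(x, t)` stands in for one reverse step of a pretrained model.
    """
    # 1) Upsample the base-resolution result: (frames, channels, H, W).
    x = F.interpolate(low_res, scale_factor=scale, mode="bilinear")
    # 2) Re-noise to an intermediate timestep instead of pure noise.
    start = int(steps * strength)
    t0 = start / steps
    x = (1 - t0) * x + t0 * torch.randn_like(x)
    # 3) Denoise only the remaining steps at the higher resolution.
    for step in reversed(range(start)):
        x = denoise(x, step / steps)
    return x

# Smoke test with a dummy denoiser that just shrinks the input slightly.
video = torch.randn(8, 3, 64, 64)                      # 8 frames, base resolution
out = upscale_and_refine(lambda x, t: 0.99 * x, video)
print(out.shape)                                       # torch.Size([8, 3, 128, 128])
```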
@cHHillee
Horace He
2 months
Suno 4.5 is quite impressive. Previously AI music was only ever interesting for the novelty. Now, I wouldn't blink if I heard one of these songs on a playlist. First generation I tried: Prompt: "Pop song about optimizing CUDA kernels for LLM training" https://t.co/p2ehQlpacr
8 replies · 9 reposts · 213 likes
@yash2kant
Yash Kant
3 months
Link (requires registration):
s2025.conference-schedule.org
0 replies · 0 reposts · 1 like
@yash2kant
Yash Kant
3 months
My favorite #SIGGRAPH2025 session today: "Eigenanalysis in Computer Graphics" by Adam Bargteil & Marc Olano!
1 reply · 0 reposts · 3 likes
@yash2kant
Yash Kant
3 months
I am coming to #SIGGRAPH2025!! If you're around, let's meet! ☕️
1 reply · 0 reposts · 6 likes
@Alibaba_Qwen
Qwen
3 months
🚀 Meet Qwen-Image: a 20B MMDiT model for next-gen text-to-image generation. Especially strong at creating stunning graphic posters with native text. Now open-source. 🔍 Key Highlights: 🔹 SOTA text rendering: rivals GPT-4o in English, best-in-class for Chinese 🔹 In-pixel
188 replies · 673 reposts · 4K likes
@LumaLabsAI
Luma AI
3 months
Introducing Modify with Instructions in Dream Machine. Use natural language to direct changes across VFX, advertising, film, and design workflows. Native object removal, swapping, virtual sets, character refinements, and restyle will roll out soon to all subscribers.
80 replies · 161 reposts · 3K likes
@Alibaba_Wan
Wan
3 months
🚀 Introducing Wan2.2: The World's First Open-Source MoE-Architecture Video Generation Model with Cinematic Control! 🔥 Key Innovations: • World's First Open-Source MoE Video Model: Our Mixture-of-Experts architecture scales model capacity without increasing computational
84 replies · 311 reposts · 2K likes
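To make the "more capacity, same compute" claim concrete, here is a generic top-k routed mixture-of-experts layer in PyTorch. It is purely illustrative and not Wan2.2's architecture; the layer sizes, the linear router, and the top-k renormalization are all assumptions. Each token activates only `top_k` of `num_experts` expert MLPs, so parameter count grows with the expert count while per-token FLOPs stay roughly constant.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Generic top-k mixture-of-experts layer (not Wan2.2's actual code)."""

    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.router = nn.Linear(dim, num_experts)  # learned per-token routing
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, dim)
        gates = self.router(x).softmax(dim=-1)     # (tokens, num_experts)
        weights, idx = gates.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize top-k
        out = torch.zeros_like(x)
        # Each token is processed by only its top_k experts, so compute per
        # token is fixed even as num_experts (capacity) grows.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e           # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(TopKMoE()(x).shape)  # torch.Size([10, 64])
```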
@cloneofsimo
Simo Ryu
11 months
Very interesting: standard attention causes vanishing gradients because most attention probabilities become very small after some training. LASER tackles this by pushing the attention operation into exponential space, i.e., exp_output = sm(QK^T) exp(V). They don't seem to exaggerate on the performance
6 replies · 45 reposts · 296 likes
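Since the tweet gives the formula, here is a rough PyTorch sketch of the idea as stated: run softmax attention against exp(V) and map the result back with a log. The max-shift stabilization, the epsilon, and the tensor shapes are my assumptions; the actual LASER implementation (fused kernels, exact stabilization) may differ.

```python
import torch
import torch.nn.functional as F

def laser_attention(q, k, v, eps=1e-12):
    """Exponential-space attention per the tweet: log(softmax(QK^T) @ exp(V)).

    In standard attention the value mix is attn @ V, and when most attention
    probabilities are tiny the gradients flowing through them shrink too
    (the vanishing-gradient issue the tweet describes). Here the convex
    combination is taken over exp(V) and mapped back with log.
    Shapes: q, k, v are (batch, heads, seq, dim).
    """
    d = q.size(-1)
    attn = F.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)
    # Shift by the per-feature max over the sequence so exp() cannot overflow,
    # then add the shift back after the log (log-sum-exp trick).
    m = v.max(dim=-2, keepdim=True).values          # (batch, heads, 1, dim)
    exp_out = attn @ torch.exp(v - m)               # convex combo in exp space
    return m + torch.log(exp_out + eps)

# Smoke test with random tensors.
q, k, v = (torch.randn(1, 2, 8, 16) for _ in range(3))
print(laser_attention(q, k, v).shape)               # torch.Size([1, 2, 8, 16])
```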
@deviparikh
Devi Parikh
4 months
Some samples of (positive) feedback from Scouts users this past week! We're continuing to let people in every day. Join the waitlist at yutori.com.
0 replies · 6 reposts · 21 likes