
Yuwei Fang
@studyfang_
Followers
160
Following
40
Media
1
Statuses
16
Senior Research Scientist @Snap Inc.
Bellevue, WA
Joined May 2016
RT @Dazitu_616: MinT: Temporally-Controlled Multi-Event Video Generation. TL;DR: We identify a fundamental failu…
0
51
0
Thanks @_akhaliq for sharing our work! Excited to share our latest work VIMI for grounded video generation! This is a great collaboration with @WilliMenapace @siarohin9013 @tsaishien_chen @kcjacksonwang @isskoro @gneubig @SergeyTulyakov! Project page:
Snap presents VIMI: Grounding Video Generation through Multi-modal Instruction. Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a…
1
5
22
We are also excited to share our new work on minute-long video editing, VIA. Excited about video generation? Come have a chat with us.
Stop by our @CVPR posters to meet the team! We present 7 posters today, and 2 of the papers are highlights. Video generation, 3D scene generation, 4D generation, improving the quality of synthesized images, and more!
0
1
2
Thanks @_akhaliq for sharing our work! Excited to present LoCoMo for comprehensively evaluating conversational memory with our curated very long-term conversation datasets. Full thread with all dataset/evaluation framework/methods/analysis details at:
Snap presents Evaluating Very Long-Term Conversational Memory of LLM Agents. Existing works on long-term open-domain dialogues focus on evaluating model responses within contexts spanning no more than five chat sessions. Despite advancements in long-context large language models…
0
5
22
RT @adyasha10: Can LLMs keep track of very long conversations? We evaluate 'conversational memory' of LLMs via 3 tasks on our dataset of m…
0
58
0
Plus, we're still looking for summer research interns in 2024. Please send your resume to yfang3@snapchat.com!
0
0
1
Thanks @_akhaliq for sharing our work! Excited to share our latest creation "Snap Video"! Dive into our project page for more fun stories we're creating. Project page: ArXiv:
Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis. Contemporary models for generating images show remarkable quality and versatility. Swayed by these advantages, the research community repurposes them to generate videos. Since video content is highly…
3
6
22
It will be my first time attending #ACL2023 in person! So excited! Anyone interested in having a chat? Let's meet there! Besides, we are hiring research scientists and interns to work on Multimodal and NLP at Snap Research. If you are interested, let's talk about it!
4
1
24