Yuwei Fang Profile
Yuwei Fang

@studyfang_

Followers
160
Following
40
Media
1
Statuses
16

Senior Research Scientist @Snap Inc.

Bellevue, WA
Joined May 2016
@studyfang_
Yuwei Fang
7 months
RT @Dazitu_616: 📢MinT: Temporally-Controlled Multi-Event Video Generation📢. TL;DR: We identify a fundamental failu…
0
51
0
@studyfang_
Yuwei Fang
1 year
Thanks @_akhaliq for sharing our work! Excited to share our latest work VIMI for grounded video generation! This is a great collaboration with @WilliMenapace @siarohin9013 @tsaishien_chen @kcjacksonwang @isskoro @gneubig @SergeyTulyakov! Project page:
@_akhaliq
AK
1 year
Snap presents VIMI. Grounding Video Generation through Multi-modal Instruction. Existing text-to-video diffusion models rely solely on text-only encoders for their pretraining. This limitation stems from the absence of large-scale multimodal prompt video datasets, resulting in a
1
5
22
@studyfang_
Yuwei Fang
1 year
We are also excited to share our new work on minute-long video editing VIA: Excited about video generation? Come to have a chat with us.
@SergeyTulyakov
Sergey Tulyakov
1 year
Stop by our @CVPR posters to meet the team! We present 7 posters today, 2 of which are highlights. Video generation, 3D scene generation, 4D generation, improving the quality of synthesized images, and more!
0
1
2
@studyfang_
Yuwei Fang
1 year
Enjoying my boba in a night market 😄
@kcjacksonwang
Jackson (Kuan-Chieh) Wang @cvpr
1 year
💫 Happy to share MoA: a *Mixture-of-Attention* architecture for personalization of generative models! Imagine YOU🫵🏻 drinking boba🧋 in a night market, or scuba diving🤿, and more! 📕: 🔗: 🎥:
0
0
5
@studyfang_
Yuwei Fang
1 year
Thanks @_akhaliq for sharing our work! Excited to present LoCoMo for comprehensively evaluating conversational memory with our curated very long-term conversation datasets. Full thread with all dataset/evaluation framework/methods/analysis details at:
@_akhaliq
AK
1 year
Snap presents Evaluating Very Long-Term Conversational Memory of LLM Agents. Existing works on long-term open-domain dialogues focus on evaluating model responses within contexts spanning no more than five chat sessions. Despite advancements in long-context large language models
0
5
22
@studyfang_
Yuwei Fang
1 year
RT @adyasha10: Can LLMs keep track of very long conversations? We evaluate 'conversational memory' of LLMs via 3 tasks on our dataset of m…
0
58
0
@studyfang_
Yuwei Fang
1 year
Plus, we're still looking for summer research interns in 2024. Please send your resume to yfang3@snapchat.com!
0
0
1
@studyfang_
Yuwei Fang
1 year
Thanks @_akhaliq for sharing our work! Excited to share our latest creation ‘Snap Video’! Dive into our project page for more fun stories we’re creating. Project page: ArXiv:
@_akhaliq
AK
1 year
Snap Video. Scaled Spatiotemporal Transformers for Text-to-Video Synthesis. Contemporary models for generating images show remarkable quality and versatility. Swayed by these advantages, the research community repurposes them to generate videos. Since video content is highly
3
6
22
@studyfang_
Yuwei Fang
2 years
We are hiring research interns at Snap for 2024! Our research topics range from multi-modal LLMs and efficient DL to image/video/3D generation, personalization, and editing. Please feel free to reach us directly or submit your applications here:
0
0
6
@studyfang_
Yuwei Fang
2 years
It will be my first time attending #ACL2023 in person! So excited! Anyone interested in having a chat? Let’s meet there! Besides, we are hiring research scientists and interns to work on multimodal and NLP research at Snap Research. If you are interested, let’s talk!
4
1
24