Yanlai Yang Profile
Yanlai Yang

@YanlaiYang

Followers 109 · Following 47 · Media 2 · Statuses 13

PhD student @nyuniversity @agentic_ai_lab

New York, NY
Joined August 2021
@YanlaiYang
Yanlai Yang
3 months
Excited to present my work at CoLLAs 2025 @CoLLAs_Conf! In our paper https://t.co/mm8cmxtvhO, we tackle the challenge of self-supervised learning from scratch with continuous, unlabeled egocentric video streams, where we propose to use temporal segmentation and a two-tier memory.
1
5
31
@YanlaiYang
Yanlai Yang
11 months
Just finished my first in-person NeurIPS journey. It’s great to meet many friends, old ones and new ones. Happy to see that my work is well-received in the poster session!
1
1
69
@YanlaiYang
Yanlai Yang
11 months
I’ll be presenting the poster of this work at #NeurIPS2024 tomorrow from 11-2, at West 5609. Welcome everyone to check it out and happy to chat!
@mengyer
Mengye Ren
2 years
πŸ” New LLM Research πŸ” Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged! In our recent paper led by @YanlaiYang, we discovered
0
2
11
@yingwww_
Ying Wang
2 years
A gloomy day in New York couldn't dampen the fun with new friends and new research at NYC CV Day πŸ₯³ Excited to share our updated LifelongMemory framework that leverages LLMs for long-form video understanding, which achieves SOTA on EgoSchema! https://t.co/D75IWYlK4R
1
3
22
@mengyer
Mengye Ren
2 years
Introducing LifelongMemory, an LLM-based personalized AI for egocentric video natural language query (NLQ). This amazing work is led by Ying Wang @yingwww_
1
8
59
@svlevine
Sergey Levine
3 years
How should we pretrain for robotic RL? Turns out the same offline RL methods that learn the skills serve as excellent pretraining. Our latest experiments show that offline RL learns better representations w/ real robots: https://t.co/vUFWTcl5xH https://t.co/9Q9oUdXefT Thread>
1
16
120
@svlevine
Sergey Levine
4 years
Reusable datasets, such as ImageNet, are a driving force in ML. But how can we reuse data in robotics? In his new blog post, Frederik Ebert talks about "bridge data": multi-domain and multi-task datasets that boost generalization of new tasks: https://t.co/JbIbC9X1I1 A thread:
Link card: The BAIR Blog (bair.berkeley.edu)
1
27
115