agentic learning ai lab

@agentic_ai_lab

Followers: 558
Following: 19
Media: 4
Statuses: 18

AI research lab at New York University, led by Mengye Ren (@mengyer). Follow us on @agentic-ai-lab.bsky.social.

Joined November 2024
@YanlaiYang
Yanlai Yang
3 months
Excited to present my work at CoLLAs 2025 @CoLLAs_Conf! In our paper https://t.co/mm8cmxtvhO, we tackle the challenge of self-supervised learning from scratch on continuous, unlabeled egocentric video streams, proposing temporal segmentation and a two-tier memory.
1
5
31
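For readers curious what a "two-tier memory" might look like in code, here is a minimal sketch under assumptions of ours: a short-term FIFO buffer of recent video segments paired with a long-term reservoir sample of the whole stream. The class name, capacities, and sampling scheme are illustrative, not taken from the paper.

```python
import random
from collections import deque

class TwoTierMemory:
    """Hypothetical two-tier replay memory: recent FIFO plus long-term reservoir."""

    def __init__(self, short_capacity, long_capacity, seed=0):
        self.short = deque(maxlen=short_capacity)  # short-term: most recent segments
        self.long = []                             # long-term: uniform sample of the stream
        self.long_capacity = long_capacity
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, segment):
        """Store a new temporal segment in both tiers."""
        self.short.append(segment)
        self.seen += 1
        # Classic reservoir sampling keeps a uniform subsample of all segments seen.
        if len(self.long) < self.long_capacity:
            self.long.append(segment)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.long_capacity:
                self.long[j] = segment

    def sample(self, batch_size):
        """Draw a replay batch mixing recent and long-term segments."""
        pool = list(self.short) + self.long
        return self.rng.sample(pool, min(batch_size, len(pool)))
```

Sampling from both tiers lets the learner revisit the immediate past (for temporal coherence) and the distant past (against forgetting), which is the general intuition behind such designs.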
@NYUDataScience
NYU Center for Data Science
5 months
CDS Asst. Prof. Mengye Ren (@mengyer), Courant PhD students @alexandernwang and Christopher Hoang, and @ylecun introduce PooDLe: a self-supervised learning method enhancing AI vision in real-world videos by improving small object detection. https://t.co/ey9UbNUFII
nyudatascience.medium.com
PooDLe enhances AI vision in real-world videos by improving small object detection.
1
17
68
@agentic_ai_lab
agentic learning ai lab
7 months
Check out our latest paper on representation learning from naturalistic videos →
@mengyer
Mengye Ren
7 months
How can we leverage naturalistic videos for visual SSL? Naturalistic (i.e., uncurated) videos are abundant and can emulate the egocentric perspective. Our ICLR 2025 paper, PooDLe🐩, proposes a new SSL method to address the challenges of learning from naturalistic videos. 🧵
0
1
3
@CoLLAs_Conf
CoLLAs 2025
9 months
🚀 Why Lifelong Learning Matters 🚀 Modern ML systems struggle in non-stationary environments, while humans adapt seamlessly. How do we bridge this gap? 📖 Read our latest blog on the vision behind #CoLLAs2025 and the future of lifelong learning research: 🔗
1
10
21
@NYUDataScience
NYU Center for Data Science
10 months
New research by CDS MS student Amelia (Hui) Dai, PhD student Ryan Teehan (@rteehas), and Asst. Prof. Mengye Ren (@mengyer) shows that models’ accuracy on current events drops 20% over time—even when given the source articles. Presented at #NeurIPS2024. https://t.co/qAkHtzKLQu
nyudatascience.medium.com
Language models lose accuracy on predicting events over time, even with access to up-to-date information.
0
2
11
@YanlaiYang
Yanlai Yang
11 months
Just finished my first in-person NeurIPS journey. It was great to meet many friends, old and new. Happy to see that my work was well-received in the poster session!
1
1
69
@yingwww_
Ying Wang
11 months
Thrilled to be back at NYU CDS and continuing my research journey! The MSDS program provided incredible research opportunities that shaped my path. If you’re passionate about data science, this is the place to be!
@NYUDataScience
NYU Center for Data Science
11 months
Alumni Spotlight: CDS Master's grad Ying Wang (@yingwww_) ('23) turned her 3 research projects into publications, then returned to CDS as a PhD student studying multimodal learning with Profs. @andrewgwils and @mengyer. "Follow your curiosity!" she tells aspiring data scientists.
1
1
53
@agentic_ai_lab
agentic learning ai lab
1 year
There are still many interesting open questions: Is there a limit to in-context learning? Do we need continual pretraining to keep the model up to date?
1
0
0
@agentic_ai_lab
agentic learning ai lab
1 year
The same holds even when we feed the models the gold article (i.e., the actual news article behind the QA pair).
1
0
1
@agentic_ai_lab
agentic learning ai lab
1 year
Can RAG save us from the decline? Only partially. Retrieving relevant news articles can sometimes help, but it does not stop the downward trend.
1
0
1
@agentic_ai_lab
agentic learning ai lab
1 year
We find a smooth temporal trend: LLMs' performance degrades over time, with a sharp decline beyond their "knowledge cutoff" period.
1
0
1
@agentic_ai_lab
agentic learning ai lab
1 year
Will LLMs ever get outdated? Can LLMs predict the future? Today, we release Daily Oracle, a daily news QA benchmark testing LLMs' temporal generalization and forecasting capabilities. 🧵
1
3
12
@mengyer
Mengye Ren
2 years
Humans and animals learn visual knowledge through continuous streams of experiences. How do we perform unsupervised continual learning (UCL) in the wild? Yipeng's latest paper reveals three essential components for UCL success in real-world scenarios: Plasticity, Stability, and…
@yipengzz
Yipeng Zhang
2 years
What should you care about when continually improving your model’s representations with self-supervised learning? Check out our paper titled *Integrating Present and Past in Unsupervised Continual Learning* to appear at #CLVision2024 and #CoLLAs2024! https://t.co/2YReJALpcH 1/🧵
1
5
27
@mengyer
Mengye Ren
2 years
🔍 New LLM Research 🔍 Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged! In our recent paper led by @YanlaiYang, we discovered…
3
40
218
@mengyer
Mengye Ren
3 years
Wondering how to train deep neural networks without backprop? Check out our ICLR 2023 paper: https://t.co/8MgcRVbRkO Forward gradient computes gradient information from the forward pass. But it is slow and noisy: it computes the directional gradient along a random weight…
arxiv.org
Forward gradient learning computes a noisy directional gradient and is a biologically plausible alternative to backprop for learning deep neural networks. However, the standard forward gradient...
30
211
926
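To make the idea concrete, here is a minimal sketch of the basic forward-gradient estimator, not the paper's full method (which adds local losses and other improvements). A single jax.jvp call evaluates the directional derivative of the loss along a random tangent, and scaling that tangent by the directional derivative gives an unbiased gradient estimate. The toy loss function and all shapes are our own illustration.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Toy least-squares loss standing in for a real network objective.
    return jnp.mean((x @ w - y) ** 2)

def forward_gradient(w, x, y, key):
    # Sample a random tangent direction v (standard normal).
    v = jax.random.normal(key, w.shape)
    # One forward pass with jax.jvp yields the directional derivative <grad, v>.
    _, dir_deriv = jax.jvp(lambda w_: loss(w_, x, y), (w,), (v,))
    # The estimator <grad, v> * v is unbiased for the true gradient
    # (E[v v^T] = I), but its variance grows with dimension,
    # hence "slow and noisy".
    return dir_deriv * v

# Usage: one estimator step on random data.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
x = jax.random.normal(k1, (32, 8))
w = jax.random.normal(k2, (8,))
y = jnp.zeros(32)
g_hat = forward_gradient(w, x, y, k3)
```

Because the update needs only forward passes, it avoids the weight transport of backprop, which is what makes it biologically plausible; the cost is estimator variance that scales with the number of parameters.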
@mengyer
Mengye Ren
2 years
Introducing LifelongMemory, an LLM-based personalized AI for egocentric video natural language queries (NLQ). This amazing work is led by Ying Wang @yingwww_.
1
8
59
@mengyer
Mengye Ren
2 years
🚨 New Research Alert! People have found that safety training of LLMs can be easily undone through finetuning. How can we ensure safety in customized LLM finetuning while keeping finetuning useful? Check out our latest work led by Jiachen Zhao! @jcz12856876 🔍 Our study reveals:
1
12
66