agentic learning ai lab
@agentic_ai_lab
Followers: 558 · Following: 19 · Media: 4 · Statuses: 18
AI research lab at New York University led by Mengye Ren @mengyer. Follow us on @agentic-ai-lab.bsky.social.
Joined November 2024
Excited to present my work at CoLLAs 2025 @CoLLAs_Conf! In our paper https://t.co/mm8cmxtvhO, we tackle the challenge of self-supervised learning from scratch on continuous, unlabeled egocentric video streams, proposing temporal segmentation and a two-tier memory.
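The thread does not spell out the mechanism, so the toy Python below is only a rough sketch of the two ideas: a short-term buffer of recent frames paired with a long-term reservoir sampled for replay. The class name, capacities, and the reservoir-sampling choice are illustrative assumptions, not the paper's actual design.

```python
from collections import deque
import random

class TwoTierMemory:
    """Hypothetical two-tier memory: a small short-term buffer of recent
    frames plus a larger long-term reservoir used for replay."""

    def __init__(self, short_capacity=256, long_capacity=4096):
        self.short = deque(maxlen=short_capacity)  # recent, temporally ordered
        self.long = []                             # long-horizon reservoir
        self.long_capacity = long_capacity
        self.seen = 0

    def add(self, frame_embedding):
        self.short.append(frame_embedding)
        # reservoir sampling keeps a uniform subsample of the whole stream
        self.seen += 1
        if len(self.long) < self.long_capacity:
            self.long.append(frame_embedding)
        else:
            j = random.randrange(self.seen)
            if j < self.long_capacity:
                self.long[j] = frame_embedding

    def sample(self, n_recent=32, n_replay=32):
        # mix the freshest frames with a random draw from the long-term store
        recent = list(self.short)[-n_recent:]
        replay = random.sample(self.long, min(n_replay, len(self.long)))
        return recent + replay
```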
CDS Asst. Prof. Mengye Ren (@mengyer), Courant PhD students @alexandernwang and Christopher Hoang, and @ylecun introduce PooDLe: a self-supervised learning method enhancing AI vision in real-world videos by improving small object detection. https://t.co/ey9UbNUFII
nyudatascience.medium.com
PooDLe enhances AI vision in real-world videos by improving small object detection.
Check out our latest paper on representation learning from naturalistic videos →
How can we leverage naturalistic videos for visual SSL? Naturalistic, i.e. uncurated, videos are abundant and can emulate the egocentric perspective. Our paper at ICLR 2025, PooDLe🐩, proposes a new SSL method to address the challenges of learning from naturalistic videos. 🧵
🚀 Why Lifelong Learning Matters 🚀 Modern ML systems struggle in non-stationary environments, while humans adapt seamlessly. How do we bridge this gap? 📖 Read our latest blog on the vision behind #CoLLAs2025 and the future of lifelong learning research: 🔗
New research by CDS MS student Amelia (Hui) Dai, PhD student Ryan Teehan (@rteehas), and Asst. Prof. Mengye Ren (@mengyer) shows that models’ accuracy on current events drops 20% over time—even when given the source articles. Presented at #NeurIPS2024. https://t.co/qAkHtzKLQu
nyudatascience.medium.com
Language models lose accuracy on predicting events over time, even with access to up-to-date information.
Just finished my first in-person NeurIPS. It was great to meet many friends, old and new, and I'm happy to see that my work was well received at the poster session!
Thrilled to be back at NYU CDS and continuing my research journey! The MSDS program provided incredible research opportunities that shaped my path. If you're passionate about data science, this is the place to be!
Alumni Spotlight: CDS Master's grad Ying Wang (@yingwww_) ('23) turned her three research projects into publications, then returned to CDS as a PhD student studying multimodal learning with Profs. @andrewgwils and @mengyer. "Follow your curiosity!" she tells aspiring data scientists.
For more info, check out the links below!
arXiv preprint: https://t.co/hAjO3Q63kn
Project webpage and dataset download: https://t.co/Mzq2T95ye1
Work done by @ameliadai_, @rteehas, @mengyer
agenticlearning.ai
Daily Oracle: a continuous evaluation benchmark using automatically generated QA pairs from daily news to assess how the future prediction capabilities of LLMs evolve over time
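As a rough illustration of what such a rolling evaluation might look like, the sketch below buckets QA pairs by month and tracks exact-match accuracy over time. The field names, scoring rule, and monthly bucketing are assumptions for the example, not the benchmark's actual format.

```python
from collections import defaultdict

def accuracy_by_month(qa_pairs, answer_fn):
    """Score automatically generated daily-news QA pairs and bucket
    accuracy by month, to see how performance drifts over time.

    qa_pairs: iterable of dicts with hypothetical fields
              {"date": "YYYY-MM-DD", "question": str, "answer": str}
    answer_fn: callable mapping a question string to the model's answer
    """
    correct, total = defaultdict(int), defaultdict(int)
    for qa in qa_pairs:
        month = qa["date"][:7]  # e.g. "2024-06"
        pred = answer_fn(qa["question"]).strip().lower()
        correct[month] += int(pred == qa["answer"].strip().lower())
        total[month] += 1
    return {m: correct[m] / total[m] for m in sorted(total)}
```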
There are still many interesting open questions: Is there a limit to in-context learning? Do we need continuous pretraining to keep the model up to date?
The same holds even when we feed the models the gold article (i.e., the actual news article the QA pair was generated from).
Can RAG save us from the decline? Only partially. Retrieving relevant news articles sometimes helps, but it does not stop the downward trend.
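For readers unfamiliar with the setup, here is a minimal, generic RAG sketch, not the paper's pipeline: a toy lexical retriever picks the most relevant articles and folds them into the prompt before querying the model. A real system would use a dense retriever, but the overall flow is the same.

```python
def retrieve(question, articles, k=3):
    """Toy lexical retriever: rank news articles (plain strings) by word
    overlap with the question and return the top-k."""
    q_words = set(question.lower().split())
    scored = sorted(articles,
                    key=lambda a: len(q_words & set(a.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(question, articles, k=3):
    """Build a prompt that prepends the retrieved articles to the question."""
    context = "\n\n".join(retrieve(question, articles, k))
    return ("Use the news articles below to answer the question.\n\n"
            f"{context}\n\nQuestion: {question}\nAnswer:")
```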
We find a smooth temporal trend: LLMs' performance degrades over time, with a sharp decline beyond their "knowledge cutoff" period.
Will LLMs ever get outdated? Can LLMs predict the future? Today, we release Daily Oracle, a daily news QA benchmark testing LLMs' temporal generalization and forecasting capability. 🧵
Humans and animals learn visual knowledge through continuous streams of experiences. How do we perform unsupervised continual learning (UCL) in the wild? Yipeng's latest paper reveals three essential components for UCL success in real-world scenarios: Plasticity, Stability, and …
What should you care about when continually improving your model’s representations with self-supervised learning? Check out our paper titled *Integrating Present and Past in Unsupervised Continual Learning* to appear at #CLVision2024 and #CoLLAs2024! https://t.co/2YReJALpcH 1/🧵
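As a generic illustration of the plasticity/stability tension in continual SSL, and not the paper's actual objective, the sketch below mixes an SSL loss on the current stream with the same loss on a replayed batch. `model`, `ssl_loss`, the batches, and the replay weight are all placeholders.

```python
import torch

def continual_ssl_step(model, ssl_loss, current_batch, replay_batch,
                       optimizer, replay_weight=1.0):
    """Hypothetical continual SSL update: the current-stream term encourages
    plasticity (fitting new data), while the replayed-batch term encourages
    stability of features learned on earlier data."""
    optimizer.zero_grad()
    loss = ssl_loss(model, current_batch)                         # plasticity
    loss = loss + replay_weight * ssl_loss(model, replay_batch)   # stability
    loss.backward()
    optimizer.step()
    return loss.item()
```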
🔍 New LLM Research 🔍 Conventional wisdom says that deep neural networks suffer from catastrophic forgetting as we train them on a sequence of data points with distribution shifts. But conventions are meant to be challenged! In our recent paper led by @YanlaiYang, we discovered …
Wondering how to train deep neural networks without backprop? Check out our ICLR 2023 paper: https://t.co/8MgcRVbRkO Forward gradient computes gradient information from the forward pass. But it is slow and noisy: it computes the directional gradient along a random weight …
arxiv.org
Forward gradient learning computes a noisy directional gradient and is a biologically plausible alternative to backprop for learning deep neural networks. However, the standard forward gradient...
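The tweet is cut off, but the basic weight-perturbed forward-gradient estimator it describes fits in a few lines of JAX. This is only the vanilla estimator, a sketch for intuition, not the paper's proposed improvements; the function names and the toy quadratic loss are assumptions.

```python
import jax
import jax.numpy as jnp

def forward_gradient(loss_fn, w, key):
    """Forward-gradient estimate: sample a random direction v, compute the
    directional derivative (grad(loss) . v) with a single forward-mode pass
    (jax.jvp), and return (grad(loss) . v) * v, an unbiased but noisy
    gradient estimate that never runs backprop."""
    v = jax.random.normal(key, w.shape)
    _, dloss_dv = jax.jvp(loss_fn, (w,), (v,))  # scalar directional derivative
    return dloss_dv * v

# toy usage: one estimate of the gradient of a quadratic loss
w = jnp.array([1.0, -2.0, 0.5])
loss = lambda w: jnp.sum(w ** 2)
g_hat = forward_gradient(loss, w, jax.random.PRNGKey(0))  # approx. 2*w in expectation
```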
🚨 New Research Alert! People have found that safety training of LLMs can be easily undone through finetuning. How can we ensure safety in customized LLM finetuning while keeping finetuning useful? Check out our latest work led by Jiachen Zhao! @jcz12856876 🔍 Our study reveals: …