Michael Elabd ✈️ NeurIPS
@MichaelElabd
Followers
607
Following
59
Media
6
Statuses
53
Foundational Research @ DeepMind
San Francisco, CA
Joined July 2020
In 2026, continual learning will shift from a research curiosity to a tractable problem. The hard parts we will need to address: Defining the environment: user interactions aren't clean MDPs. They're partially observable, scattered across sessions, noisy, and sparse. Credit
8
17
139
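Not from the thread itself, but a minimal sketch of what "defining the environment" could look like in practice: user interactions treated as partially observed, session-scattered events with sparse feedback. The types and field names (InteractionEvent, SessionEnv) are hypothetical illustrations, not any real codebase.

```python
# Hypothetical sketch: user interactions as a partially observable, sparse-feedback stream.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InteractionEvent:
    session_id: str
    user_msg: str                     # observation: what the user typed, not their intent
    model_msg: str                    # the action the model took
    feedback: Optional[float] = None  # sparse: most turns carry no signal at all
    timestamp: float = 0.0

@dataclass
class SessionEnv:
    """Groups events scattered across sessions back into per-session trajectories."""
    events: list[InteractionEvent] = field(default_factory=list)

    def trajectories(self) -> dict[str, list[InteractionEvent]]:
        by_session: dict[str, list[InteractionEvent]] = {}
        for e in self.events:
            by_session.setdefault(e.session_id, []).append(e)
        # The true state is never fully observed; we only ever see these noisy slices,
        # and the reward (feedback) arrives late and rarely.
        return {k: sorted(v, key=lambda ev: ev.timestamp) for k, v in by_session.items()}
```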
Cursor is down and I am suddenly a 0.1x engineer
0
0
6
New papers this week 🔥 DM me some papers you think I should read next! Multi-Agent RL for LLM collaboration: Really excited to see more multi-agent systems for bootstrapping reasoning during either training or inference time. Multi-Agent Group Relative Policy Optimization
2
2
36
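For readers unfamiliar with the GRPO family mentioned above, here is a minimal single-agent sketch of the group-relative advantage it builds on: each sampled completion is scored against its own group's mean and standard deviation instead of a learned value baseline. How the multi-agent variant forms its groups is not shown, and normalization details vary across papers.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO-style advantages: normalize each completion's reward by the
    statistics of its own sampled group. `rewards` is (num_prompts, group_size)."""
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled completions each.
rewards = np.array([[1.0, 0.0, 0.0, 1.0],
                    [0.2, 0.9, 0.4, 0.5]])
print(group_relative_advantages(rewards))
```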
Continual learning research that I enjoyed in #neurips Continual learning architectures: Saw a lot of interesting work on adapting the architecture of LLMs to allow for continuous adaptation. Nested Learning does this through an optimizer framework where each layer has its own
10
46
314
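The Nested Learning description above is cut off, so this is only a loose toy rendering of the general idea of components updating on their own schedules: each layer gets its own optimizer and its own update interval. It is my own illustration, not the paper's formulation.

```python
import torch
import torch.nn as nn

# Toy illustration (not the paper's method): per-layer optimizers with different
# update intervals, so fast layers adapt every step while slow layers accumulate
# gradients over longer horizons before updating.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
layer_opts = [
    (model[0], torch.optim.SGD(model[0].parameters(), lr=1e-2), 1),   # update every step
    (model[2], torch.optim.SGD(model[2].parameters(), lr=1e-3), 10),  # update every 10 steps
]

for step in range(100):
    x, y = torch.randn(8, 16), torch.randn(8, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    for _, opt, every in layer_opts:
        if (step + 1) % every == 0:
            opt.step()       # apply the gradient accumulated since the last update
            opt.zero_grad()  # reset only this layer's buffer
```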
My hot take is that continual learning is as much a product problem as it is an algorithm problem. You can't learn from what you can't see. And measuring "did this actually help?" requires long-horizon signals that most model calls never get. The hard parts: - Thumbs up ≠ the
Google is taking Continual Learning very seriously, having recently published papers about it (Titans + MIRAS, Nested Learning). This makes me think it's likely that @GoogleDeepMind will be the first to develop and deploy Continual Learning AI, giving them a major advantage.
0
2
22
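A hedged sketch of the product side of that argument: before any algorithm runs, sparse thumbs signals have to be joined with long-horizon outcomes that arrive much later, and most turns never get one. All names and the reward weighting below are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TurnLog:
    turn_id: str
    thumbs_up: Optional[bool]   # noisy proxy, usually missing

@dataclass
class OutcomeLog:
    turn_id: str
    task_completed: bool        # long-horizon signal, often arrives days later

def build_training_signal(turns: list[TurnLog], outcomes: list[OutcomeLog]) -> dict[str, float]:
    """Join immediate feedback with delayed outcomes; only joined turns are learnable."""
    outcome_by_turn = {o.turn_id: o.task_completed for o in outcomes}
    signal = {}
    for t in turns:
        if t.turn_id not in outcome_by_turn:
            continue  # most turns never get a long-horizon label
        # Hypothetical blend: thumbs alone overweights politeness; completion alone is sparse.
        signal[t.turn_id] = 1.0 * outcome_by_turn[t.turn_id] + 0.2 * bool(t.thumbs_up)
    return signal
```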
Here are some research directions I enjoyed in #neurips (will compile some more soon!) Bootstrapping long‑horizon reasoning: Recent work [1, 2] shows we can train LLMs on short-step problems and curriculum them into much longer chains. By composing simple problems into
5
31
267
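As a toy rendering of "composing simple problems into longer chains" (my own illustration, not the cited papers' procedure): chain single-step arithmetic problems into progressively longer multi-step ones, lengthening the chains stage by stage.

```python
import random

def single_step():
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"{a}+{b}", a + b

def compose(num_steps: int):
    """Build a longer-horizon problem by feeding each partial result into the next step."""
    expr, total = single_step()
    for _ in range(num_steps - 1):
        b = random.randint(1, 9)
        expr, total = f"({expr})+{b}", total + b
    return expr, total

# Curriculum: start with short chains, lengthen as each stage is mastered.
for stage, steps in enumerate([1, 2, 4, 8], start=1):
    print(f"stage {stage} ({steps} steps):", [compose(steps) for _ in range(3)])
```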
I made an unofficial NeurIPS 2025 hiring list: @rronak_, @QuantumArjun, @michaelelabd, stealth (I'm a small investor): RL post-training from live product usage. Research Engineers. @jonsidd, Turing: data for frontier models. Research Engineers, SWEs. @schwarzjn_, ICL & Thomson
19
41
433
RSA has survived 50 years and a 10^8x speedup because of correct assumptions. It's interesting to think about what the correct assumptions are for the AI alignment problem. Love the cryptographic robustness framing by @adamfungi
#NeurIPS #NeurIPS2025
0
0
7
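To make the "correct assumptions" point concrete: textbook RSA with toy primes, where everything rests on the assumption that factoring n = p*q is hard. The 10^8x hardware speedup never broke that assumption; it only forced larger keys (real deployments use moduli around 2048 bits).

```python
# Textbook RSA with tiny primes, for illustration only.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: 2753

m = 65                     # message
c = pow(m, e, n)           # encrypt: 2790
assert pow(c, d, n) == m   # decrypt recovers the message

# An attacker who could factor n would recover p and q, hence phi and d.
```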
will be at neurips this year! pushing the boundaries in RL post-training? dms are open, let's talk
4
0
13
amazing deep dive into Gemini Robotics 1.5, thanks @xiao_ted
📢The next milestone for intelligent general-purpose robots has arrived! Announcing Gemini Robotics 1.5, our flagship system which brings breakthroughs from frontier models to the physical world with two new SOTA generalists: the GR 1.5 VLA and GR 1.5 embodied reasoning model 🧵
0
0
2
Excited to announce the launch of Gemini Robotics 1.5, a step closer to robots that can think and act! Check out the blog post: https://t.co/5Jg4KmFIKB
0
0
2
Check out our blog post and sign up for our trusted tester program to get early access!! https://t.co/JJQQcxWeQi
deepmind.google
We’re introducing an efficient, on-device robotics model with general-purpose dexterity and fast task adaptation.
0
0
0
Gemini Robotics On-Device is now live 🎉 Here is the TL;DR:
- Runs on a single 4090
- Best-in-class generalization
- Can adapt to new tasks with <100 examples
Excited to see what people build with this!!
1
0
1
1
1
7
1/ Gemini 2.5 is here, and it’s our most intelligent AI model ever. Our first 2.5 model, Gemini 2.5 Pro Experimental, is a state-of-the-art thinking model, leading in a wide range of benchmarks – with impressive improvements in enhanced reasoning and coding and now #1 on
304
952
7K
this always blows my mind 🤯🤯 never trained on basketball before, yet this robot just nailed a perfect slam dunk! 🤯 #Gemini's world understanding is truly next level.
0
1
2
we made #Gemini work on robots 👀 excited to see what people build with this!
Our ultimate goal is to develop AI that could work for any robot - no matter its shape or size. This includes bi-arm platforms like ALOHA 2 and Franka 🦾 but also more complex embodiments such as the Apollo developed by @Apptronik.
0
0
3
We’re kicking off our Gemini 2.0 era with Gemini 2.0 Flash, which outperforms 1.5 Pro on key benchmarks at 2X speed (see chart below). I’m especially excited to see the fast progress on coding, with more to come. Developers can try an experimental version in AI
269
747
7K
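For developers wanting to try it, a minimal call through the google-generativeai Python SDK looks roughly like this; the experimental model string below is an assumption and may differ from what AI Studio currently lists.

```python
# Minimal sketch (pip install google-generativeai). Model name is an assumption;
# check AI Studio for the current experimental identifier.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Write a one-line docstring for a binary search function.")
print(response.text)
```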