
Andrei Lupu
@_andreilupu
714 Followers · 3K Following · 41 Media · 268 Statuses
DPhil student @FLAIR_Ox and @AIatMeta. Previously @Mila_Quebec and @rllabmcgill. Theory of Mind / Coordination / Rainbow Teaming 🌈. Opinions my own.
Joined December 2016
Theory of Mind (ToM) is crucial for next gen LLM Agents, yet current benchmarks suffer from multiple shortcomings. Enter 💽 Decrypto, an interactive benchmark for multi-agent reasoning and ToM in LLMs! Work done with @TimonWilli & @j_foerst at @AIatMeta & @FLAIR_Ox. 🧵👇
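Since Decrypto is pitched above as an interactive benchmark you can play with an LLM, here is a minimal, hypothetical sketch of a single Decrypto-style round driven by a language model. The keyword list, the prompts, and the `query_llm` helper are illustrative placeholders (assumptions, not the benchmark's actual API); only the round structure follows the underlying board game: a code of three distinct digits from 1-4, one clue per digit.

```python
# Minimal sketch of one Decrypto-style round (hypothetical; not the official benchmark API).
import random

KEYWORDS = ["whale", "piano", "desert", "rocket"]  # four shared secret keywords, slots 1-4


def query_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call; swap in your own LLM client."""
    print(prompt)
    return "(model output goes here)"


def play_round():
    # Draw the secret code: three distinct slots out of 1-4, order matters.
    code = random.sample(range(1, 5), 3)

    # Encoder turn: one clue per digit, hinting at the keyword in that slot
    # without giving it away to an eavesdropper.
    clues = query_llm(
        f"You share the secret keywords {KEYWORDS} with a teammate.\n"
        f"The secret code is {code} (each digit indexes a keyword, 1-4).\n"
        "Give exactly three short clues, one per digit, in order."
    )

    # Decoder turn: recover the code from the clues alone.
    guess = query_llm(
        f"You share the secret keywords {KEYWORDS} with a teammate.\n"
        f"They gave the clues: {clues}\n"
        "Which three distinct digits (1-4) do the clues point to? Reply with digits only."
    )
    return code, clues, guess


if __name__ == "__main__":
    secret, clues, guess = play_round()
    print("code:", secret, "| guess:", guess)
```

In the full game an opposing team also tries to intercept the code from the clue history, which is what makes it a probe of theory of mind rather than just word association.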
Congratulations! Well deserved! 🎉
🥳 It’s an honour to have been awarded the Outstanding Paper for Scientific Understanding in RL at RLC for our work, ‘How Should We Meta-Learn RL Algorithms?’. Thank you to the organisers @RL_Conference for putting on a great conference, and congratulations to the other winners!
We discovered alien intelligence in sand, and can now play its dreams in real time with a mouse and keyboard. Congrats to the team! Now, can it run Doom? 🤔
Genie 3 feels like a watershed moment for world models 🌐: we can now generate multi-minute, real-time interactive simulations of any imaginable world. This could be the key missing piece for embodied AGI… and it can also create beautiful beaches with my dog, playable in real time.
Here is The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind, which we announced a little over a month ago! 🔗 📜 💽
Games isolate key aspects of intelligence and make for fantastic evergreen benchmarks. Thrilled to see them come back in style! And if you're excited about LLM Theory of Mind, how about a game of Decrypto with your favourite LLM? 👀👇
We have a long history of using games to measure progress in AI. 🎮 That’s why we’re helping unveil the @Kaggle Game Arena: an open-source platform where models go head-to-head in complex games to help us gauge their capabilities. 🧵
RT @AlexDGoldie: 1/ 🕵️ Algorithm discovery could lead to huge AI breakthroughs! But what is the best way to learn or discover new algorithm…
RT @uljadb99: Unlock real diversity in your LLM! 🚀 LLM outputs can be boring and repetitive. Today, we release Intent Factored Generation…
RT @MartinJosifoski: Scaling AI research agents is key to tackling some of the toughest challenges in the field. But what's required to sca…
RT @yorambac: AI Research Agents are becoming proficient at machine learning tasks, but how can we help them search the space of candidate…
Biology is computable, and evolution is exploitable! 🧬 @SebastianTower6 and @OlaKalisz8 have taken opponent shaping out of the petri dish of MARL environments and applied it to the super crucial problem of antibody design. 🧫 Check out their work below!
Antiviral therapy design is myopic 🦠🙈, optimised only for the current strain. That's why you need a different flu vaccine every year! Our #ICML2025 paper ADIOS proposes "shaper therapies" that steer viral evolution in our favour & remain effective. Work done @FLAIR_Ox. 🧵👇
RT @MinqiJiang: Recently, there has been a lot of talk of LLM agents automating ML research itself. If Llama 5 can create Llama 6, then sur…
Most AI labs don't try to build AI for normal people. They try to build the AI that will build AI for normal people (and for everything else). Which isn't to say that memory isn't important.
seems big AI labs are hyperfixating on reasoning when they should focus on *memory* instead. normal people won't use models that can think for hours to solve hard math problems. people want models that learn over time, remember details, adapt and interact like a person would.
RL truly is here to stay.
Does RL truly expand a model’s reasoning 🧠 capabilities? Contrary to recent claims, the answer is yes, if you push RL training long enough! Introducing ProRL 😎, a novel training recipe that scales RL to >2k steps, empowering the world’s leading 1.5B reasoning model 💥 and offering…
RT @OlaKalisz8: Very cool LLM benchmark based on the game Decrypto. It shows some surprising shortcomings of the current LLM models. But…
RT @_samvelyan: Much-needed multi-agent benchmark for LLMs 👥. Theory of Mind is key as LLMs act in agentic, interactive settings — yet rema…
RT @j_foerst: Multi-agent interactions are the new frontier of AI and the ability to make sense of others (i.e. "theory of mind") is at the…