
Max Bartolo
@max_nlp
Followers: 3K · Following: 3K · Media: 65 · Statuses: 813
Researcher @GoogleDeepMind & co-chair @DynabenchAI @MLCommons. Previously @Cohere, @MetaAI/FAIR & @BloomsburyAI.
Joined November 2016
Super proud of @LisaAlazraki and team (@maximilianmozes @jaa_campos @yichern_tan) on the first of her two @cohere internship papers getting accepted to #EMNLP2025! 👏
RT @robertarail: I’m building a new team at @GoogleDeepMind to work on Open-Ended Discovery! We’re looking for strong Research Scientists…
Some of the real-world challenges of building for representation.
This is one of my favorite sections in the Aya dataset paper. It is towards the end of the paper, so it probably isn't read often. It speaks to how the final breakthrough was completely intertwined with the geo-reality experienced by independent researchers around the world.
RT @NeurIPSConf: NeurIPS is pleased to officially endorse EurIPS, an independently-organized meeting taking place in Copenhagen this year,….
RT @tokshop2025: 🎤 Meet our expert panelists! Join Albert Gu, Alisa Liu, Kris Cao, Sander Land, and Yuval Pinter as they discuss the Future….
Really enjoyed discussing the state of AI benchmarking alongside Prof Mark Bishop, @IAmTimNguyen, Enzo Blindow & @ecsquendor at @MLStreetTalk's first in-person event in London yesterday. Looking forward to many more!
RT @LauraRuis: LLMs can be programmed by backprop 🔎. In our new preprint, we show they can act as fuzzy program interpreters and databases…
RT @maximilianmozes: We’re looking for a Research Engineer / Scientist with a focus on Data Analysis and Evaluation to join the post-traini….
jobs.ashbyhq.com: Play a pivotal role in ensuring the quality, reliability, and performance of our large language models (LLMs).
Looking forward to sharing some of our recent research contributions at @MLStreetTalk's first London AI meetup 🤩.
We are running our first physical event in London on 14th July! We have Tim Nguyen @IAmTimNguyen from DeepMind and Max Bartolo @max_nlp from Cohere and Enzo Blindow (VP of Data, Research & Analytics) at @Prolific joining us. Not many seats for the first one.
RT @MoritzLaurer: Kudos to @cohere for releasing 6 proper research papers in May alone, while publications of other western labs increasing….
Can LLMs be incentivised to generate token sequences (in this case preambles) that condition downstream models to improve performance when judged by reward models? Yes! ✅
Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀. We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
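For intuition, here is a minimal sketch (not the paper's code) of the RLRE idea described above: a REINFORCE-style loop learns which preamble steers a frozen downstream model toward responses that a frozen reward model scores highly. The preamble candidates, the downstream generator, and the reward model below are toy stand-ins assumed purely for illustration.

```python
import math
import random
from typing import List

# Candidate preambles the upstream policy can emit (hypothetical examples).
PREAMBLES: List[str] = [
    "Answer briefly.",
    "Let's think step by step.",
    "Respond with a single word.",
]

def softmax(logits: List[float]) -> List[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def downstream_generate(preamble: str, prompt: str) -> str:
    """Stand-in for a frozen downstream LLM: it simply prepends the preamble."""
    return f"{preamble} {prompt}"

def reward_model(response: str) -> float:
    """Stand-in reward model: here it happens to prefer step-by-step responses."""
    return 1.0 if "step by step" in response.lower() else 0.0

def train_preamble_policy(prompt: str, steps: int = 500, lr: float = 0.5) -> List[float]:
    """REINFORCE over a categorical policy of candidate preambles.

    The downstream model and reward model stay frozen; only the preamble
    policy's logits are updated, mirroring the 'optimise upstream preambles
    via reinforcement learning' idea from the tweet.
    """
    logits = [0.0] * len(PREAMBLES)
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        idx = random.choices(range(len(PREAMBLES)), weights=probs, k=1)[0]
        response = downstream_generate(PREAMBLES[idx], prompt)
        r = reward_model(response)
        advantage = r - baseline
        baseline = 0.9 * baseline + 0.1 * r  # running-mean baseline
        # Gradient of log softmax w.r.t. logits: one-hot(idx) - probs.
        for j in range(len(logits)):
            grad = (1.0 if j == idx else 0.0) - probs[j]
            logits[j] += lr * advantage * grad
    return softmax(logits)

if __name__ == "__main__":
    final_probs = train_preamble_policy("What is 17 * 24?")
    for p, preamble in sorted(zip(final_probs, PREAMBLES), reverse=True):
        print(f"{p:.2f}  {preamble}")
```

In the actual setup the preamble generator would itself be an LLM optimised with a policy-gradient method rather than a categorical policy over a fixed candidate list; this sketch only illustrates the upstream-preamble-optimisation loop.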
Massive congrats team Afri-Aya, really great work! 🤩.
Huge Win Today 🎉🎉 Our team “Afri-Aya” just won this year’s @cohere Aya Expedition. Our work focuses on 1) curating and evaluating a vision dataset, then 2) fine-tuning the Aya vision model for underrepresented languages in Africa. I represented my beloved Sierra Leone with Krio.
RT @Cohere_Labs: Join us to mark the end of Expedition Aya, our six-week global open-build challenge designed to accelerate ML research pro….
RT @Cohere_Labs: Congrats to our Cohere colleagues for their paper “Improving Reward Models with Synthetic Critiques” being presented at NA….
Recently overheard at @iclr_conf: influence functions for LLMs are useless. Poster #208 disagrees 🤔
RT @egrefen: At #ICLR2025? Come and see @LauraRuis present these amazing results on how LLMs exploit data in different ways to learn facts….