Max Bartolo (@max_nlp)
3K Followers · 3K Following · 65 Media · 813 Statuses
Researcher @GoogleDeepMind & co-chair @DynabenchAI @MLCommons. Previously @Cohere, @MetaAI/FAIR & @BloomsburyAI.
Joined November 2016

Max Bartolo (@max_nlp) · 14 days
Super proud of @LisaAlazraki and team (@maximilianmozes @jaa_campos @yichern_tan) on the first of her two @cohere internship papers getting accepted to #EMNLP2025! 👏
Quoting Lisa Alazraki (@LisaAlazraki) · 14 days:
This is accepted to EMNLP Main! Looking forward to presenting it in Suzhou 🎉
0 replies · 0 retweets · 26 likes

Max Bartolo (@max_nlp) · 1 month
🚨 Life update 🚨 After 3 wonderful years, I’ve decided it’s time for me to move on from Cohere. I'm incredibly grateful to have been trusted with building out Cohere's post-training capabilities -- from our first Command Nightly models that topped the HELM leaderboard, to Command…
[4 images]
18 replies · 3 retweets · 188 likes

Max Bartolo (@max_nlp) · 1 month
RT @robertarail: I’m building a new team at @GoogleDeepMind to work on Open-Ended Discovery! We’re looking for strong Research Scientists…
0 replies · 261 retweets · 0 likes

Max Bartolo (@max_nlp) · 2 months
Some of the real-world challenges of building for representation.
Quoting Sara Hooker (@sarahookr) · 2 months:
This is one of my favorite sections in the Aya dataset paper. It is towards the end of the paper, so it probably isn't read often. It speaks to how the end breakthrough was completely intertwined with the geo-reality experienced by independent researchers around the world.
[image]
0 replies · 0 retweets · 8 likes

Max Bartolo (@max_nlp) · 2 months
RT @NeurIPSConf: NeurIPS is pleased to officially endorse EurIPS, an independently-organized meeting taking place in Copenhagen this year,…
0 replies · 114 retweets · 0 likes

Max Bartolo (@max_nlp) · 2 months
RT @tokshop2025: 🎤 Meet our expert panelists! Join Albert Gu, Alisa Liu, Kris Cao, Sander Land, and Yuval Pinter as they discuss the Future…
0 replies · 10 retweets · 0 likes

Max Bartolo (@max_nlp) · 2 months
Really enjoyed discussing the state of AI benchmarking alongside Prof Mark Bishop, @IAmTimNguyen, Enzo Blindow & @ecsquendor at @MLStreetTalk's first in-person event in London yesterday. Looking forward to many more!
[image]
1 reply · 3 retweets · 19 likes

Max Bartolo (@max_nlp) · 2 months
RT @LauraRuis: LLMs can be programmed by backprop 🔎. In our new preprint, we show they can act as fuzzy program interpreters and databases…
0 replies · 55 retweets · 0 likes

Max Bartolo (@max_nlp) · 3 months
RT @maximilianmozes: We’re looking for a Research Engineer / Scientist with a focus on Data Analysis and Evaluation to join the post-traini…
Link: jobs.ashbyhq.com ("Play a pivotal role in ensuring the quality, reliability, and performance of our large language models (LLMs).")
0 replies · 19 retweets · 0 likes

Max Bartolo (@max_nlp) · 3 months
Looking forward to sharing some of our recent research contributions at @MLStreetTalk's first London AI meetup 🤩
Quoting Machine Learning Street Talk (@MLStreetTalk) · 3 months:
We are running our first physical event in London on 14th July! We have Tim Nguyen @IAmTimNguyen from DeepMind and Max Bartolo @max_nlp from Cohere and Enzo Blindow (VP of Data, Research & Analytics) at @Prolific joining us. Not many seats for the first one.
0 replies · 3 retweets · 21 likes

Max Bartolo (@max_nlp) · 3 months
RT @MoritzLaurer: Kudos to @cohere for releasing 6 proper research papers in May alone, while publications of other western labs increasing…
0 replies · 14 retweets · 0 likes

Max Bartolo (@max_nlp) · 3 months
RT @_xjdr: the command-a paper is one of my top 5 papers of the year for sure.
0 replies · 18 retweets · 0 likes

Max Bartolo (@max_nlp) · 3 months
Another side finding was that in some cases, incoherent preambles also led to improved performance. This has exciting implications for other conditioning token sequences such as reasoning traces.
0 replies · 0 retweets · 2 likes

Max Bartolo (@max_nlp) · 3 months
Can LLMs be incentivised to generate token sequences (in this case preambles) that condition downstream models to improve performance when judged by reward models? Yes! ✅
Quoting Lisa Alazraki (@LisaAlazraki) · 3 months:
Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀 We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
[image]
1 reply · 6 retweets · 19 likes

Max Bartolo (@max_nlp) · 4 months
Massive congrats team Afri-Aya, really great work! 🤩
Quoting Jaward Sesay (@JawardSesay_) · 4 months:
Huge Win Today 🎉🎉 Our team “Afri-Aya” just won this year’s @cohere Aya Expedition. Our work focuses on 1) curating and evaluating a vision dataset, then 2) fine-tuning the Aya vision model for underrepresented languages in Africa. I represented my beloved Sierra Leone with Krio…
[image]
1 reply · 1 retweet · 18 likes

Max Bartolo (@max_nlp) · 4 months
RT @Cohere_Labs: Join us to mark the end of Expedition Aya, our six-week global open-build challenge designed to accelerate ML research pro…
0 replies · 1 retweet · 0 likes

Max Bartolo (@max_nlp) · 4 months
RT @Cohere_Labs: Congrats to our Cohere colleagues for their paper “Improving Reward Models with Synthetic Critiques” being presented at NA…
0 replies · 2 retweets · 0 likes

Max Bartolo (@max_nlp) · 4 months
Recently overheard at @iclr_conf: influence functions for LLMs are useless. Poster #208 disagrees 🤔
[image]
1 reply · 3 retweets · 52 likes

Max Bartolo (@max_nlp) · 4 months
RT @egrefen: At #ICLR2025? Come and see @LauraRuis present these amazing results on how LLMs exploit data in different ways to learn facts…
0 replies · 12 retweets · 0 likes

Max Bartolo (@max_nlp) · 4 months
If you want to learn more about how LLMs pick up reasoning abilities from procedural knowledge in pretraining, visit poster #208 in Hall 3 at 3pm today @iclr_conf #ICLR #ICLR25 #ICLR2025.
Quoting Laura Ruis (@LauraRuis) · 4 months:
Presenting this today 3–5:30pm at poster #208, come say hi 🙋‍♀️
0 replies · 5 retweets · 33 likes