
Max Bartolo
@max_nlp
Followers: 3K · Following: 3K · Media: 63 · Statuses: 797
I lead the Command modelling team at @Cohere and co-chair the @DynabenchAI @MLCommons working group. Prev: @DeepMind, @MetaAI / FAIR & @BloomsburyAI.
Joined November 2016
RT @LauraRuis: LLMs can be programmed by backprop 🔎. In our new preprint, we show they can act as fuzzy program interpreters and databases…
RT @maximilianmozes: We’re looking for a Research Engineer / Scientist with a focus on Data Analysis and Evaluation to join the post-traini…
Looking forward to sharing some of our recent research contributions at @MLStreetTalk's first London AI meetup 🤩.
We are running our first physical event in London on 14th July! We have Tim Nguyen @IAmTimNguyen from DeepMind and Max Bartolo @max_nlp from Cohere and Enzo Blindow (VP of Data, Research & Analytics) at @Prolific joining us. Not many seats for the first one.
RT @MoritzLaurer: Kudos to @cohere for releasing 6 proper research papers in May alone, while publications of other western labs increasing…
Can LLMs be incentivised to generate token sequences (in this case preambles) that condition downstream models to improve performance when judged by reward models? Yes! ✅.
Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀. We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
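The pipeline the preprint tweet describes can be sketched with toy stand-ins: an upstream policy picks a preamble, a frozen downstream model is conditioned on it, and a reward model scores the result, with the preamble policy trained by policy gradient. Everything below (`PREAMBLES`, `downstream_model`, `reward_model`, the REINFORCE loop) is a hypothetical illustration under those assumptions, not the actual RLRE implementation.

```python
import math
import random

# Hypothetical discrete preamble space for the upstream policy.
PREAMBLES = [
    "Answer concisely.",
    "Think step by step.",
    "Reply in formal English.",
]

def downstream_model(preamble: str, query: str) -> str:
    # Stand-in for a frozen downstream LLM conditioned on the preamble.
    return f"[{preamble}] response to: {query}"

def reward_model(response: str) -> float:
    # Stand-in reward model: happens to prefer step-by-step-conditioned
    # outputs (purely illustrative).
    return 1.0 if "step by step" in response else 0.0

def train_preamble_policy(queries, steps=500, lr=0.1, seed=0):
    """REINFORCE over a categorical policy that selects a preamble."""
    rng = random.Random(seed)
    logits = [0.0] * len(PREAMBLES)
    for _ in range(steps):
        # Sample a preamble from the softmax policy.
        exps = [math.exp(l) for l in logits]
        z = sum(exps)
        probs = [e / z for e in exps]
        i = rng.choices(range(len(PREAMBLES)), weights=probs)[0]
        # Roll out the pipeline and score it with the reward model.
        q = rng.choice(queries)
        r = reward_model(downstream_model(PREAMBLES[i], q))
        # Policy-gradient step: d log p_i / d logit_j = (j == i) - p_j,
        # scaled by the reward (baseline omitted for brevity).
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return logits

logits = train_preamble_policy(["What is RL?", "Define reward."])
best = PREAMBLES[max(range(len(PREAMBLES)), key=lambda j: logits[j])]
```

Since only the second preamble ever earns reward, the policy's probability mass concentrates on it; with real models the same loop would treat the downstream LLM and reward model as black boxes.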
Massive congrats team Afri-Aya, really great work! 🤩.
Huge Win Today 🎉🎉 Our team “Afri-Aya” just won this year’s @cohere Aya Expedition. Our work focused on 1) curating and evaluating a vision dataset, then 2) fine-tuning the Aya vision model for underrepresented languages in Africa. I represented my beloved Sierra Leone with Krio
RT @Cohere_Labs: Join us to mark the end of Expedition Aya, our six-week global open-build challenge designed to accelerate ML research pro…
RT @Cohere_Labs: Congrats to our Cohere colleagues for their paper “Improving Reward Models with Synthetic Critiques” being presented at NA…
Recently overheard at @iclr_conf: influence functions for LLMs are useless. Poster #208 disagrees 🤔
RT @egrefen: At #ICLR2025? Come and see @LauraRuis present these amazing results on how LLMs exploit data in different ways to learn facts…
RT @221eugene: Attending #ICLR2025 and interested in #LLM, #Alignment, or #SelfImprovement? Then come by and check out our work from @coh…
Really enjoyed giving this talk. Thanks for hosting and for the great questions! @tomhosking you might recognise this slide 😅.
Another great London Machine Learning Meetup earlier. Many thanks to Max Bartolo (@max_nlp) (researcher at @cohere) for the fascinating talk on 'Building Robust Enterprise-Ready Large Language Models'. And thanks to @ManGroup and @ArcticDB for hosting.
RT @arduinfindeis: How exactly was the initial Chatbot Arena version of Llama 4 Maverick different from the public HuggingFace version? 🕵️…
RT @sarahookr: Very proud to introduce Kaleidoscope ✨🌿. 🌍 18 languages (Bengali → Spanish). 📚 14 subjects (Humanities → STEM). 📸 55% requirin…