Ammar Khairi Profile
Ammar Khairi

@ammar__khairi

Followers
24
Following
13
Media
6
Statuses
19

New account. Research Scholar @Cohere_Labs

London
Joined June 2025
@ammar__khairi
Ammar Khairi
7 days
RT @_akhaliq: discuss with author:
0
2
0
@ammar__khairi
Ammar Khairi
7 days
RT @_akhaliq: When Life Gives You Samples. The Benefits of Scaling up Inference Compute for Multilingual LLMs
Tweet media one
0
28
0
@ammar__khairi
Ammar Khairi
7 days
You can find out more about LLMonade ℒ️ here:
@ammar__khairi
Ammar Khairi
7 days
🚀 Want better LLM performance without extra training or special reward models? Happy to share my work with @Cohere_labs: "When Life Gives You Samples: Benefits of Scaling Inference Compute for Multilingual LLMs". 👀 How we squeeze more from less at inference 🍋, details in 🧵
Tweet media one
0
0
2
@ammar__khairi
Ammar Khairi
7 days
Thanks @_akhaliq for putting our work in the spotlight! Such a special feeling to have my first work shared by legends in the field!
@_akhaliq
AK
7 days
When Life Gives You Samples. The Benefits of Scaling up Inference Compute for Multilingual LLMs
Tweet media one
1
1
14
@ammar__khairi
Ammar Khairi
7 days
RT @weiyinko_ml: Wow wasn't expecting this! Thanks so much for the kind message @Cohere_Labs! Big shoutout to @mrdanieldsouza and @sarahook….
0
3
0
@ammar__khairi
Ammar Khairi
7 days
RT @Cohere_Labs: Can we improve the performance of LLMs during inference without the need for extensive sampling OR special reward models?….
0
9
0
@ammar__khairi
Ammar Khairi
7 days
RT @mrdanieldsouza: 🚨 New Recipe just dropped! 🚨 "LLMonade 🍋" ➡️ squeeze max performance from your multilingual LLMs at inference time! 👀🔥…
0
6
0
@ammar__khairi
Ammar Khairi
7 days
💪🏼 Huge thanks to my incredible mentors: Julia Kreutzer, @mrdanieldsouza, @YeS855811, @sarahookr for guiding me and supporting this work ✨. Find our arXiv release here! 📜:
0
5
13
@ammar__khairi
Ammar Khairi
7 days
TL;DR: We propose the 🍋 LLMonade inference-time recipe: smart sampling ➕ carefully designed selection ➡️ a performance boost with fewer samples. We show that thoughtful sampling design paired with generalist LLM judges outperforms traditional methods across diverse multilingual tasks.
1
0
7
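The two-step recipe above (sample a pool, then select with a generalist judge) can be sketched as plain best-of-n. This is a toy stand-in, not the paper's code: `sample_candidates` fakes stochastic decodes and `judge_score` fakes an LLM judge with a length heuristic.

```python
def sample_candidates(prompt, n=5):
    # Stand-in for n stochastic decodes from an LLM.
    return [f"{'very ' * i}detailed answer to: {prompt}" for i in range(n)]

def judge_score(prompt, candidate):
    # Stand-in for a generalist LLM judge; here a crude length heuristic.
    return len(candidate)

def best_of_n(prompt, n=5):
    """Two-step recipe: sample a small pool, then select with a judge."""
    pool = sample_candidates(prompt, n)
    return max(pool, key=lambda c: judge_score(prompt, c))

best = best_of_n("Explain MBR decoding", n=5)
```

In practice the judge would be a second LLM call scoring or ranking each candidate; only the two stand-in functions change, the sample-then-select structure stays the same.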
@ammar__khairi
Ammar Khairi
7 days
But that's not all! When tested on larger models (100B+), our methods show even stronger advantages 🚀. We achieve up to 9% gains vs. a single sample, while the best open-source reward model (RewardBench leader) only reaches 4.5%!
Tweet media one
1
0
8
@ammar__khairi
Ammar Khairi
7 days
We introduce two new selection techniques: CHOPs 🥒 and X-MBR ⚖️, designed to amplify multilingual performance gains. Testing on 8B models (Aya-expanse, Qwen3), our methods achieve up to +12% win-rate vs. greedy decoding on mArena-Hard from just 5 samples!
Tweet media one
1
1
9
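X-MBR reads as a variant of Minimum Bayes Risk selection (CHOPs is not detailed in the thread). A generic MBR sketch: pick the candidate with the highest average utility against the rest of the pool, here using unigram F1 overlap as a cheap stand-in for whatever utility the paper actually uses.

```python
def similarity(a, b):
    """Unigram F1 overlap: a cheap stand-in utility function."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    overlap = len(ta & tb)
    p, r = overlap / len(ta), overlap / len(tb)
    return 2 * p * r / (p + r) if p + r else 0.0

def mbr_select(candidates):
    """Pick the candidate with highest average utility vs. the others."""
    def expected_utility(c):
        others = [o for o in candidates if o is not c]
        return sum(similarity(c, o) for o in others) / max(len(others), 1)
    return max(candidates, key=expected_utility)

cands = ["the cat sat on the mat",
         "a cat sat on a mat",
         "completely unrelated text"]
best = mbr_select(cands)  # the consensus-like candidates score highest
```

The idea is that outlier generations disagree with everything and get low expected utility, so MBR favors the "consensus" sample without needing any reward model.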
@ammar__khairi
Ammar Khairi
7 days
🦔 We first introduce "Hedged Sampling": mixing deterministic and stochastic sampling methods. Across models, this achieves +8.1% win-rate on multilingual tasks (+7.2% English) vs. the best single-sample baselines! 📈
Tweet media one
1
0
9
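A minimal sketch of what mixing deterministic and stochastic sampling could look like under a fixed budget: one slot goes to the greedy decode, the remaining slots to temperature sampling. `generate` is a hypothetical stand-in for the model call, not an API from the paper.

```python
import random

def generate(prompt, temperature, rng):
    # Stand-in for an LLM call: temperature 0 is deterministic (greedy),
    # higher temperatures inject randomness via the rng.
    if temperature == 0:
        return f"greedy answer to: {prompt}"
    return f"sampled answer {rng.randint(0, 9999)} to: {prompt}"

def hedged_sample(prompt, budget=5, temperature=0.7, seed=0):
    """Build one sample pool mixing greedy and stochastic decodes."""
    rng = random.Random(seed)
    pool = [generate(prompt, 0, rng)]       # hedge: one greedy decode
    for _ in range(budget - 1):             # rest of the budget: stochastic
        pool.append(generate(prompt, temperature, rng))
    return pool

pool = hedged_sample("What is MBR?", budget=5)
```

The hedge guarantees the pool is never worse than the greedy baseline, while the stochastic samples give the selection step diversity to exploit.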
@ammar__khairi
Ammar Khairi
7 days
Scaling up inference compute often boosts performance, but we ask: can we get these gains
With just 5 samples?! ✅
Across languages and tasks? ✅
Without specialized reward models? ✅
Tweet media one
1
0
8
@ammar__khairi
Ammar Khairi
7 days
We propose LLMonade 🍋, an inference scaling recipe that helps you squeeze the most performance from your LLMs in two steps:
Sampling 🍋 🍋 🍋 🍋 🍋
Selection 🍋
LLMonade gets you the best sample for your budget across languages and tasks!
Tweet media one
1
0
8
@ammar__khairi
Ammar Khairi
7 days
🚀 Want better LLM performance without extra training or special reward models? Happy to share my work with @Cohere_labs: "When Life Gives You Samples: Benefits of Scaling Inference Compute for Multilingual LLMs". 👀 How we squeeze more from less at inference 🍋, details in 🧵
Tweet media one
2
20
32
@ammar__khairi
Ammar Khairi
8 days
RT @cohere: We're excited to be a founding participant in the @StanfordDDL Industry-Wide Forum on AI agents alongside @Meta, @Oracle, and @…
0
3
0
@ammar__khairi
Ammar Khairi
8 days
RT @mrdanieldsouza: 🚨 Wait, adding simple markers 📌 during training unlocks outsized gains at inference time?! 🤔 🚨 Thrilled to share our la…
0
17
0
@ammar__khairi
Ammar Khairi
8 days
RT @dianaabagyan: 🚨 New pretraining paper on multilingual tokenizers 🚨 Super excited to share my work with @Cohere_Labs: One Tokenizer To R…
0
33
0
@ammar__khairi
Ammar Khairi
8 days
RT @Cohere_Labs: This July join the Cohere Labs Open Science Community for ML Summer School 📚. This series is organized and hosted by…
0
21
0