Maxime Labonne

@maximelabonne

Followers: 24K · Following: 8K · Media: 748 · Statuses: 3K

Head of Post-Training @LiquidAI_ 💻 GitHub: https://t.co/ElXDsjz8YP 🤗 HF: https://t.co/2ECS7GiJGD 📝 Blog: https://t.co/Gz5bhbXWT0

London, England
Joined October 2017
@maximelabonne
Maxime Labonne
1 day
LFM2-ColBERT-350M is also very fast! Its inference speed is on par with GTE-ModernColBERT-v1 (only 150M parameters) for query and document encoding across various batch sizes.
1
0
2
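For anyone wanting to reproduce this kind of speed comparison, here is a minimal timing sketch. It assumes the PyLate library's models.ColBERT class and its encode(..., is_query=...) method, and the two Hugging Face repo ids are inferred from the model names mentioned above; the batch sizes, toy corpus, and throughput printout are purely illustrative.

```python
# Rough encoding-speed comparison sketch (assumes the PyLate API: models.ColBERT + encode).
import time
from pylate import models

MODELS = ["LiquidAI/LFM2-ColBERT-350M", "lightonai/GTE-ModernColBERT-v1"]  # assumed repo ids
docs = ["A sample passage about multilingual retrieval."] * 256  # toy corpus

for name in MODELS:
    model = models.ColBERT(model_name_or_path=name)
    for batch_size in (8, 32, 128):
        start = time.perf_counter()
        model.encode(docs, batch_size=batch_size, is_query=False)  # document encoding
        elapsed = time.perf_counter() - start
        print(f"{name} | batch_size={batch_size} | {len(docs) / elapsed:.1f} docs/s")
```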
@maximelabonne
Maxime Labonne
1 day
Even more interestingly, LFM2-ColBERT-350M is an excellent cross-lingual retriever. This means that it is capable of retrieving documents based on queries from other languages.
1
0
3
@maximelabonne
Maxime Labonne
1 day
We extended the NanoBEIR benchmark to include Japanese and Korean languages. We open-sourced this dataset on Hugging Face at LiquidAI/nanobeir-multilingual-extended for reproducibility. On this NanoBEIR benchmark, LFM2-ColBERT-350M displays significantly stronger multilingual performance.
1
0
4
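To inspect the extended benchmark yourself, here is a minimal sketch with the datasets library, assuming the repository name given in the tweet; the configuration and split names below are guesses, so check the dataset card for the actual layout.

```python
# Sketch: peek at the extended NanoBEIR data (config/split names are assumed, not verified).
from datasets import get_dataset_config_names, load_dataset

repo = "LiquidAI/nanobeir-multilingual-extended"
configs = get_dataset_config_names(repo)   # e.g. per-task or per-language subsets
print(configs)

ds = load_dataset(repo, configs[0], split="train")  # split name may differ
print(ds[0])
```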
@maximelabonne
Maxime Labonne
1 day
Late interaction retrievers like LFM2-ColBERT-350M are particularly interesting because they preserve much of the expressivity of re-rankers while retaining the efficiency of bi-encoders.
1
2
15
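Concretely, a late interaction model encodes queries and documents independently like a bi-encoder, but keeps one embedding per token and scores a query-document pair with a MaxSim sum, which recovers some of the token-level matching a re-ranker performs. A minimal PyTorch sketch of that scoring step, assuming L2-normalized token embeddings:

```python
import torch

def maxsim_score(query_tokens: torch.Tensor, doc_tokens: torch.Tensor) -> torch.Tensor:
    """ColBERT-style late interaction score.

    query_tokens: (num_query_tokens, dim), doc_tokens: (num_doc_tokens, dim),
    both assumed L2-normalized so dot products are cosine similarities.
    """
    sim = query_tokens @ doc_tokens.T      # (q_tokens, d_tokens) similarity matrix
    return sim.max(dim=1).values.sum()     # best document token per query token, summed

# Toy usage: document token embeddings can be precomputed offline,
# so only the cheap MaxSim runs at query time.
q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(200, 128), dim=-1)
print(maxsim_score(q, d))
```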
@maximelabonne
Maxime Labonne
1 day
๐Ÿ’ LFM2-ColBERT-350M: One Model to Embed Them All Very happy to announce our first embedding model! It's a late interaction retriever with excellent multilingual performance. Available today on @huggingface!
7
12
106
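For context, this is roughly how a ColBERT-style checkpoint is wired up for indexing and retrieval. The class and argument names below follow PyLate's README-style API from memory (indexes.Voyager, retrieve.ColBERT, add_documents, retrieve), so treat them as assumptions and defer to the model card's official snippet.

```python
# Sketch of a ColBERT-style index-and-search flow (PyLate-like API; names are assumptions).
from pylate import indexes, models, retrieve

model = models.ColBERT(model_name_or_path="LiquidAI/LFM2-ColBERT-350M")
index = indexes.Voyager(index_folder="colbert-index", index_name="demo", override=True)
retriever = retrieve.ColBERT(index=index)

documents = [
    "Late interaction keeps one embedding per token.",
    "Bi-encoders pool everything into a single vector.",
]
doc_embeddings = model.encode(documents, is_query=False)
index.add_documents(documents_ids=["d0", "d1"], documents_embeddings=doc_embeddings)

query_embeddings = model.encode(["what is late interaction?"], is_query=True)
print(retriever.retrieve(queries_embeddings=query_embeddings, k=2))
```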
@maximelabonne
Maxime Labonne
7 days
Available today on @huggingface
huggingface.co
0
0
5
@maximelabonne
Maxime Labonne
7 days
LFM2-VL-3B just dropped! It's a bigger version of our VLMs with fast inference and strong performance. Look at how gracefully it distinguishes dogs from ice cream scoops
3
5
61
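As a rough guide to driving a vision-language checkpoint like this through transformers, here is a hedged sketch; the repo id LiquidAI/LFM2-VL-3B is inferred from the model name in the tweet, and the processor and chat-template details may differ from the official model card.

```python
# Sketch: chatting with a VLM via transformers (repo id and template details are assumptions).
from transformers import AutoModelForImageTextToText, AutoProcessor
from PIL import Image

repo = "LiquidAI/LFM2-VL-3B"  # assumed repo id based on the announcement
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForImageTextToText.from_pretrained(repo, device_map="auto")

image = Image.open("dog_or_ice_cream.jpg")
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Is this a dog or an ice cream scoop?"}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))
```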
@maximelabonne
Maxime Labonne
15 days
📚 Efficient Language Specialization for Small Language Models @maxencelsb and @SinoueG have released a preprint about their excellent work on fine-tuning small models in French. It shows a solid post-training pipeline to improve French performance while preserving English performance.
5
17
121
@maximelabonne
Maxime Labonne
16 days
New LFM2 release 🥳 It's a Japanese PII extractor with only 350M parameters. It's extremely fast and on par with GPT-5 (!) in terms of quality. Check it out, it's available today on @huggingface!
13
28
194
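Here is a sketch of how a small generative PII extractor is typically called with transformers. The repository id below is hypothetical (the tweet does not give the exact name), and the model is assumed to return structured output listing the detected PII fields; check the actual release on Hugging Face.

```python
# Sketch: calling a small generative PII extractor (the repo id below is hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LiquidAI/LFM2-350M-PII-Extract-JP"  # hypothetical name; check the real release
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

text = "山田太郎です。電話番号は090-1234-5678、住所は東京都港区1-2-3です。"
messages = [{"role": "user", "content": text}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens (assumed to be structured PII output).
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```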
@adrgrondin
Adrien Grondin
18 days
Apple wasn't kidding, the iPhone 17 Pro is really built for running LLMs. Here's LFM2 8B A1B by @LiquidAI_ running on-device with MLX in @LocallyAIApp; the iPhone runs the 8B model with zero struggle. Thanks @Prince_Canuma for the port to MLX, it made the MLX Swift port possible.
98
211
3K
@mlech26l
Mathias Lechner
17 days
Day 1 of the @LiquidAI_ fine-tuning hackathon in Tokyo this weekend. Jointly organized with @weights_biases and @LambdaAPI
1
7
50
@awnihannun
Awni Hannun
20 days
@adrgrondin @LocallyAIApp @LiquidAI_ That is a very nice model for the iPhone.
- MoE = very fast generation
- Mostly conv layers = low memory footprint and fast for long context.
2
2
17
@maximelabonne
Maxime Labonne
22 days
@huggingface LFM2-8B-A1B and GGUF quants are now available on Hugging Face! https://t.co/QxHyipGjnp
huggingface.co
0
1
24
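For running the GGUF quants locally, here is a minimal sketch with the llama-cpp-python bindings; the repo id and quant filename below are assumptions, so list the repository files to pick the one you want.

```python
# Sketch: running a GGUF quant locally via llama-cpp-python (repo id and filename assumed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-8B-A1B-GGUF",  # assumed repo id for the GGUF quants
    filename="*Q4_K_M.gguf",              # glob for a mid-size quant; adjust as needed
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize late interaction retrieval in one line."}]
)
print(out["choices"][0]["message"]["content"])
```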
@maximelabonne
Maxime Labonne
22 days
@huggingface We also have a nice preference optimization combining length-normalized DPO and APO. 👀
1
0
10
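The thread does not spell out the exact objective, but for reference, here is a minimal sketch of a length-normalized DPO loss, where sequence log-probabilities are divided by their token counts before the usual DPO log-ratio. How this is combined with APO in the LFM2 recipe is not described here, and the beta value is illustrative.

```python
import torch
import torch.nn.functional as F

def ln_dpo_loss(policy_chosen_logps, policy_rejected_logps,
                ref_chosen_logps, ref_rejected_logps,
                chosen_lens, rejected_lens, beta=0.1):
    """Length-normalized DPO: normalize sequence log-probs by length before the log-ratio."""
    pi_c, pi_r = policy_chosen_logps / chosen_lens, policy_rejected_logps / rejected_lens
    ref_c, ref_r = ref_chosen_logps / chosen_lens, ref_rejected_logps / rejected_lens
    logits = beta * ((pi_c - ref_c) - (pi_r - ref_r))  # preference margin between chosen/rejected
    return -F.logsigmoid(logits).mean()

# Toy usage with summed token log-probs and sequence lengths for a batch of two pairs.
loss = ln_dpo_loss(torch.tensor([-50.0, -80.0]), torch.tensor([-70.0, -95.0]),
                   torch.tensor([-55.0, -85.0]), torch.tensor([-68.0, -90.0]),
                   torch.tensor([40.0, 60.0]), torch.tensor([45.0, 70.0]))
print(loss)
```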
@yueqi_song
Yueqi Song
6 hours
We just built and released the largest dataset for supervised fine-tuning of agentic LMs, 1.27M trajectories (~36B tokens)! Up until now, large-scale SFT for agents has been rare - not for lack of data, but because of fragmentation across heterogeneous formats, tools, and interfaces.
arxiv.org
Public research results on large-scale supervised finetuning of AI agents remain relatively rare, since the collection of agent training data presents unique challenges. In this work, we argue...
7
43
257
@maximelabonne
Maxime Labonne
22 days
@huggingface Compared to the 2.6B, this one has a big boost in knowledge capacity, but also in code!
1
0
8
@maximelabonne
Maxime Labonne
22 days
@huggingface We have results for 16 different benchmarks. This new model outperforms our LFM2-2.6B and similar-sized models. It's really good at math, but also at creative writing.
1
0
12