Pablo Montalvo Profile
Pablo Montalvo

@m_olbap

Followers
861
Following
327
Media
25
Statuses
125

ML Engineer @HuggingFace. Previously ML R&D @ Rakuten. Computer vision and NLP mixer, ex-physicist. Dice thrower, dreamer, learner. He/him. Usually friendly :)

Paris, France
Joined December 2019
@m_olbap
Pablo Montalvo
24 days
Ever wondered how models actually see an image? I've been playing with visualizations of patch extraction and token layouts, and how they affect predictions. Planning a short visual deep dive comparing how different models process images. Would love thoughts before I go on.
1
5
25
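The ViT-style patch extraction alluded to above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular model's preprocessing: the patch size, channel-last layout, and flattening order are assumptions for the example.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int) -> np.ndarray:
    """Split an (H, W, C) image into flattened, non-overlapping patches.

    Returns an array of shape (num_patches, patch_size * patch_size * C),
    i.e. one row per patch token, in row-major patch order.
    """
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0, "image dims must be divisible by patch size"
    # (H/P, P, W/P, P, C) -> (H/P, W/P, P, P, C) -> (num_patches, P*P*C)
    patches = image.reshape(H // P, P, W // P, P, C)
    patches = patches.transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, P * P * C)

img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
tokens = extract_patches(img, patch_size=2)
print(tokens.shape)  # (4, 12): four 2x2 patches, each flattened to 12 values
```

Each row of `tokens` is what then gets linearly projected into an embedding; different models diverge mainly in patch size, resolution handling, and token ordering.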
@m_olbap
Pablo Montalvo
2 months
RT @art_zucker: A quick update on the future of the `transformers` library! In order to provide a source of truth for all models, we are w…
0
104
0
@m_olbap
Pablo Montalvo
2 months
RT @LysandreJik: The Transformers library is undergoing its largest pivot to date 🙌. It now cements its role as the central model definiti…
0
59
0
@m_olbap
Pablo Montalvo
2 months
And if someone wonders what the dip is: it's the models we merged a long time ago that had zero usage. They are deprecated - still supported, but in a different directory than /models.
0
0
1
@m_olbap
Pablo Montalvo
2 months
Researchers, scientists, this also means your research doesn’t live in a vacuum: once a model is integrated into Transformers, it stays maintained, benefits from community testing and ablations, and has better chances to be reproducible, adapted, and improved upon 🧪.
1
0
1
@m_olbap
Pablo Montalvo
2 months
Crucially, Transformers is now a supported backend in vLLM, SGLang, and others. We're working closely with all of them so anyone can run these model definitions with their engine of choice.
1
0
1
@m_olbap
Pablo Montalvo
2 months
Hackability and reusability go hand-in-hand with maintainability, thanks to strongly opinionated paradigms in Transformers ⚡️.
1
0
1
@m_olbap
Pablo Montalvo
2 months
Just look at the growth curve 📈: the number of vision-language models (VLMs) has nearly doubled in the past year. Audio and video are picking up speed. Text is a given. Vision-language-action models (VLAs) are around the corner with LeRobot and community efforts.
1
0
1
@m_olbap
Pablo Montalvo
2 months
Slides here: feel free to clone/fork the space directly on the hub! We’re moving from hype to hyper, from large to larger models. But how do we maintain a library that supports and defines over 300 models - and growing - without breaking?
1
0
1
@m_olbap
Pablo Montalvo
2 months
Had the pleasure of speaking last week at @PyTorch Day France about PyTorch 🔥, the ML community, @vllm_project, and 🤗 Transformers! I’ve pushed my slides to the Hub directly - much easier to share with practitioners 📤.
1
4
21
@m_olbap
Pablo Montalvo
3 months
RT @DAubakirovaa: We've ported Pi0-FAST, the autoregressive VLA, developed by @physical_int in @LeRobotHF! 🤖 Big thanks to @MustafaShuko…
0
9
0
@m_olbap
Pablo Montalvo
5 months
RT @chelseabfinn: Our first open-source release at Pi 🤖: π₀ and π₀-FAST model weights; code for model, on-robot inference, & fine-tuning…
0
115
0
@m_olbap
Pablo Montalvo
5 months
RT @physical_int: Many of you asked for code & weights for π₀; we are happy to announce that we are releasing π₀ and pre-trained checkpoin…
0
213
0
@m_olbap
Pablo Montalvo
5 months
RT @DAubakirovaa: Let's goooo! We’ve ported robot foundation models to Hugging Face LeRobot! 🎉 Meet π0 & π0-FAST, developed by @physical_…
0
9
0
@m_olbap
Pablo Montalvo
5 months
kudos to @DAubakirovaa for the neat writeup.
0
1
8
@m_olbap
Pablo Montalvo
5 months
🤖🤖 The first robotics foundation model, Pi0, is released today by @physical_int! Had a blast working on the LeRobot port from jax to torch with @RemiCadene. Good vibes slicing jax weights and messing with attention masks :D. Check out the awesome blog post below!
9
25
126
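The jax-to-torch weight slicing mentioned above comes down to conversion details like the one sketched below, with NumPy standing in for both frameworks. This is an illustrative assumption about one common step (`port_dense_kernel` and the shapes are made up for the example), not Pi0's actual checkpoint layout:

```python
import numpy as np

def port_dense_kernel(jax_kernel: np.ndarray) -> np.ndarray:
    """Convert a Flax-style dense kernel to a PyTorch-style linear weight.

    Flax Dense kernels are stored as (in_features, out_features), while
    torch.nn.Linear weights are (out_features, in_features), so each kernel
    gets transposed (and made contiguous) when porting a checkpoint.
    """
    return np.ascontiguousarray(jax_kernel.T)

rng = np.random.default_rng(0)
jax_kernel = rng.standard_normal((512, 256))  # (in, out), Flax convention
torch_weight = port_dense_kernel(jax_kernel)
print(torch_weight.shape)  # (256, 512), PyTorch convention
```

A real port repeats this for every parameter while also renaming keys and reshaping attention projections to match the target module hierarchy.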
@m_olbap
Pablo Montalvo
7 months
RT @osanseviero: Aaaand 🥁 we just shipped PaliGemma 2! New open vision models: your friendly sizes 3B, 10B, 28B; 3 different resolution…
0
77
0
@m_olbap
Pablo Montalvo
7 months
Very happy that (some) big orgs keep shipping OS models that can, by design, be transferred easily - between this and VLMs getting smaller and smaller, the accessibility of these models goes up, which is a metric I'm after 🤗 (and I love the simplicity of the PG2 arch).
0
0
0
@m_olbap
Pablo Montalvo
7 months
Congrats to @AndreasPSteiner, @GoogleDeepMind and @GoogleAI for this very impactful open release 🚀 Be sure to check the wonderful blog as well by Andreas, @mervenoyann, @pcuenq and @ariG23498!
1
0
1
@m_olbap
Pablo Montalvo
7 months
PaliGemma2 is out! 🔥🔥 And it's already available in transformers 🎉🎉 With a Gemma2 decoder, it's a family of small (and a 27b, alright) models built for transfer/fine-tuning for all kinds of VLM tasks! Super excited to see this out, it's a strict upgrade from PaliGemma!
1
3
31