Andre Martins
@andre_t_martins
Followers: 2K · Following: 1K · Media: 79 · Statuses: 674
NLP/ML researcher in Lisbon (@ https://t.co/nau7DrwFWR)
Lisbon, Portugal
Joined June 2015
Incredibly honored to serve as #EMNLP 2026 Program Chair along with @sunipa_dev and @HungyiLee2, and General Chair @andre_t_martins. Looking forward to Budapest!! (With thanks to @ChuyuanLi who took this photo in Suzhou!)
🌉 #EMNLP2026 will be October 24-29 in Budapest! 🌉 Thanks all for a great conference, and see you at the next one!
🔥New model alert from sardines @Guilherme_PT1 @psanfernandes @SonalSannigrahi @ManosZaranis @Saul_Santos1997 @andre_t_martins!! 🔥 TowerVision for image and video understanding with advanced multilingual translation capabilities! Check it out: https://t.co/mIIvRtMxJ5 🚀
guilhermeviveiros.github.io
TowerVision is a fully open-source multilingual Vision-Language Model designed to bridge multilingual and multicultural gaps in visual understanding tasks across 20 languages.
Check out TowerVision, a multilingual multicultural VLM powered by the @sardine_lab_it. Great work led by @Guilherme_PT1 and @psanfernandes! Bonus: TowerVideo, kudos to @Saul_Santos1997!
This project is the result of an amazing collaboration between researchers: Authored by: @Guilherme_PT1 @psanfernandes @Saul_Santos1997 @SonalSannigrahi @ManosZaranis @nunonmg @amin_farajian @PierreColombo6 @gneubig @andre_t_martins
I'll be at COLM today presenting M-Prometheus (morning, Poster 40) and Zero-shot Benchmarking (afternoon, Poster 9). Come check it out!
Don't miss our lab's presentations today at @COLM_conf!! 🔥 We will have two presentations 1/3
BTW, we are looking for post-docs & PhD students to join our @sardine_lab_it ( https://t.co/afNvdmT9wu)! Reach out if interested!
5) EuroBERT: Scaling Multilingual Encoders for European Languages w/ @N1colAIs @gisship @DuarteMRAlves @AyoubHammal @UndefBehavior @Fannyjrd_ @ManuelFaysse @peyrardMax @psanfernandes @RicardoRei7 @PierreColombo6 @tomaarsen - Poster session 5, Thu Oct 9, 11:00 AM – 1:00 PM
4) Multilingual Contextualization of Large Language Models for Document-Level Machine Translation w/ Miguel Moura Ramos @psanfernandes @swetaagrawal20 - Poster session 5, Thu Oct 9, 11:00 AM – 1:00 PM
3) Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering w/ @psanfernandes @swetaagrawal20 @ManosZaranis @gneubig - Poster session 5, Thu Oct 9, 11:00 AM – 1:00 PM
2) M-Prometheus: A Suite of Open Multilingual LLM Judges w/ @zmprcp @dongkeun_yoon @psanfernandes @ianwu97 @seungonekim @RicardoRei7 @gneubig - (Poster session 1, Tue Oct 7, 11:00 AM – 1:00 PM)
1) Zero-shot Benchmarking: A Framework for Flexible and Scalable Automatic Evaluation of Language Models w/ @zmprcp @nunonmg @RicardoRei7 - Poster session 2, Tue Oct 7, 4:30 PM – 6:30 PM
I'm heading soon to Montreal for @COLM_conf ! Our lab is presenting the following 5 papers: 🧵
Check out this great work led by @ospanbatyr !
Multimodal models typically need millions of examples from each modality paired with text for training. With SEMI 🌓, we integrate new low-resource modalities into LLMs with as few as 32 samples — including satellite images, galaxies, sensors, and molecules. (1/6)
I'll be at ACL presenting our work, A Context-aware Framework for Translation-mediated Conversations ( https://t.co/3Y3IM2n3HU) in the Machine Translation session, 28 Jul, 14:00-15:30, room 1.85. Come check it out if you're interested in bilingual chat MT!
The sparsemax paper has now reached 1000 citations and keeps bearing fruit. Two recent sparse attention examples: long-context efficiency with AdaSplash ( https://t.co/t3K24epba8) and better length generalization with ASEntmax ( https://t.co/wbBfJWZqkA). Check the story below!
Our ICML’16 sparsemax paper ( https://t.co/y5AeIw023P) just reached 500 citations. I’m very proud of this paper, so I thought I’d tell a bit of its story here.
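For readers curious what sparsemax actually computes: it is the Euclidean projection of a score vector onto the probability simplex, which produces exact zeros where softmax would assign small positive mass. A minimal NumPy sketch of the sort-based O(n log n) variant (the function name and example scores here are illustrative, not from the paper's code):

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Project scores z onto the probability simplex (sparse output)."""
    z_sorted = np.sort(z)[::-1]                # scores in descending order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum        # coordinates kept in the support
    k_max = k[support][-1]                     # size of the support set
    tau = (cumsum[k_max - 1] - 1.0) / k_max    # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 0.0, -1.0])))   # exact zeros for low-scoring items
```

Unlike softmax, low-scoring entries get exactly zero probability, which is what makes sparsemax-based attention maps sparse and, with kernels like AdaSplash, fast at long context lengths.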
Attending #ICML2025? Come see our oral presentation on "AdaSplash: Adaptive Sparse Flash Attention" today at 15:30 (Oral 2D Efficient ML) or catch us in the poster session at 16:30 (East Exhibition Hall A-B #E-3305). With Nuno Gonçalves and @MarcosTreviso.
Thrilled to share that our paper AdaSplash: Adaptive Sparse Flash Attention was accepted as a Spotlight at ICML 2025! 🎉 What makes our sparse attention different? ✅ Fully differentiable ✅ Adaptive ✅ Naturally sparse ✅ Exact (generalizes softmax) 📄 https://t.co/O84fJ5tdXJ
Sparse attention isn't just effective... It's also fast! ✅ Our work builds on AdaSplash, an accelerated sparse attention kernel that will be presented as an Oral at ICML 2025 🎤 📦 Code:
github.com
AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) - deep-spin/adasplash
Attending @aclmeeting in Vienna? Check out our Unit's papers 👇 Peters, Martins https://t.co/HKxv6vHBMv Zaranis et al. https://t.co/o3IRejXZcJ Fucci et al. https://t.co/cJYVrSgULS Gomes et al. https://t.co/VYYJe3AoRh Pombal et al. https://t.co/3lKePHNmwa
The NeurIPS paper checklist corroborates the bureaucratic theory of statistics.
argmin.net
The NeurIPS checklist corroborates the bureaucratic theory of statistics.