André Araujo
@andrefaraujo
Followers: 305 · Following: 20 · Media: 9 · Statuses: 39
I'm a Research Scientist at Google DeepMind, working on computer vision and machine learning.
São Paulo, Brazil
Joined July 2009
We're hiring! 🚀 Our @GoogleDeepMind team is looking for a Research Engineer to help push the boundaries of Multimodal AI! The position is in Mountain View, and we're seeking candidates who recently finished their PhDs or have equivalent experience.
job-boards.greenhouse.io
0 · 0 · 6
🚨 Deadline Extension: Instance-Level Recognition and Generation (ILR+G) Workshop at ICCV2025 @ICCVConference
📅 New deadline: June 26, 2025 (23:59 AoE)
📄 Paper submission: https://t.co/gTGYhrTc6Z
🌐 ILR+G website: https://t.co/Oy1vGAg5uh
#ICCV2025 #ComputerVision #AI
1 · 5 · 10
Excited to share our TUNER paper tomorrow morning at #CVPR2025! Come find us at poster #278 during the morning session. BTW, see TUNER's code at:
github.com/DianaPat/TUNER
@andrefaraujo @DianaAldana97 @lvelho Excited to present our highlight paper TUNER tomorrow (June 13) at #CVPR2025! Come find us at poster #278 during the morning session—we’d love to chat and hear your thoughts. Code is available here: https://t.co/ygAZx6I6mQ.
0 · 0 · 5
Talk starting at 10:45 AM in room 202 C! #CVPR2025
0 · 0 · 2
What’s missing in Multimodal AI? Well… a lot of things! I’ll share some thoughts on this at the #CVPR2025 BEAM workshop on Wednesday ( https://t.co/RDn9r9uy7X). A few things I'll highlight: spatial awareness, effective tool use and fine-grained understanding. Please come by!
1 · 0 · 11
🚨 Call for Papers! 7th Instance-Level Recognition and Generation Workshop (ILR+G) at @ICCVConference
📍 Honolulu, Hawaii 🌺
📅 October 19–20, 2025
🌐 https://t.co/Oy1vGAg5uh
In-proceedings deadline: June 7 · Out-of-proceedings deadline: June 30
#ICCV2025
1 · 5 · 10
We'll be presenting TIPS today at #ICLR2025! Please come by poster 318 this morning to discuss with our team; we're looking forward to it!
0 · 0 · 7
Google's global PhD Fellowship program opens for applications this week, on Apr 10th! The program supports PhD students in computer science and related fields and pairs each fellow with a Google mentor. Learn more and apply at: https://t.co/ynVQDf5xLi (deadline: May 15th, 2025)
research.google
0 · 1 · 5
Multimodal AI encoders often lack spatial understanding… but not anymore! Our #ICLR2025 TIPS model (Text-Image Pretraining with Spatial awareness) from @GoogleDeepMind can help 💡🚀 Check out our strong & versatile image-text encoder 💪 Paper & code: https://t.co/LCiqV4gaQ0
6 · 68 · 328
And here goes the camera-ready version of the TIPS paper: https://t.co/LCiqV4gaQ0 Amazing work from our team at @GoogleDeepMind!
arxiv.org
While image-text representation learning has become very popular in recent years, existing models tend to lack spatial awareness and have limited direct applicability for dense understanding...
0 · 0 · 3
Excited to release a super capable family of image-text models from our TIPS #ICLR2025 paper! https://t.co/1scX7H1DIb We have models from ViT-S to -g, with spatial awareness, suitable for many multimodal AI applications. Can’t wait to see what the community will build with them!
github.com/google-deepmind/tips
1 · 6 · 16
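For readers wondering how such a spatially-aware image-text encoder is typically used: below is a minimal, self-contained sketch of the general pattern (toy modules and shapes; not the actual TIPS API from the repo above). The image tower yields per-patch features for dense tasks plus a pooled global embedding that can be matched against text embeddings.

```python
# Minimal sketch of dual image-text encoding with dense (per-patch) outputs.
# Hypothetical toy modules; NOT the actual TIPS API. Assumes: 224x224 inputs,
# 16x16 patches (a 14x14 grid), embedding dim 256.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageTower(nn.Module):
    def __init__(self, dim=256, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, images):
        feats = self.proj(images)                    # (B, dim, 14, 14)
        patches = feats.flatten(2).transpose(1, 2)   # (B, 196, dim): dense features
        pooled = patches.mean(dim=1)                 # (B, dim): global embedding
        return patches, pooled

class ToyTextTower(nn.Module):
    def __init__(self, vocab=1000, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)

    def forward(self, token_ids):
        return self.emb(token_ids).mean(dim=1)       # (B, dim): text embedding

image_tower, text_tower = ToyImageTower(), ToyTextTower()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 1000, (2, 8))

patch_feats, img_emb = image_tower(images)
txt_emb = text_tower(tokens)

# Image-level use: cosine similarity for retrieval / zero-shot classification.
sim = F.normalize(img_emb, dim=-1) @ F.normalize(txt_emb, dim=-1).T
# Dense use: per-patch features can feed segmentation / depth heads off the shelf.
print(sim.shape, patch_feats.shape)  # torch.Size([2, 2]) torch.Size([2, 196, 256])
```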
Happy to share our #CVPR2025 paper on robust training for sinusoidal neural networks! Our theoretical insights shed light on the capacity of these models and are directly useful for improving their training. https://t.co/9PCQkYJh3Y with @TiagoNovello, @DianaAldana97, @lvelho
1 · 9 · 40
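For context on the model class mentioned above: a sinusoidal neural network is an MLP whose activations are sine functions, commonly used to fit coordinate-based signals. Here is a generic SIREN-style sketch (illustrative only; not the paper's robust-training scheme). The frequency scale w0 is the kind of knob such capacity analyses typically study.

```python
# Minimal SIREN-style sinusoidal MLP fitting a 1-D signal.
# Generic illustration of the model class; NOT the paper's training method.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0  # frequency scale; strongly affects what the net can represent

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

model = nn.Sequential(SineLayer(1, 64), SineLayer(64, 64), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.linspace(-1, 1, 256).unsqueeze(-1)      # input coordinates
y = torch.sin(10 * x) + 0.5 * torch.sin(30 * x)   # target signal to fit

for step in range(200):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()           # plain MSE reconstruction loss
    loss.backward()
    opt.step()
```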
Very happy to see learnings from our TIPS method (ICLR'25 accepted https://t.co/IP6JowSDcE) adopted into SigLIP2! A very nice collaboration, great outcome!
arxiv.org
Introducing SigLIP2: now trained with additional captioning and self-supervised losses! Stronger everywhere:
- multilingual
- classification / retrieval
- localization
- OCR
- captioning / VQA
Try it out, backward compatible! Models: https://t.co/3hOdqcy9QD Paper: https://t.co/Tp4D8Syld8
0 · 1 · 11
Great work from our team at @GoogleDeepMind! With @kmaninis, @kfrancischen, @sohamg121, @arjunkarpur, Koert Chen, Ye Xia, Bingyi Cao, @GuangxingHan, Jan Dlabal, Dan Gnanapragasam, Mojtaba Seyedhosseini, @howardzzh
0 · 2 · 3
Want some TIPS? Well, then check out “Text-Image Pretraining with Spatial awareness” :) TIPS is a general-purpose image-text encoder, for off-the-shelf dense and image-level prediction. Finally image-text pretraining with spatially-aware representations! https://t.co/LCiqV4gaQ0
4 · 11 · 50
Starting soon! Don't miss Cordelia Schmid's keynote at 9:10am! @CordeliaSchmid #ECCV2024
0 · 0 · 2
#ECCV2024 Our Instance-Level Recognition workshop is tomorrow morning (Monday 9am at Amber 5)! Great keynotes (@CordeliaSchmid, @jampani_varun, @g_kordo), accepted papers and invited papers from the main conference. Don't miss it! https://t.co/9ztsdEqaao
0 · 1 · 11
Really happy to share that UDON was accepted into NeurIPS'24! Paper: https://t.co/YxdXHqg1nB Code: https://t.co/SlVJLn7UHB with @YpsilantisNikos, @kfrancischen, Ondra Chum
github.com/nikosips/UDON
Excited to release UDON, our latest & greatest universal image embedding! Effective and efficient multi-teacher distillation to improve performance across different fine-grained domains. Code coming soon! https://t.co/trDeeyOXl7 with @YpsilantisNikos, @kfrancischen, Ondřej Chum
1 · 2 · 13
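As background on the technique named in the announcement: multi-teacher distillation trains a single universal student embedding to agree with several domain-specific teachers. Below is a generic sketch (hypothetical toy modules; not the UDON recipe), using one projection head per teacher and a cosine-distance distillation loss.

```python
# Generic multi-teacher embedding distillation sketch (NOT the UDON recipe):
# one universal student is trained to match several frozen domain teachers,
# each through its own projection head.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, num_teachers = 128, 3
student = nn.Linear(512, dim)                                # toy student backbone
projections = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_teachers))
teachers = [nn.Linear(512, dim).eval() for _ in range(num_teachers)]
for t in teachers:
    t.requires_grad_(False)                                  # teachers stay frozen

opt = torch.optim.Adam(
    list(student.parameters()) + list(projections.parameters()), lr=1e-3
)

features = torch.randn(16, 512)  # stand-in for a batch of image features
opt.zero_grad()
s = student(features)
# Each teacher supervises the student through a teacher-specific projection;
# cosine distance pulls the projected student toward that teacher's embedding.
loss = sum(
    (1 - F.cosine_similarity(proj(s), teacher(features), dim=-1)).mean()
    for proj, teacher in zip(projections, teachers)
)
loss.backward()
opt.step()
```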
And the ILR workshop's accepted papers are announced! We're happy to feature 2 long and 3 short papers on several topics relevant to our workshop: point tracking, image re-ranking, referring segmentation, etc. See the accepted papers at
0 · 1 · 6
One more week to go until the ILR workshop submission deadline at #ECCV2024! We welcome a broad range of topics in the area of instance-level recognition, in both short and long formats. Submit at:
openreview.net (ECCV 2024 Workshop ILR)
Announcing the #ECCV2024 workshop on Instance-Level Recognition (ILR)! This is the 6th edition in our workshop series, with amazing keynote speakers: @CordeliaSchmid, @jampani_varun and @g_kordo. Call for papers now open! All information on our website: https://t.co/y2jJrvpDAa
0 · 1 · 6