Yash Goyal
@yashgoyal_
175 Followers · 101 Following · 0 Media · 63 Statuses
Looking for job opportunities in Gen AI / VLMs / XAI
Montreal, Canada
Joined June 2012
The VQA challenge series won the Mark Everingham Prize at #ICCV2025 for stimulating a new strand of vision-and-language research. It's extra special because ICCV25 marks the 10-year anniversary of the VQA paper. When we started, the idea of answering any question about any image…
Thank you to the award committee and the broader vision community for the recognition. After all these (21!) years and so many conferences across sub-disciplines in AI, the vision community continues to feel like home. What makes this extra special is that the original VQA…
My lab’s contributions at #CVPR2025:
-- Organizing @vlms4all workshop (with 2 challenges) https://t.co/6rLIvSZkeY
-- 2 main conference papers (1 highlight, 1 poster): https://t.co/k8HXEon6P8 (highlight), https://t.co/97Lji16NYO (poster)
-- 4 workshop papers (2 spotlight talks, 2…
🚨 Deadline Extension Alert for #VLMs4All! 🚨 We have extended the challenge submission deadline 🛠️
New challenge deadline: Apr 22
Show your stuff in the CulturalVQA and GlobalRG challenges! 👉 https://t.co/hCBQ4ViBf6
Spread the word and keep those submissions coming! 🌍✨
🔔 Reminder & Call for #VLMs4All @ #CVPR2025! Help shape the future of culturally aware & geo-diverse VLMs: ⚔️ Challenges: Deadline: Apr 15 🔗 https://t.co/hCBQ4ViBf6 📄 Papers (4pg): Submit work on benchmarks, methods, metrics! Deadline: Apr 22 🔗 https://t.co/qZuGR2XS7c Join us!
sites.google.com
We invite authors to submit anonymized papers of up to 4 pages that discuss identifying effective evaluation tasks, benchmarks, and metrics to assess cultural awareness and alignment in VLMs; and new...
📢Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025! 🌐 https://t.co/2eqS363p0u
Join us for the Montreal AI Symposium on October 10, featuring inspiring keynote addresses from Peter Henderson and Alison Gopnik. They will share their expertise and insights on the latest in AI research and applications. Register now! https://t.co/0JB1hJwtLC
Registrations for the Montreal AI Symposium 2024 are now open! For this year’s edition, the theme of the symposium will be AI and Governance, featuring keynote talks, contributed talks, posters and a panel. The event will close with a networking cocktail. https://t.co/l79T9dYDFd
The call for contributions for the 7th edition of the Montreal AI Symposium, taking place on October 10, 2024, is still open! Please remember that you have until August 22 to submit your proposal. Don’t miss this opportunity! Full details here https://t.co/XYFoX7GzCI
The call for papers for the 7th edition of the Montreal AI Symposium is now open! Accepted contributions will be presented at the event, either as a contributed talk or as a poster. You have until August 22 to apply! Full details here https://t.co/ujyzDFnNyn
SAIL Montreal (SAIT AI Lab Montreal from Samsung) will have a strong presence at #icml2023 next week with 8 ICML papers! See https://t.co/JfQaBsNKiL for the list, and we'll have 6 research scientists attending who will be happy to chat with you...
🔥 Exciting news! The code and pretrained model weights for our #EACL2023 paper MAPL🍁 are now available on GitHub 🎉 https://t.co/yiVq1jivFg Catch me at the conference next week to learn more about our work or just chat about multimodal vision-language 👁️💬 modeling!
@oscmansan will be presenting our recent work --MAPL🍁 -- at EACL 2023 (Oral: May 3rd @ 11:45am CEST; Poster: May 4th @ 11:15am CEST). If you are at EACL, make sure to stop by! Code, pre-trained models and a live demo (!) now available:
huggingface.co
In this new joint work (Emy Gervais, @FatrasKilian, @Cyanogenoid, @SimonLacosteJ), we show the massive power of averaging the weights of multiple models trained simultaneously! 🤯 Blog: https://t.co/A4NKr4zDIW Website: https://t.co/xBKHxrdxVV Arxiv:
arxiv.org
Ensemble methods combine the predictions of multiple models to improve performance, but they require significantly higher computation costs at inference time. To avoid these costs, multiple neural...
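The gist, for those who don't click through: instead of combining predictions at inference time, you average the weights themselves, so inference costs a single forward pass. A minimal sketch of that core idea, assuming PyTorch and models that all share one architecture (illustrative only, not the authors' actual code):

```python
# Minimal sketch of weight averaging (PyTorch assumed; illustrative,
# not the authors' code). All models must share the same architecture.
import copy
import torch

def average_weights(models):
    """Return a model whose parameters are the element-wise mean of `models`."""
    avg = copy.deepcopy(models[0])
    avg_state = avg.state_dict()
    for key in avg_state:
        # Stack the corresponding tensor from every model and take the mean,
        # casting back to the original dtype (e.g. for integer buffers).
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        avg_state[key] = stacked.mean(dim=0).to(avg_state[key].dtype)
    avg.load_state_dict(avg_state)
    return avg
```

The averaged model costs one forward pass at inference, where an ensemble of k models would cost k.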
I'm happy to share our work MAPL🍁 has been accepted to #EACL2023 @eaclmeeting (main track) 🎉 Shout-out to my wonderful co-authors @prlz77, @Saba_A96, @aidanematzadeh, @yashgoyal_ and @aagrawalAA! See you in Dubrovnik 👋
SAIL Montreal (SAIT AI Lab Montreal from Samsung) is happy to sponsor #NeurIPS2022 this week! See https://t.co/h372XBuVcv for our 5 NeurIPS papers; check out our booth at Expo Hall #224 and meet our research scientists! Note: we're still hiring!
I’m excited to share our new work MAPL🍁: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting w/ @prlz77, Saba Ahmadi, @aidanematzadeh, @yashgoyal_ and @aagrawalAA Paper: https://t.co/0xaTepXCxL 🧵👇
arxiv.org
Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method...
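In spirit, the recipe the abstract describes keeps a pre-trained vision encoder and a pre-trained language model frozen and trains only a small mapping network between them. A hedged PyTorch sketch of that idea; the dimensions, layer counts, and module names below are hypothetical, not the paper's exact architecture:

```python
# Sketch of the MAPL idea: frozen vision encoder + frozen LM, with a small
# trainable mapper projecting visual features into the LM's embedding space.
# Dimensions and names are hypothetical, not the paper's exact ones.
import torch
import torch.nn as nn

class Mapper(nn.Module):
    def __init__(self, vis_dim=768, lm_dim=4096, num_prompt_tokens=32):
        super().__init__()
        # Learned queries that become the visual "prompt" tokens for the LM.
        self.queries = nn.Parameter(torch.randn(num_prompt_tokens, vis_dim))
        layer = nn.TransformerDecoderLayer(d_model=vis_dim, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.proj = nn.Linear(vis_dim, lm_dim)

    def forward(self, image_feats):
        # image_feats: (B, N_patches, vis_dim) from the frozen vision encoder.
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        x = self.decoder(q, image_feats)  # queries cross-attend to the image
        return self.proj(x)               # (B, num_prompt_tokens, lm_dim)
```

The mapped tokens are prepended to the embedded text prompt and fed to the frozen LM; only the mapper receives gradients, which is what makes the adaptation parameter-efficient.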
Can vision & language models retrieve the correct image from a set given its contextual description (e.g. No bridesmaid visible at all)? We show that models struggle with this kind of contextual reasoning https://t.co/napfqeFGrR
https://t.co/TkIUABttG1
#ACL2022
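For concreteness, here is a minimal zero-shot baseline sketch for this task: score each candidate image against the description with CLIP and pick the best match. The checkpoint name is the standard OpenAI release; the image paths are placeholders:

```python
# Zero-shot baseline sketch for contextual image retrieval: rank candidate
# images by CLIP similarity to the description. Paths are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

description = "No bridesmaid visible at all"
candidates = [Image.open(p) for p in ["img0.jpg", "img1.jpg", "img2.jpg"]]

inputs = processor(text=[description], images=candidates,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_text  # shape (1, num_candidates)
print("retrieved image index:", logits.argmax(dim=-1).item())
```

Per the tweet, it is exactly this kind of similarity matching that struggles when the description hinges on context, such as what is absent from the image rather than present.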
I will be recruiting graduate students for Fall'22. Interested in working on vision-language research? Apply here https://t.co/jDnXRYlSZf. Deadline Dec 1st, 2021. More details on my research: https://t.co/8sGz4rO2GW.
VQA Challenge 2021 is live! Deadline: May 7. Link: https://t.co/NQDpIoN6bF. Winners to be announced at #CVPR21 VQA Workshop ( https://t.co/fLbZ6mdsuS). Other challenges @ workshop: TextVQA, TextCaps. (1/2)
visualqa.org
CVPR 2021