Yash Goyal Profile
Yash Goyal (@yashgoyal_)
175 Followers · 101 Following · 0 Media · 63 Statuses
Looking for job opportunities in Gen AI / VLMs / XAI
Montreal, Canada · Joined June 2012
#ICCV2025 (@ICCVConference) · 27 days ago
The VQA team receiving the Everingham Prize Award
0 replies · 4 reposts · 13 likes
Dhruv Batra (@DhruvBatra_) · 28 days ago
VQA challenge series won the Mark Everingham prize at #ICCV2025 for stimulating a new strand of vision-and-language research. It's extra special because ICCV25 marks the 10-year anniversary of the VQA paper. When we started, the idea of answering any question about any image …
8 replies · 10 reposts · 139 likes
Devi Parikh (@deviparikh) · 28 days ago
Thank you to the award committee and the broader vision community for the recognition. After all these (21!) years and so many conferences across sub-disciplines in AI, the vision community continues to feel like home. What makes this extra special is that the original VQA …
15 replies · 12 reposts · 219 likes
Aishwarya Agrawal (@aagrawalAA) · 5 months ago
My lab’s contributions at #CVPR2025:
-- Organizing @vlms4all workshop (with 2 challenges) https://t.co/6rLIvSZkeY
-- 2 main conference papers (1 highlight, 1 poster): https://t.co/k8HXEon6P8 (highlight), https://t.co/97Lji16NYO (poster)
-- 4 workshop papers (2 spotlight talks, 2 …
0 replies · 15 reposts · 66 likes
VLMs4All - CVPR 2025 Workshop (@vlms4all) · 7 months ago
🚨 Deadline Extension Alert for #VLMs4All! 🚨 We have extended the challenge submission deadline 🛠️ New challenge deadline: Apr 22. Show your stuff in the CulturalVQA and GlobalRG challenges! 👉 https://t.co/hCBQ4ViBf6 Spread the word and keep those submissions coming! 🌍✨
0 replies · 6 reposts · 8 likes
VLMs4All - CVPR 2025 Workshop (@vlms4all) · 7 months ago
🔔 Reminder & Call for #VLMs4All @ #CVPR2025! Help shape the future of culturally aware & geo-diverse VLMs:
⚔️ Challenges: Deadline: Apr 15 🔗 https://t.co/hCBQ4ViBf6
📄 Papers (4pg): Submit work on benchmarks, methods, metrics! Deadline: Apr 22 🔗 https://t.co/qZuGR2XS7c
Join us!
Link card (sites.google.com): We invite authors to submit anonymized papers of up to 4 pages that discuss identifying effective evaluation tasks, benchmarks, and metrics to assess cultural awareness and alignment in VLMs; and new...
VLMs4All - CVPR 2025 Workshop (@vlms4all) · 8 months ago
📢 Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025! 🌐 https://t.co/2eqS363p0u
0 replies · 5 reposts · 7 likes
Mila - Institut québécois d'IA (@Mila_Quebec) · 1 year ago
Join us for the Montreal AI Symposium on October 10, featuring inspiring keynote addresses from Peter Henderson and Alison Gopnik. They will share their expertise and insights on the latest in AI research and applications. Register now! https://t.co/0JB1hJwtLC
0 replies · 11 reposts · 26 likes
Mila - Institut québécois d'IA (@Mila_Quebec) · 1 year ago
Registrations for the Montreal AI Symposium 2024 are now open! For this year’s edition, the theme of the symposium will be AI and Governance, featuring keynote talks, contributed talks, posters and a panel. The event will close with a networking cocktail. https://t.co/l79T9dYDFd
0 replies · 6 reposts · 8 likes
Mila - Institut québécois d'IA (@Mila_Quebec) · 1 year ago
The call for contributions for the 7th edition of the Montreal AI Symposium, taking place on October 10, 2024, is still open! Please remember that you have until August 22 to submit your proposal. Don’t miss this opportunity! Full details here: https://t.co/XYFoX7GzCI
0 replies · 9 reposts · 28 likes
Mila - Institut québécois d'IA (@Mila_Quebec) · 1 year ago
The call for papers for the 7th edition of the Montreal AI Symposium is now open! Accepted contributions will be presented at the event, either as a contributed talk or as a poster. You have until August 22 to apply! Full details here: https://t.co/ujyzDFnNyn
0 replies · 10 reposts · 22 likes
Simon Lacoste-Julien (@SimonLacosteJ) · 2 years ago
SAIL Montreal (SAIT AI Lab Montreal from Samsung) will have a strong presence at #icml2023 next week with 8 ICML papers! See https://t.co/JfQaBsNKiL for the list and we'll have 6 research scientists attending who will be happy to chat with you...
0 replies · 7 reposts · 27 likes
Oscar Mañas @ ICCV (@oscmansan) · 3 years ago
🔥 Exciting news! The code and pretrained model weights for our #EACL2023 paper MAPL🍁 are now available on GitHub 🎉 https://t.co/yiVq1jivFg Catch me at the conference next week to learn more about our work or just chat about multimodal vision-language 👁️💬 modeling!
Quoting @oscmansan's EACL 2023 acceptance tweet (given in full below).
0 replies · 9 reposts · 28 likes
Aishwarya Agrawal (@aagrawalAA) · 3 years ago
@oscmansan will be presenting our recent work -- MAPL🍁 -- at EACL 2023 (Oral: May 3rd @ 11:45am CEST; Poster: May 4th @ 11:15am CEST). If you are at EACL, make sure to stop by! Code, pre-trained models and a live demo (!) now available:
Link card (huggingface.co)
Quoting @oscmansan's code-release tweet (given in full above).
1 reply · 6 reposts · 14 likes
Oscar Mañas @ ICCV (@oscmansan) · 3 years ago
I'm happy to share our work MAPL🍁 has been accepted to #EACL2023 @eaclmeeting (main track) 🎉 Shout-out to my wonderful co-authors @prlz77, @Saba_A96, @aidanematzadeh, @yashgoyal_ and @aagrawalAA! See you in Dubrovnik 👋
Quoting @oscmansan's original MAPL announcement tweet (given in full below).
4 replies · 9 reposts · 34 likes
Simon Lacoste-Julien (@SimonLacosteJ) · 3 years ago
SAIL Montreal (SAIT AI Lab Montreal from Samsung) is happy to sponsor #NeurIPS2022 this week! See https://t.co/h372XBuVcv for our 5 NeurIPS papers; check out our booth at Expo Hall #224 and meet our research scientists! Note: we're still hiring!
0 replies · 4 reposts · 11 likes
Oscar Mañas @ ICCV (@oscmansan) · 3 years ago
I’m excited to share our new work MAPL🍁: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting w/ @prlz77, Saba Ahmadi, @aidanematzadeh, @yashgoyal_ and @aagrawalAA Paper: https://t.co/0xaTepXCxL 🧵👇
Link card (arxiv.org): Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. We propose MAPL, a simple and parameter-efficient method...
2 replies · 12 reposts · 41 likes
Benno Krojer (@benno_krojer) · 4 years ago
Can vision & language models retrieve the correct image from a set given its contextual description (e.g. "No bridesmaid visible at all")? We show that models struggle with this kind of contextual reasoning https://t.co/napfqeFGrR https://t.co/TkIUABttG1 #ACL2022
3 replies · 21 reposts · 118 likes
Aishwarya Agrawal (@aagrawalAA) · 4 years ago
I will be recruiting graduate students for Fall'22. Interested in working on vision-language research? Apply here: https://t.co/jDnXRYlSZf. Deadline: Dec 1st, 2021. More details on my research: https://t.co/8sGz4rO2GW.
5 replies · 55 reposts · 243 likes
Ayush Shrivastava (@ayshrv) · 5 years ago
VQA Challenge 2021 is live! Deadline: May 7. Link: https://t.co/NQDpIoN6bF. Winners to be announced at #CVPR21 VQA Workshop (https://t.co/fLbZ6mdsuS). Other challenges @ workshop: TextVQA, TextCaps. (1/2)
Link card (visualqa.org): CVPR 2021
1 reply · 5 reposts · 19 likes