Multilingual Multimodal Workshop @ACL 2022
@MML_WKSP
Followers: 81 · Following: 29 · Media: 0 · Statuses: 23
Multilingual Multimodal Workshop co-located with ACL 2022 https://t.co/VkcjCHUGDc
Joined December 2021
Presenting FIBER (Fusion In-the-Backbone transformER), a novel V&L architecture w/ deep multimodal fusion + a new pre-training strategy that first learns through coarse-grained image-level objectives, and then obtains fine-grained understanding using image-text-box data.
Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone abs: https://t.co/UgdYW9Cf1g project page: https://t.co/Zl2DQ0IDgG github: https://t.co/2EiUBNMtEd
Come for my in-person poster presentation on Multilingual Tabular Inference (XInfoTabS) @FEVERworkshop #Fever2020 @aclmeeting at 2:00 🕑 PM. We developed the first multilingual tabular inference dataset and evaluated multilingual models with several training/testing strategies.
Come join us at MML2 (Workshop on Multilingual Multimodal Learning) @ Liffey Hall 2 & on Zoom! We had a great talk by @davlanade this morning and have many more exciting invited talks lined up! #acl2022nlp ☘️
Join us today at 9:20am (Irish time) for @MML_WKSP, the first Multilingual Multimodal Workshop at #acl2022nlp! We have a fantastic line-up of speakers:
Check out our #NAACL2022 paper "Lifting the Curse of Multilinguality by Pre-training Modular Transformers" where we propose X-Mod, a modular MLM. 📜 https://t.co/97TxrQrEz4 ⌨️ https://t.co/mroChWDr3p w/ @NamanGoyal21 @VictoriaLinML @xl_nlp JamesCross @riedelcastro @artetxem
Congrats to the recipients of the @Wikimedia Research Award of the Year!! 🎉 "WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning", Srinivasan et al. 🎉 "Assessing the quality of sources in @Wikidata across languages: a hybrid approach", Amaral et al.
We have extended the submission deadline for the Multilingual Multimodal Workshop to March 6, 11:59 pm UTC! A quick reminder that #MML2022 also accepts papers from the current round of ARR reviews, submitted to our workshop through @ARRPreprints!
We’re excited that the 1st Workshop on Multilingual Multimodal Learning #MML2022 will be co-located with ACL @aclmeeting. Call for papers (submission deadline: February 28, 2022): https://t.co/WVZlEIGYEC
January reviews are out! A quick reminder that #MML2022 also accepts papers from this round of ARR reviews, submitted to our workshop through @ARRPreprints! We are excited to read your multimodal and multilingual work! https://t.co/WVZlEIGYEC
The modified January timeline for ARR is posted: https://t.co/Iiceh8ykwa. Most people can expect their ARR (meta)reviews in a few hours. #NLProc
#MML2022 is hosting a shared task on multilingual visually grounded reasoning! The task will be centered around MaRVL, a multicultural and multilingual V&L dataset which extends the NLVR2 task. Submissions due: April 30, 2022 🌐 https://t.co/ejUg2HHnSu
Is multimodal technology mature enough to be used around the world? We introduce MaRVL, a multilingual and multicultural dataset for vision-and-language reasoning! @hardy_qr @PontiEdoardo @sivareddyg @nigelhcollier @delliott 🗣️ #EMNLP2021 🌐 https://t.co/j1QA2Yk79Q
Check out our new *multilingual* AND *multimodal* benchmark covering 4 tasks in 20 languages! We perform extensive experimentation and (among other things) find that few-shot results from text-only multilingual tasks don’t necessarily carry over to the multimodal domain 🧐 👇
Voilà IGLUE 🧊 The Image-Grounded Language Understanding Evaluation benchmark 📈 IGLUE brings together 4 vision-and-language tasks across 20 languages. And, brr, is it cold outside the Anglosphere 🥶 📄 https://t.co/XXk3y8fLzH 👩‍💻 https://t.co/axXixaTe8G 🌐 https://t.co/TM26acRH9x
This paper required a Herculean effort, but it was worth it! The aspect that I like the most is that it enables transfer learning along 3 different axes: languages, tasks, and modalities
Our paper on mapping LMs to grounded conceptual spaces was accepted to #ICLR2022! We study how well a conceptual space learned from text (e.g., by large LMs) can be mapped onto a grounded conceptual space (e.g., a world of colours) with only a small number of in-concept samples.
The deadline for submitting to the Multilingual Multimodal Workshop is February 28. Submissions will be handled through @ReviewAcl. We will also have four brilliant speakers: @davlanade, Lisa Anne Hendricks, Lei Ji, and @PreethiJyothi1. Find out more: mml-workshop.github.io (co-located with ACL 2022)
Thrilled to announce that the first workshop on Multilingual and Multimodal Learning (#MML2022) will be held at #ACL2022. More info coming soon! W/ @kaiwei_chang @delliott @gspandana @ashkamath20 @LiLiunian @hardy_qr @PfeiffJo @PontiEdoardo @krishna2 @licwu @yinfeiy @Wade_Yin9712
This year’s organizers are: @ebugliarello @kaiwei_chang @delliott @gspandana @ashkamath20 @LiLiunian @hardy_qr @PfeiffJo @PontiEdoardo @krishna2 @licwu @yinfeiy @Wade_Yin9712
• Datasets for multilingual multimodal learning
• Modeling multilingual multimodal data
• Approaches to improving the inclusion of multilingual multimodal models
• Evaluation and analysis for multilingual multimodal learning
• Future challenges of multilingual multimodal research
This workshop encourages and promotes research efforts towards more inclusive multimodal technologies and tools to assess them. We invite papers on topics of interest that include (but are not limited to):