Andrea Banino Profile
Andrea Banino

@AndreaBanino

2K Followers · 1K Following · 5 Media · 255 Statuses

I work at @DeepMind; I’m a machine learning researcher working on artificial general intelligence. I also want to understand how our brain works.

London, England
Joined August 2010
@AndreaBanino
Andrea Banino
1 year
Work done by amazing people at GDM and led by @sahandsharif
0 replies · 0 reposts · 0 likes
@AndreaBanino
Andrea Banino
1 year
8/ 🎓 Key Contributions:
• Fully synthetic high-quality text-image pairs for VLM training
• Efficient image embedding generation
• Controlled study for fair evaluation
• Significant performance gains in scene description, QA, and external knowledge QA
1 reply · 0 reposts · 1 like
@AndreaBanino
Andrea Banino
1 year
7/ πŸ”¬ By pre-training both the text-to-image model and VLM on the same dataset, we isolate the true benefits of synthetic images, ensuring improvements are due to our method, not pre-existing knowledge.
1 reply · 0 reposts · 0 likes
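A minimal sketch of the controlled-study design in 7/. Every name below (pretrain, shared_corpus, the returned dict) is a hypothetical stand-in for illustration, not code from the paper:

```python
# Toy illustration: why pre-training both models on the SAME corpus
# makes the comparison fair. Nothing here trains a real model.

def pretrain(model_name, dataset):
    """Stand-in for pre-training; records which data the model saw."""
    return {"model": model_name, "pretrain_data": tuple(dataset)}

shared_corpus = ["one shared image-text corpus"]

t2i = pretrain("text-to-image", shared_corpus)  # synthetic-image generator
vlm = pretrain("vlm", shared_corpus)            # model being evaluated

# With identical pre-training data, any downstream gain from fine-tuning
# on synthetic pairs is attributable to the synthetic images themselves,
# not to extra knowledge leaked through a differently trained generator.
assert t2i["pretrain_data"] == vlm["pretrain_data"]
print("controlled setup confirmed")
```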
@AndreaBanino
Andrea Banino
1 year
6/ 🌍 Our work opens up promising avenues for developing self-improving multimodal models, addressing data scarcity, high curation costs, and noise in traditional datasets.
1 reply · 0 reposts · 0 likes
@AndreaBanino
Andrea Banino
1 year
5/ ⚑ Synthesizing images in the embedding space is 25% faster than in the pixel space, reducing memory overhead and resource consumption without compromising data quality.
1 reply · 0 reposts · 0 likes
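A toy linear model of the two synthesis routes in 5/; all dimensions and weight matrices are invented for illustration. Because everything here is linear, the two routes agree exactly (real generators are non-linear), so the point is only to show which intermediate the embedding-space route never materializes:

```python
import numpy as np

rng = np.random.default_rng(0)
D_TXT, D_EMB, H = 64, 32, 64                 # toy sizes (assumed)

W_t2p = rng.normal(size=(D_TXT, H * H * 3)) / np.sqrt(D_TXT)      # text -> pixels
W_enc = rng.normal(size=(H * H * 3, D_EMB)) / np.sqrt(H * H * 3)  # image encoder
W_t2e = W_t2p @ W_enc                        # text -> embedding, fused once

caption_vec = rng.normal(size=D_TXT)

# Pixel-space route: materialize a full H*H*3 image, then re-encode it.
pixels = caption_vec @ W_t2p                 # 12,288 intermediate values
emb_pixel_route = pixels @ W_enc

# Embedding-space route: map the caption straight into the image-embedding
# space the VLM consumes, never touching pixels.
emb_direct = caption_vec @ W_t2e             # only 32 values produced

assert np.allclose(emb_pixel_route, emb_direct)
```

Skipping the decode-to-pixels / re-encode round trip is where the reported speedup and memory savings would come from.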
@AndreaBanino
Andrea Banino
1 year
4/ πŸ”„ We found that semantic diversity and balance in captions are crucial for better downstream performance. Our analysis provides new insights into optimizing synthetic data for VLM training.
1 reply · 0 reposts · 0 likes
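The tweet above reports that semantic diversity in captions matters; as one hypothetical way to enforce it (not the authors' procedure), a greedy near-duplicate filter over caption embeddings:

```python
import numpy as np

def diverse_subset(embs, threshold=0.95):
    """Greedily keep a caption only if its cosine similarity to every
    already-kept caption stays below `threshold`."""
    kept_idx, kept_vecs = [], []
    for i, e in enumerate(embs):
        e = e / np.linalg.norm(e)
        if all(float(e @ k) < threshold for k in kept_vecs):
            kept_idx.append(i)
            kept_vecs.append(e)
    return kept_idx

rng = np.random.default_rng(0)
caption_embs = rng.normal(size=(200, 16))    # stand-in caption embeddings
caption_embs[50] = caption_embs[10] + 1e-3   # plant a near-duplicate
kept = diverse_subset(caption_embs)
print(len(kept), 50 in kept)                 # the near-duplicate is dropped
```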
@AndreaBanino
Andrea Banino
1 year
3/ πŸ“Š Extensive experiments show our VLM, finetuned on synthetic data, performs comparably to models trained on human-annotated data, but with significantly less data! This demonstrates the power and efficiency of our synthetic approach.
1 reply · 0 reposts · 0 likes
@AndreaBanino
Andrea Banino
1 year
2/ πŸ–ΌοΈ Our method employs a pretrained text-to-image model to generate image embeddings from LLM-generated captions. This approach expands beyond the original dataset, creating novel compositions that enrich VLM training data.
1 reply · 0 reposts · 0 likes
@AndreaBanino
Andrea Banino
1 year
1/ πŸ” Tackling the bottleneck of high-quality human-labeled datasets for Visual-Language Models (VLMs), we propose a novel approach using Large Language Models (LLMs) and image generation models to create synthetic image-text pairs.
1 reply · 0 reposts · 0 likes
@_akhaliq
AK
2 years
Google announces Synth^2: Boosting Visual-Language Models with Synthetic Captions and Image Embeddings. The creation of high-quality human-labeled image-caption datasets presents a significant bottleneck in the development of Visual-Language Models (VLMs). We propose a novel…
6 replies · 80 reposts · 347 likes
@AndreaBanino
Andrea Banino
3 years
That’s on again! Sign up if you want the opportunity to learn about the most recent breakthroughs in ML! Bonus: this time it’s by the sea 🌊🏄‍♂️
@M2lSchool
M2L school
3 years
We are excited to announce the 3rd edition of the Mediterranean Machine Learning (M2L) summer school in August 2023! This year, the school will take place at @actgreece in Thessaloniki, Greece. Apply at
0 replies · 0 reposts · 1 like
@xbresson
Xavier Bresson
4 years
After 2.5+ years of pandemic, I will attend my first in-person meeting in September! https://t.co/Eejm2IHZyG So glad to meet people again! :) Thanks @AndreaBanino and the organizers for the invitation.
m2lschool.org
Mediterranean Machine Learning Summer School · University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture · Split, Croatia · 8-12 September 2025
0 replies · 3 reposts · 22 likes
@AndreaBanino
Andrea Banino
4 years
Announcing the 1st Dynamic Neural Networks (DyNN) workshop, a hybrid event @icmlconf 2022! πŸ‘‡ We hope DyNN can promote discussions on innovative neural networks that can deal with dynamic computations. Want to learn more?
dynn-icml2022.github.io
Friday, July 22, 2022 · International Conference on Machine Learning · Baltimore, MD
0 replies · 17 reposts · 58 likes
@AndreaBanino
Andrea Banino
4 years
I’m very happy to present this work, which has been accepted as a spotlight paper at ICLR. Training large models in RL will be critical for learning better value functions, and we propose a way to make this learning more data-efficient. Check it out!
@GoogleDeepMind
Google DeepMind
4 years
CoBERL, a simple and scalable method to improve data efficiency in a variety of RL environments (Atari, DmControl, DmLab). Learn more today https://t.co/FCtZn8u2Gx #ICLR2022
0 replies · 1 repost · 2 likes
@allisontam_
Allison Tam
4 years
New paper! Language and large foundation models come together to drive semantically meaningful exploration. This idea helps RL agents learn faster in 3D environments, even when language annotations are unavailable ( https://t.co/ez0PGuwFXC) Read on πŸ”Žβ¬‡οΈ
7 replies · 84 reposts · 445 likes
@M2lSchool
M2L school
4 years
Last few days to sign up for the M2L summer school! Remember, this year it will be free for all students thanks to our amazing sponsors! Apply at https://t.co/ZmwcJkyxG6! #AI #ML #RL #deeplearning #machinelearning #M2Lschool 1/2
1 reply · 8 reposts · 19 likes
@M2lSchool
M2L school
4 years
Two weeks left to apply to the M2L school! Lots of lectures, tutorials, great speakers, and much more! Apply here: https://t.co/7Bgn5XMhoQ All info at https://t.co/pOybcCNyaV
0 replies · 8 reposts · 19 likes