valeo.ai

@valeoai

Followers: 1K · Following: 511 · Media: 167 · Statuses: 483

We are a research team on artificial intelligence for automotive applications working toward assisted and autonomous driving.

Paris, France
Joined December 2019
@valeoai
valeo.ai
8 months
🚗 Ever wondered if an AI model could learn to drive just by watching YouTube? 🎥👀 We trained a 1.2B parameter model on 1,800+ hours of raw driving videos. No labels. No maps. Just pure observation. And it works! 🤯 🧵👇 [1/10]
6 replies · 50 reposts · 179 likes
@valeoai
valeo.ai
3 hours
@EloiZablocki @AlexandreBoulch @Mickael_Chen @quobbe @sophia_sirko @SpyrosGidaris @AVobecky @abursuc @thomenicolas1 @yasserbenigmim @tuan_hung_vu @MrzSalehi @shawshank_v @ioanasimioni @egavves @cgmsnoek @y_m_asano Analyzing Fine-tuning Representation Shift for Multimodal LLMs Steering Alignment tl;dr: a new method for understanding and controlling how MLLMs adapt during fine-tuning by: @PKhayatan, @MustafaShukor1, @JayneelPar95709, @quobbe 📄: https://t.co/ntRauYMskx
0 replies · 0 reposts · 3 likes
@valeoai
valeo.ai
3 hours
@shawshank_v
Shashank
4 months
New paper out - accepted at @ICCVConference We introduce MoSiC, a self-supervised learning framework that learns temporally consistent representations from video using motion cues. Key idea: leverage long-range point tracks to enforce dense feature coherence across time.🧵
1 reply · 0 reposts · 1 like
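The key idea named in the MoSiC post (enforcing dense feature coherence across time along long-range point tracks) can be sketched with a toy consistency loss. Everything below is a hypothetical stand-in for illustration only: random arrays replace real video features and tracks, and the mean-feature penalty is a simple surrogate, not the paper's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical shapes): dense feature maps for T frames and
# long-range point tracks giving each point's (row, col) in every frame.
T, H, W, D, N = 5, 8, 8, 16, 10
feats = rng.normal(size=(T, H, W, D))        # per-frame dense features
tracks = rng.integers(0, H, size=(T, N, 2))  # (frame, point, (row, col))

# Sample each point's feature along its track and penalize deviation from
# the track's mean feature -- a simple surrogate for enforcing temporally
# consistent representations along motion trajectories.
track_feats = feats[np.arange(T)[:, None], tracks[..., 0], tracks[..., 1]]  # (T, N, D)
mean_feat = track_feats.mean(axis=0, keepdims=True)                         # (1, N, D)
consistency_loss = np.mean((track_feats - mean_feat) ** 2)
print(track_feats.shape)  # (5, 10, 16)
```

In a real pipeline the loss would be backpropagated into the feature extractor so that all frames a point passes through agree on its representation.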
@valeoai
valeo.ai
3 hours
arxiv.org
In this paper, we challenge the conventional practice in Open-Vocabulary Semantic Segmentation (OVSS) of using averaged class-wise text embeddings, which are typically obtained by encoding each...
@yasserbenigmim
Yasser Benigmim
4 months
🎉 Excited to share that our paper "FLOSS: Free Lunch in Open-vocabulary Semantic Segmentation" got accepted at #ICCV2025! A collaborative effort with : Mohammad Fahes @tuan_hung_vu @abursuc and Raoul de Charette.
1 reply · 0 reposts · 2 likes
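For context, the "averaged class-wise text embeddings" convention that the FLOSS paper challenges can be sketched as follows. The toy deterministic text encoder, the prompt templates, and the random feature map below are all hypothetical stand-ins (a real OVSS pipeline would use e.g. a CLIP text encoder); this only illustrates the average-then-cosine-match convention, not FLOSS itself.

```python
import numpy as np
import zlib

DIM = 8

def encode_text(prompt: str) -> np.ndarray:
    """Toy deterministic 'text encoder': prompt -> unit vector."""
    seed = zlib.crc32(prompt.encode())
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

templates = ["a photo of a {}", "a blurry photo of a {}", "a painting of a {}"]
classes = ["car", "road", "pedestrian"]

# Conventional practice: average each class's per-template embeddings into a
# single prototype, then renormalize.
prototypes = []
for cls in classes:
    embs = np.stack([encode_text(t.format(cls)) for t in templates])
    mean = embs.mean(axis=0)
    prototypes.append(mean / np.linalg.norm(mean))
prototypes = np.stack(prototypes)           # (num_classes, DIM)

# Label dense pixel features by cosine similarity to the class prototypes.
rng = np.random.default_rng(0)
pixel_feats = rng.normal(size=(4, 4, DIM))  # toy (H, W, DIM) feature map
pixel_feats /= np.linalg.norm(pixel_feats, axis=-1, keepdims=True)
seg_map = (pixel_feats @ prototypes.T).argmax(axis=-1)
print(seg_map.shape)                        # (4, 4)
```

The abstract's point is that collapsing all templates into one averaged prototype is a design choice worth revisiting, not a necessity.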
@valeoai
valeo.ai
3 hours
@sophia_sirko
Sophia Sirko-Galouchenko
4 months
1/n 🚀New paper out - accepted at @ICCVConference! Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
1 reply · 0 reposts · 2 likes
@valeoai
valeo.ai
3 hours
GaussRender: Learning 3D Occupancy with Gaussian Rendering tl;dr: a module for 3D occupancy learning that enforces 2D-3D consistency through differentiable Gaussian rendering by: L. Chambon, @EloiZablocki, @AlexandreBoulch, @Mickael_Chen, @quobbe 📄: https://t.co/qWfiWYltPz
1 reply · 0 reposts · 3 likes
@valeoai
valeo.ai
3 hours
Our recent research will be presented at #ICCV2025 @ICCVConference! We’ll present 5 papers about: 💡 self-supervised & representation learning 🌍 3D occupancy & multi-sensor perception 🧩 open-vocabulary segmentation 🧠 multimodal LLMs & explainability https://t.co/Tg0Vx3oS94
1 reply · 1 repost · 6 likes
@valeoai
valeo.ai
10 days
The PhD graduation season in the team goes on! Today Corentin Sautier is defending his PhD on "Learning Actionable LiDAR Representations without Annotations". Good luck! 🚀
@mtmthh
tetianka
10 days
Another great event for @valeoai: a PhD defense of Corentin Sautier. His thesis «Learning Actionable LiDAR Representations w/o Annotations» covers the papers BEVContrast (learning self-sup LiDAR features), SLidR, ScaLR (distillation), UNIT and Alpine (solving tasks w/o labels).
2 replies · 2 reposts · 15 likes
@valeoai
valeo.ai
11 days
“Has anyone heard about DUSt3R?” All hands and hearts up in the room. Honored to welcome @kgcs96 today to speak about the amazing work @naverlabseurope towards 3D Foundation Models
0 replies · 0 reposts · 7 likes
@valeoai
valeo.ai
11 days
It’s PhD graduation season in the team! Today, @Bjoern_Michele is defending his PhD on "Domain Adaptation for 3D Data" Best of luck! 🚀
1 reply · 5 reposts · 20 likes
@valeoai
valeo.ai
15 days
Congratulations to our lab colleagues who have been named Outstanding Reviewers at #ICCV2025 👏 Andrei Bursuc @abursuc Anh-Quan Cao @AnhQuanCAO Renaud Marlet @RenaudMarlet Eloi Zablocki @EloiZablocki @ICCVConference 🔗
0 replies · 3 reposts · 12 likes
@valeoai
valeo.ai
23 days
@fbartoc @EliasRamzi PPT: Pretraining with Pseudo-Labeled Trajectories for Motion Forecasting 📄 Paper: https://t.co/kME3MXDYU9 by Y. Xu, @yuanyinnn , @EloiZablocki, @tuan_hung_vu @AlexandreBoulch, @quobbe
0 replies · 0 reposts · 5 likes
@valeoai
valeo.ai
23 days
🇰🇷 CoRL 2025 is just around the corner in Seoul, Korea! We're excited to present our latest research and connect with the community. #CoRL2025
1 reply · 2 reposts · 8 likes
@elsa_lighthouse
ELSA - European Lighthouse on Secure and Safe AI
2 months
Do you want to learn about #certifiable #AI, technical #robustness and #safety, #privacy and #infrastructures, or #human #agency and #oversight? Our "Courses and Tutorials" page has got you covered! It covers those topics – and many more. https://t.co/34SqHnsloI #ELSAAI
0 replies · 2 reposts · 5 likes
@elsa_lighthouse
ELSA - European Lighthouse on Secure and Safe AI
3 months
What is happening within the ELSA Use Cases? 🤔 Let us show you! 💡 For our new series, we interview leading researchers and members of the ELSA #Use #Cases for a #research #progress #deep #dive. Starting with Tuan-Hung from @valeoai. https://t.co/n95ekRuzFL
0 replies · 3 reposts · 5 likes
@y_m_asano
Yuki
3 months
Today we release Franca, a new vision Foundation Model that matches and sometimes outperforms DINOv2. The data, the training code and the model weights (with intermediate checkpoints) are open-source, allowing everyone to build on this. Methodologically, we introduce two new
@shawshank_v
Shashank
3 months
Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, DINOv2 on various benchmarks setting a new standard for open-source research🧵
3 replies · 24 reposts · 174 likes
@valeoai
valeo.ai
3 months
We're releasing Franca ("free" one): a high-performing open-source vision foundation model. Franca is the outcome of a close collaboration between @valeoai (in France) and @FunAILab (in Franconia). Check out the thread for more info on main ingredients and results 👇
@shawshank_v
Shashank
3 months
Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, DINOv2 on various benchmarks setting a new standard for open-source research🧵
0 replies · 2 reposts · 26 likes
@abursuc
Andrei Bursuc
3 months
We're releasing Franca: a new fully open-sourced (data, code, weights, logs) vision foundation model that finally matches & sometimes outperforms DINOv2, SigLIPv2 & CLIP on ViT-G. This is the fruit of a fun collaboration btwn @valeoai & @FunAILab spearheaded by @shawshank_v 👇
@shawshank_v
Shashank
3 months
Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, DINOv2 on various benchmarks setting a new standard for open-source research🧵
2 replies · 13 reposts · 104 likes
@abursuc
Andrei Bursuc
3 months
Video recordings from our workshop on Embodied Intelligence and tutorial on Robotics 101 @CVPR are now up, just in time to catch up with things over the summer. Enjoy! #CVPR2025
@OpenDriveLab
OpenDriveLab
3 months
📹Our #CVPR2025 workshop and tutorial recordings are now online! Big thanks to our incredible speakers! Watch all the sessions here 🔗 Workshop: https://t.co/xLbnLvOVYM 🔗 Tutorial: https://t.co/17QDuODLz4 🏟️But we’re not done yet - our workshop continues at #ICCV2025! And the
0 replies · 7 reposts · 22 likes
@abursuc
Andrei Bursuc
3 months
So, yesterday I attended remotely the workshop on Foundation Models around topics of @elias_project, @elsa_lighthouse, ELLIOT projects, kindly organized by @kompats  & team in Thessaloniki. It had awesome speakers, great talks, nice European vibes: we want more! Here's a recap 🧵
1 reply · 3 reposts · 11 likes