Luke Rowe

@Luke22R

Followers 145 · Following 878 · Media 9 · Statuses 76

Research Intern @Waymo. PhD Student at @Mila_Quebec, focusing on machine learning and autonomous driving.

Montréal, Québec
Joined February 2022
@Luke22R
Luke Rowe
2 months
🚀 Our method, Poutine, was the best-performing entry in the 2025 Waymo Vision-based End-to-End Driving Challenge at #CVPR2025! Our 3B-parameter VLM Poutine scored 7.99 RFS on the official test set, comfortably ahead of every other entry (see figure).
[Tweet image: challenge leaderboard figure]
3
10
20
@Luke22R
Luke Rowe
2 months
RT @apsarathchandar: @jxmnop Origin of most of these innovations is Canada 🇨🇦 though 😜.
0
1
0
@Luke22R
Luke Rowe
2 months
RT @creus_roger: 🚨 Excited to share our new work: "Stable Gradients for Stable Learning at Scale in Deep Reinforcement Learning"! 📈. We pro….
0
33
0
@Luke22R
Luke Rowe
2 months
RT @GlenBerseth: How can we make behavioural cloning (BC) achieve better combinatorial generalization on out-of-distribution goals? We pro….
0
15
0
@Luke22R
Luke Rowe
2 months
RT @OWW: Poutine: Vision-Language-Trajectory Pre-Training and Reinforcement Learning Post-Training Enable Robust End-to-End Autonomous Driv….
arxiv.org
We present Poutine, a 3B-parameter vision-language model (VLM) tailored for end-to-end autonomous driving in long-tail driving scenarios. Poutine is trained in two stages. To obtain strong base...
0
2
0
@Luke22R
Luke Rowe
2 months
RT @k_neklyudov: Why do we keep sampling from the same distribution the model was trained on? We rethink this old paradigm by introducing….
0
26
0
@Luke22R
Luke Rowe
2 months
RT @natolambert: One of the most striking, non-text AI plots I've seen since ChatGPT launched. Scaling keeps working, this time for Waymo's….
0
22
0
@Luke22R
Luke Rowe
2 months
This was joint work with my amazing colleagues at @Mila_Quebec: Rodrigue de Schaetzen, @rogg1111, @chrisjpal, @duckietown_coo. Check out our report here:
0
0
0
@Luke22R
Luke Rowe
2 months
Why did Poutine work?
• Plug-and-play VLM – built on Qwen 2.5 VL 3B; no custom perception backbone or action heads needed.
• Simple and effective training recipe – self-supervised vision-language-trajectory pre-training followed by lightweight RL preference fine-tuning.
1
0
0
@Luke22R
Luke Rowe
2 months
This challenge pushed the limits of vision-based end-to-end planning in rare, long-tail scenarios. We show that VLMs can be repurposed into effective planners in the long tail.
1
0
0
@Luke22R
Luke Rowe
2 months
A residency clause made Québec teams ineligible for prizes, so we couldn't collect the first-place prize, but the challenge organizers awarded us a Special Mention instead. Thanks to the @Waymo challenge organizers for the shout-out!
1
0
0
@Luke22R
Luke Rowe
2 months
RT @emilianopp_: Excited that our paper "Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization" was a….
0
22
0
@Luke22R
Luke Rowe
2 months
RT @majdi_has: (1/n)🚨You can train a model solving DFT for any geometry almost without training data!🚨. Introducing Self-Refining Training….
0
39
0
@Luke22R
Luke Rowe
3 months
RT @nandahkrishna: New preprint! 🧠🤖 How do we build neural decoders that are: ⚡️ fast enough for real-time use 🎯 accurate across diverse ta….
0
25
0
@Luke22R
Luke Rowe
3 months
RT @antho_gosselin: 🚗💥Introducing Ctrl-Crash: controllable video generation for autonomous driving! SOTA models struggle to generate physic….
0
14
0
@Luke22R
Luke Rowe
3 months
RT @benjamintherien: Is AdamW the best inner optimizer for DiLoCo? Does the inner optimizer affect the compressibility of the DiLoCo delta?….
0
26
0
@Luke22R
Luke Rowe
3 months
RT @ShashwatGoel7: Confused about recent LLM RL results where models improve without any ground-truth signal? We were too. Until we looked….
0
126
0
@Luke22R
Luke Rowe
3 months
RT @joanrod_ai: Thanks @_akhaliq for sharing our work! Excited to present our next generation of SVG models, now using Reinforcement Learni….
0
41
0
@Luke22R
Luke Rowe
3 months
RT @siddarthv66: Is there a universal strategy to turn any generative model—GANs, VAEs, diffusion models, or flows—into a conditional sampl….
0
25
0
@Luke22R
Luke Rowe
4 months
RT @xichen_pan: We find training unified multimodal understanding and generation models is so easy, you do not need to tune MLLMs at all. M….
0
67
0