Prof
@Stanford
, Distinguished Research Scientist and AV research lead
@nvidia
. PhD from
@MITAeroAstro
. Robotics, autonomous systems, AI. Opinions are my own.
The IEEE Control Systems Magazine is featuring our tutorial on convex optimization for trajectory generation this month -- joint work with Behçet Açıkmeşe
@UWaeroastro
. We discuss many applications, including rocket landing. Pre-print here (with code!):
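For a minimal, self-contained taste of the topic (an illustrative sketch only, not the tutorial's algorithms): minimum-energy transfer for a 1-D double integrator. With only a terminal-state constraint, the convex program min Σ u_k² has a closed-form solution via the controllability Gramian, so no solver is needed.

```python
# Toy sketch of convex trajectory generation: minimum-energy rest-to-rest
# transfer for a 1-D double integrator, solved in closed form.

def min_energy_controls(x0, v0, xf, vf, N, dt):
    # Dynamics: x_{k+1} = A x_k + B u_k with A = [[1, dt], [0, 1]],
    # B = [dt^2/2, dt]. Column k of R = [A^{N-1}B ... B] is A^{N-1-k} B,
    # and A^p = [[1, p*dt], [0, 1]].
    cols = [(dt * dt / 2 + (N - 1 - k) * dt * dt, dt) for k in range(N)]
    # Residual the controls must produce: r = [xf, vf] - A^N [x0, v0]
    r = (xf - (x0 + N * dt * v0), vf - v0)
    # Minimum-norm solution u = R^T (R R^T)^{-1} r (2x2 linear solve).
    g11 = sum(c[0] * c[0] for c in cols)
    g12 = sum(c[0] * c[1] for c in cols)
    g22 = sum(c[1] * c[1] for c in cols)
    det = g11 * g22 - g12 * g12
    l1 = (g22 * r[0] - g12 * r[1]) / det
    l2 = (-g12 * r[0] + g11 * r[1]) / det
    return [c[0] * l1 + c[1] * l2 for c in cols]

def simulate(x0, v0, us, dt):
    x, v = x0, v0
    for u in us:
        x, v = x + dt * v + 0.5 * dt * dt * u, v + dt * u
    return x, v

# Rest-to-rest transfer from x = 0 to x = 1 in 10 steps of 0.1 s.
us = min_energy_controls(0.0, 0.0, 1.0, 0.0, N=10, dt=0.1)
x_end, v_end = simulate(0.0, 0.0, us, 0.1)
```

The tutorial's methods handle the far harder cases (state/control constraints, nonconvex dynamics via successive convexification); this is just the degenerate case where the optimum is analytic.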
We are hiring! The AV Research Group at
@nvidia
has several open positions for full-time research scientist roles! AV foundation models, AI safety, and next-gen AV architectures are just some of our research directions. Apply here:
We (the Autonomous Vehicle group at NVIDIA Research) have resumed hiring full-time research scientists. Submit here if you are interested!
Please contact
@drmapavone
(our respected group leader! 😀) for more details.
My research group
@nvidia
is now accepting applications for internships: Please consider applying if you are interested in pushing the boundaries of the state of the art in vehicle autonomy!
Our group, led by
@drmapavone
is looking for PhD research interns for next year. If you’re excited about 3D deep learning, motion planning, and control for robotics/autonomous driving, please consider applying to our group!
We are happy to announce the first Vision and Language for Autonomous Driving and Robotics (VLADR) workshop at
@CVPR
2024!
Call for contributions and more details 👇🏻
See you in Seattle! 😃
Introducing SMERF: an effective method to leverage Standard Definition (SD) maps for real-time lane-topology understanding. Great work led by
@nvidia
fellowship awardee
@katielulula
More details here:
Self driving relies on costly HD maps for real-time lane-topology understanding; can we utilize Standard Definition (SD) maps that are more affordable? Introducing SMERF (SD Map Encoder Representations from transFormers)!
Website: [1/6]
Applications are now being accepted for the
@NVIDIA
Graduate Fellowship Program for the 2023-2024 academic year! I hope to see many applicants interested in pushing the boundaries of robot autonomy:
We have open sourced ! It's a new, unified interface to many trajectory forecasting datasets, greatly simplifying the process of training and evaluating a forecasting model on multiple motion datasets!
@iamborisi
@NVIDIADRIVE
Interested in applying modern numerical optimization to PyTorch-learned models? Check out Learning for CasADi, a new framework that seamlessly integrates PyTorch-learned models with CasADi. Great work led by
@TimSalzmann
See below for an example of traj opt with NeRF models.
Unlock the power of data-driven models in numerical optimization and optimal control with L4CasADi! 🚀
Check out our latest example showcasing how L4CasADi can optimize a collision-free trajectory through a learned Neural Radiance Field (NeRF).
#OptimalControl
#L4CasADi
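To make the idea concrete, here is a toy version of "optimize a trajectory through a learned field". NOTE: this is not the L4CasADi API -- `density` is a hypothetical stand-in for a trained NeRF's density query, and plain finite-difference gradient descent stands in for CasADi's differentiation and solvers.

```python
# Toy illustration: push trajectory waypoints out of high-density regions of
# a (stand-in) learned field, while keeping the path smooth.
import math

def density(x, y):
    # hypothetical "learned" obstacle field: a Gaussian bump near (0.5, 0.05)
    return math.exp(-((x - 0.5) ** 2 + (y - 0.05) ** 2) / 0.02)

def optimize(waypoints, steps=200, lr=0.05, eps=1e-4, w_smooth=1.0):
    pts = [list(p) for p in waypoints]
    for _ in range(steps):
        for i in range(1, len(pts) - 1):  # endpoints stay fixed
            x, y = pts[i]
            # finite-difference gradient of the collision cost
            gx = (density(x + eps, y) - density(x - eps, y)) / (2 * eps)
            gy = (density(x, y + eps) - density(x, y - eps)) / (2 * eps)
            # smoothness term: pull each point toward its neighbors' midpoint
            mx = (pts[i - 1][0] + pts[i + 1][0]) / 2
            my = (pts[i - 1][1] + pts[i + 1][1]) / 2
            pts[i][0] -= lr * (gx + w_smooth * (x - mx))
            pts[i][1] -= lr * (gy + w_smooth * (y - my))
    return pts

# a straight line from (0, 0) to (1, 0) initially grazes the bump
opt = optimize([[i / 10, 0.0] for i in range(11)])
```

L4CasADi's contribution is doing this properly: the PyTorch model becomes a first-class CasADi expression, so real NLP solvers get exact derivatives instead of finite differences.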
We are excited to announce the release of Traffic Behavior Simulation (TBSIM), developed by the NVIDIA Autonomous Vehicle research group, which is our software infrastructure for closed-loop simulation with data-driven traffic agents. (1/7)
Interested in making your AI-based robot resilient to out-of-distribution events? Join us at our workshop
@corl_conf
to discuss key open problems and promising research avenues: We have a fantastic lineup of speakers! Papers due 10/6/23.
📢 Announcing the first
@corl_conf
workshop on Out-of-Distribution Generalization in Robotics: Towards Reliable Learning-based Autonomy!
#CoRL2023
🎯 How can we build reliable robotic autonomy for the real world?
📅 Short papers due 10/6/23
🌐
🧵(1/4)
How do you use a quantum computer when you may not know how well it works? We present a hybrid quantum-classical optimization algorithm that has limited requirements on the quality of solutions returned from the quantum computer: (1/3)
#ConformalPrediction
holds a lot of promise to provide calibrated uncertainty estimation across an autonomy stack. We explored this idea in the context of pose estimation (
#CVPR2023
highlight paper). We are now targeting several other tasks.
@NVIDIAAI
My lab will participate in
#ShutDownSTEM
tomorrow. More broadly, my lab
@StanfordASL
and I are working on a number of activities to make the STEM field more inclusive, and to foster a more just and prosperous society through our research and engineering expertise. Stay tuned.
Excited by our new tutorial on convex optimization techniques for trajectory optimization: Among other things, we provide source code for several examples. A great collaboration with Behçet Açikmeşe and his group
@UWaeroastro
With
@thomas__lew
#Robotics
Releasing Agent-Driver, led by our students
@PointsCoder
@JunjieYe9
and w/ James Qian and
@drmapavone
. Agent-Driver is a complete paradigm shift from the common perception-prediction-planning pipeline. We propose an agent-based autonomous driving approach that capitalizes on LLMs
Excited by our latest work on neural representations: EmerNeRF. EmerNeRF learns scene decomposition and flow estimation, all from self-supervision! And lifts foundation model features to 4D space-time, enabling semantic tasks. Soon applications to e2e driving, sim, and more.
Introducing EmerNeRF, our answer to the challenging dynamic NeRF in-the-wild problem. EmerNeRF is the best project I've ever been involved in, led by
@JiaweiYang118
(stay tuned for his even more impressive works), in collaboration with
@NVIDIAAI
colleagues
@iamborisi
Being involved in the design of the Mars 2020 mission while a roboticist at
@NASAJPL
was one of the coolest experiences of my life. Here is a short summary of the mission, where I discuss, in particular, the Mars Helicopter
@Stanford
A particularly exciting application of LLMs is to use them as _monitors_ to reason about unusual situations that defy the primary decision-making pipeline of a robot. Check out our paper for our recent results on this topic!
@StanfordAILab
🔍 How can we detect system-level reasoning failures to improve the robustness of robotic systems in safety-critical settings?
We use LLMs as intelligent runtime monitors to reason over and identify potentially problematic elements in a scene! 🧠
Watch
#DRIVELabs
to learn about EmerNeRF, a method for reconstructing dynamic driving scenarios. EmerNeRF builds upon
#NeRF
(Neural Radiance Field) and extends it with self-supervised learning.
Excited that our project to design robots for the exploration of Martian caves has been selected by
@NASA
for initial study
@NASAIAC
@StanfordASL
. This is a collaboration with
@bdmlstanford
More here:
EmerNeRF will be a cornerstone, among other things, for our work on ultrarealistic neural simulation - more on this soon! Great work led by
@yuewang314
We are launching a new
#virtual
#robotics
seminar “Robotics Today--A Series of Technical Talks”. Andrew Davison
@AjdDavison
will give the first seminar “From SLAM to Spatial AI” on Friday May 15th 1PM EST.
Watch and learn more about the seminars here:
Excited by our new project aimed at demonstrating a scalable 24/7 carbon-free mobility solution with the Stanford electrified bus fleet: This is the culmination of
@StanfordASL
's work on EV mobility, initiated by
@FRossi314
a few years ago!
@StanfordEng
With Real-time Neural MPC you can efficiently integrate large, complex neural network architectures as dynamics models in an MPC pipeline. Compared to prior implementations, we can leverage neural networks with 4000x larger parametric capacity in a 50 Hz real-time framework.
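For intuition, here is the general shape of MPC with a learned dynamics model. This is only a toy: Real-time Neural MPC embeds the network directly in the optimizer, whereas this sketch uses random shooting, and `learned_step` is a hypothetical stand-in for a trained network.

```python
# Sketch of receding-horizon control with a (stand-in) learned dynamics model.
import random

def learned_step(x, v, u, dt=0.1):
    # stand-in "learned" model: 1-D double integrator with mild drag
    return x + dt * v, v + dt * (u - 0.1 * v)

def mpc_action(x, v, target, horizon=10, samples=256, rng=random.Random(0)):
    best_u, best_cost = 0.0, float("inf")
    for _ in range(samples):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        cx, cv, cost = x, v, 0.0
        for u in seq:
            cx, cv = learned_step(cx, cv, u)
            cost += (cx - target) ** 2 + 0.01 * u ** 2
        cost += cv ** 2  # prefer arriving with low speed
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]  # receding horizon: keep 1st u
    return best_u

# closed loop: drive the state toward target = 1.0
x, v = 0.0, 0.0
for _ in range(80):
    u = mpc_action(x, v, 1.0)
    x, v = learned_step(x, v, u)
```

The point of Real-time Neural MPC is that the network's derivatives are available to a real solver, so far larger models fit in the same real-time budget than sampling-based schemes like this allow.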
We welcome and encourage participants to submit their work to the 1st ICCV workshop "Neural Fields for Autonomous Driving and Robotics". Please visit our website at for detailed submission guidelines.
#ICCV
#Robotics
#AutonomousDriving
#NeRF
Coping with out-of-distribution data is one of the grand challenges in robot autonomy. A position paper from my lab
@StanfordASL
provides an overview of this emerging research area and a roadmap for future work. Great work by
@RohanSinhaSU
in coordinating this effort!
Out-of-distribution inputs derail predictions of ML models. How can we cope with OOD data in robotics? How do we even define what makes data OOD?
We provide a perspective paper arguing a system-level view of OOD data in robotics! 🧵 (1/5)
Now on Arxiv:
Thanks, Francis. Indeed, jointly with
@jmes_harrison
and the rest of the
@StanfordASL
we are writing a book on optimal and learning-based control, and one of the goals is to uncover connections among different techniques. Stay tuned!
Came across this tweet today and that's so true 😅 But the good thing is that during every iteration I can revisit
@MarcoPavoneSU
's AA203 materials to better understand how optimisation and control interact with RL.
How to use AI models in safety-critical autonomous systems with high confidence? Join the
#GTC23
session below to learn about the most recent results from the
@nvidia
's Autonomous Vehicle Research group:
@NVIDIADRIVE
@NVIDIAAI
Excited to share Text2Motion - a framework that leverages large language models to solve sequential manipulation tasks requiring complex, long-horizon reasoning. A great collaboration with
@leto__jean
's lab.
@StanfordEng
Large language models (LLMs) can readily convert language instructions into high-level plans.
However, should we trust robots to execute these plans without verifying that they actually satisfy the instructions and are feasible in the real world?
Excited that our project on climbing robots for Mars exploration has been selected by
@NASA_Technology
#NIAC
. This is a collaboration with
@bdmlstanford
More information here:
✅an inflatable bird-like drone to study Venus' atmosphere
✅spacecraft with enhanced radiation protection for its crew
✅a deployable rotating habitat with artificial gravity
Sound like sci-fi?
We selected these futuristic space tech concepts for study:
MultiModal
#LLMs
meet
#AutonomousVehicles
?
We're thrilled to share our latest work: 🐬Dolphins, a vision-language model aiming to provide human-like capabilities such as fast adaptation and reflection for autonomous driving. 🚗
How to monitor the "good" behavior of deep neural networks (DNNs) in modern autonomy stacks? Check out SCOD: a model-agnostic approach to adding an efficient confidence monitor to pre-trained DNNs. With
@apoorva__sharma
&
@NavidAzizan
@UAI2021
Very proud of
@StanfordEng
Aero/Astro alumnus Marcos Berríos, who has recently been selected as
@NASA
Astronaut Candidate. I had the pleasure to be his instructor and serve on his PhD thesis committee. Godspeed!
Excited that my proposal to establish an aviation autonomy center has been selected for award as part of the
@NASAaero
's ULI program! Together with several partners from academia and industry, we will work on tools for safe AI for future aviation systems:
Interested in robotics education?
@DrCABerry
will speak tomorrow on "Robotics Education to Robotics Research" at Robotics Today
@RoboticsSeminar
(12pm PST)! A unique opportunity to reflect on the most effective ways to teach robotics.
The
@Stanford
Aero/Astro Department is hiring! The Department invites applications for a tenure-track faculty position at the Assistant or untenured Associate Professor level. More info here. Deadline: December 4, 2023.
@StanfordEng
I am thrilled to announce that I have been awarded the NVIDIA Fellowship for 2024-2025! I would like to express my deepest gratitude to
@nvidia
for providing me with such a remarkable opportunity!
(1/3) Control Systems Magazine Outstanding Paper Award for the paper, "Convex Optimization for Trajectory Generation: A Tutorial on Generating Dynamically Feasible Trajectories Reliably and Efficiently" w/
@thomas__lew
, R. Bonalli, and collaborators at UW.
Introducing categorical traffic transformers: a scene-centric traffic model that can be seamlessly integrated with large language models to reason about highly complex traffic situations.
@Yuxiao_Chen_
@SanderTonkens
A key step in our AV foundation model strategy.
@NVIDIADRIVE
Introducing Categorical Traffic Transformer (CTT) w/
@SanderTonkens
@drmapavone
, which is our effort towards a traffic model that can be easily integrated with LLMs.
Paper link:
Code link:
Before that, I will be a Research Scientist in the Autonomous Vehicles Research Group
@NVIDIAAI
led by
@MarcoPavoneSU
starting July 2022. Deep thanks to all the people who supported me, most importantly, my amazing advisor
@lucacarlone1
!
Interested in rigorous uncertainty quantification for neural networks? Check out our
#NeurIPS23
paper on combining conformal prediction and PAC-Bayes theory to obtain statistical guarantees on coverage and efficiency!
Paper:
Code:
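The generic split-conformal recipe the paper builds on can be written in a dozen lines (this is the textbook procedure, not the paper's PAC-Bayes machinery): calibrate a score threshold so that prediction sets cover the truth with probability ≥ 1 − α, distribution-free.

```python
# Split conformal prediction on a toy regression task.
import math, random

def conformal_threshold(scores, alpha):
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conformal quantile rank
    return sorted(scores)[min(k, n) - 1]

rng = random.Random(0)
def sample():
    # toy task: y = 2x + Uniform(-0.5, 0.5) noise
    x = rng.random()
    return x, 2 * x + rng.uniform(-0.5, 0.5)

# nonconformity score = |residual| of the (assumed) predictor f(x) = 2x
cal = [abs(y - 2 * x) for x, y in (sample() for _ in range(1000))]
q = conformal_threshold(cal, alpha=0.1)  # target 90% coverage

# the prediction interval for a new x is [2x - q, 2x + q]
test = [sample() for x_ in range(2000)]
cov = sum(abs(y - 2 * x) <= q for x, y in test) / len(test)
```

Vanilla split conformal gives coverage guarantees only; the NeurIPS paper adds guarantees on *efficiency* (set size) via PAC-Bayes.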
Modelling the nuanced behaviors of human agents in simulation is one of the grand challenges in robotics. One of our latest models (BITS) makes an important step in this direction. Even better, code is open-sourced:
@iamborisi
@danfei_xu
@Yuxiao_Chen_
Bi-Level Imitation for Traffic Simulation (BITS) is a traffic model that captures the complexity of the real world with incredible fidelity while also outperforming previous methods. Learn more:
A key challenge in robot autonomy is to find ways to quickly and inexpensively adapt key AI-based modules to new environments. Check out our take on this problem in the context of trajectory forecasting.
@jmes_harrison
@iamborisi
Want to deploy your behavior prediction model in many different cities without labeling tons of data? Check out our latest work combining recurrent behavior prediction models with adaptive meta-learning! 🧵
Our latest work on verifying *and* synthesizing robust control barrier functions is out! We exploit useful connections between our setting and global optimization of (min-max) polynomial problems.
@hankyang94
Excited to share a new preprint:
Verification and synthesis of ROBUST control barrier functions for control-affine polynomial systems with bounded state-dependent additive uncertainty and convex polynomial control constraints.
Three key techniques👇
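The core idea of a robust CBF safety filter fits in a few lines in 1-D (a toy, nothing like the paper's SOS-based synthesis for polynomial systems). For x' = u + d with |d| ≤ D and safe set h(x) = x ≥ 0, the filter min (u − u_des)² s.t. u + d ≥ −αh for all admissible d reduces to a closed-form clamp against the worst case d = −D.

```python
# 1-D robust CBF safety filter, closed form.

def robust_cbf_filter(x, u_des, alpha=1.0, D=0.1):
    h = x                    # barrier value: safe iff h >= 0
    u_min = -alpha * h + D   # constraint tightened by worst-case d = -D
    return max(u_des, u_min)

# simulate with an unsafe nominal controller and adversarial disturbance
x, dt = 1.0, 0.01
for _ in range(2000):
    u = robust_cbf_filter(x, u_des=-5.0)  # nominal input pushes toward unsafe
    x += dt * (u - 0.1)                   # disturbance d = -D the whole time
# x decays toward the boundary but never crosses it
```

The hard part, which the paper addresses, is certifying that such an h is a valid robust CBF everywhere for control-affine polynomial systems under control constraints.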
Check out the Stanford story about our recent robotics experiments on the
@Space_Station
In the video, one can see the ground station we set up within the
@StanfordASL
lab, through which we could monitor the experiments in space in real time! With
@ACauligi
@bdmlstanford
Friends in high places.
Stanford researchers sent a robotic gripper, which mimics Gecko feet and could be well-suited for cleaning up space junk, to the International
@Space_Station
for testing. On board to help: Stanford alumna and
@NASA
astronaut Kate Rubins, PhD ’06.
Here is an interview I gave about the drone copter set to take flight for the first time on Mars. I was involved in NASA's 2020 Mars mission while a research technologist
@NASAJPL
, working on the problem of landing site selection - godspeed!
#Robotics
Our next talk:
Feb 3: Marco Pavone (Stanford)
@MarcoPavoneSU
“Safe, Interaction-Aware Decision Making and Control for Robot Autonomy”
Please visit ………… for more information (join the Google group for Zoom link and future announcements).
Check out our recent paper on unifying SDP relaxations for ReLU neural network verification by providing an exact convex formulation: With
@robin_a_brown
@NavidAzizan
@StanfordASL
Can we verify the safety of a deep neural network for deployment in safety-critical settings?
This is a non-convex problem in general, and there have been many existing relaxations constructed for it.
1/2
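As a point of contrast with the paper's exact convex formulation, here is the loosest standard relaxation, interval bound propagation (IBP), in plain Python: sound (the output box always contains every reachable output) but generally far from tight.

```python
# Interval bound propagation through a tiny ReLU network.

def affine_bounds(lo, hi, W, b):
    # bounds on W x + b when each x[j] lies in [lo[j], hi[j]]
    out_lo = [bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
              for row, bias in zip(W, b)]
    out_hi = [bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
              for row, bias in zip(W, b)]
    return out_lo, out_hi

def relu_bounds(lo, hi):
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# hypothetical 2-2-1 ReLU network, inputs in the box [0, 1]^2
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]
W2, b2 = [[1.0, 1.0]], [0.0]
l1, u1 = relu_bounds(*affine_bounds([0.0, 0.0], [1.0, 1.0], W1, b1))
l2, u2 = affine_bounds(l1, u1, W2, b2)  # certified output range [l2[0], u2[0]]
```

SDP relaxations tighten this gap considerably; the paper's contribution is an *exact* convex formulation that unifies them.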
I look forward to discussing AI & the future of mobility with colleagues from Italy and the US. This is part of an exciting webinar series organized by
@ItalyinSanFran
@ItalyinUS
, that will surely foster new
#Italy
/US collaborations
@issnaf
!
On February 9th at 4PM ET, join the first of the
#Italy
/US AI Webinar series! Experts in artificial intelligence from both Italy and the US will discuss how AI can be instrumental for future urban mobility
#ItalyUS160
@NSF
@ItalyinSanFran
. Register here:
excited to be working with
@MarcoPavoneSU
on an
#NSFfunded
project on mass spray disinfection strategies using drones for covid-19 control! more results to come soon!
@lab_tang
@NSF
We are organizing a workshop on
bridging learning and algorithmic fairness in the operation of urban infrastructure systems at the next CPS-IoT Week on May 9. More info here: (including call for papers).
#CPSIoTWeek
@alexandrebayen
@DevanshJalota
@jr_laz
The Department of Aeronautics and Astronautics at
@Stanford
is still accepting applications for a tenure track faculty position. Firm deadline: January 5, 2024. Please apply if interested:
Excited to release FreeNeRF, a lightweight method to tackle the few-shot neural rendering problem. FreeNeRF achieves SOTA performance with *minimal* modifications to plain NeRF. More in the thread below:
@yuewang314
@JiaweiYang118
Project page:
#CVPR2023
We are finally ready to release our first NeRF study “FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization”
Paper:
Project page:
Code & pre-trained models:
(1/9)
If you are an early-career Italian researcher working in North America, I encourage you to apply to the
@issnaf
awards. I received the ISSNAF Franco Strazzabosco Award for Engineers in 2017, and it was a big honor!
Science and research will shape our future and youth will lead the path💡🔬
#ISSNAF
empowers and supports young 🇮🇹 investigators in 🇺🇸 and 🇨🇦 by awarding annually their promising studies.
Here are this year’s categories 👉🏻 .
There’s still time to apply!
We design domain-specific adversarial robustness for
#autonomous
#driving
We show downstream results: it reduces serious accident rates (e.g., collisions and off-road driving) under attacks by 100%, compared to non-robust models.
@NVIDIAAI
Can we discover structure & meta-learn across it in unsegmented time series data? MOCA simultaneously detects changepoints & meta-learns across time for continuous adaptation
Continuous Meta-Learning without Tasks
w
@jmes_harrison
, Sharma,
@MarcoPavoneSU
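MOCA couples changepoint detection with meta-learning; the detection half can be illustrated with its simplest classical cousin, CUSUM (a stand-in sketch only, not MOCA's Bayesian run-length posterior).

```python
# CUSUM changepoint detection on a synthetic stream.
import random

def cusum(stream, mean0, drift=1.0, threshold=8.0):
    """Return the index at which an upward mean shift is declared, else None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - mean0 - drift))  # accumulate shift evidence
        if s > threshold:
            return i
    return None

# synthetic stream: mean 0 for 50 samples, then mean 2
rng = random.Random(0)
stream = [rng.gauss(0, 1) for _ in range(50)] + [rng.gauss(2, 1) for _ in range(50)]
idx = cusum(stream, mean0=0.0)
```

MOCA replaces this hard threshold with a full posterior over the time since the last changepoint, which is what lets it meta-learn across unsegmented data.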
Stunning images of our recent tests on the
@Space_Station
where we demonstrated our gecko-inspired gripper and in particular its unique capabilities to grasp objects in space! With my talented student
@ACauligi
and
@StanfordASL
&
@bdmlstanford
We’ve got a grip!
@AstroVicGlover
helped Honey, one of our free-flying Astrobee 🤖 robots, test a new adhesive tech called the gecko gripper. The device will give the robotic assistants the ability to take on even more tasks aboard the
@Space_Station
:
Excited to be promoted to Associate Professor with tenure! This is a recognition as much for the entire Stanford Autonomous Systems Laboratory, which I have the honor to direct:
I take this opportunity to thank all my collaborators throughout these years.
Excited to release our dataset, developed under a
@NASAaero
ULI grant and based on the
@XPlaneOfficial
simulator, to stress test learning-based perception & control modules for aviation autonomy: . A great collaboration between
@StanfordASL
&
@SISLaboratory
Yesterday I successfully defended my Ph.D. thesis in Aeronautics and Astronautics at Stanford! Thanks to everyone who supported me during this journey, especially
@MarcoPavoneSU
and all the fantastic members of
@StanfordASL
! 🚀
If you are interested in hearing about my latest work on interaction-aware decision making for autonomous cars, tune in today at 6 pm Pacific Time
#autonomousdriving
#autonomoussystems
I am happy to share that I will soon be joining
@nvidia
as a research scientist to work with
@MarcoPavoneSU
et al. on pushing the frontiers of autonomous vehicles; very excited to work on new challenging problems in the AV domain! (1/3)
Researchers are working on a robot concept that could change the way we explore Mars. ReachBot’s unique range of mobility could enable it to traverse deep pits, steep cliffs, and other rugged features, putting unexplored regions of Mars within reach:
(3/3) I received the 2023 CSS Award for Technical Excellence in Aerospace Control for "outstanding contributions to optimal control and decision making and their application to aerospace robotics.”
How can we use meta-learning algorithms in continuously changing environments?
At NeurIPS at 9AM PT today: "Continuous Meta-Learning without Tasks" w/
@apoorva__sharma
,
@chelseabfinn
,
@MarcoPavoneSU
Video + Poster:
Paper:
How to seamlessly blend trajectory forecasting with downstream planning? Check out our latest work MATS to see how one can dramatically improve planning performance via a novel trajectory forecasting representation.
@iamborisi
@adnothing
@AmineElhafsi
We've updated the code for our CoRL 2020 paper about MATS, a new interpretable trajectory forecasting representation for planning and control, now including the multimodal MPC planner in
#julialang
! Check it out today!
Using San Francisco as a model, the algorithm created by the
@Stanford
researchers is able to schedule
#drones
in such a way that hopping around on the bus system allows them to make deliveries anywhere in under an hour.
#icra2020
#robotics
(2/3) 2023 Conference on Decision and Control Outstanding Student Paper Award for the paper, “Exact Characterization of the Convex Hulls of Reachable Sets” w/
@thomas__lew
and R. Bonalli
Interested in how AI is helping to boost space exploration? Join the
@AIforGood
webinar tomorrow, Sep 29: We'll be talking about AI-powered Mars rovers, Mars helicopters, on-orbit satellite servicing, and more!
@spaceroboticist
Robots are frequently the ones exploring our final frontiers. On Thursday, this
@AIforGood
webinar looks at the latest AI and robotic technologies for space science, with HAI faculty member
@drmapavone
moderating.
Finally the first in-person conference in a while!
#KDD2022
!
Looking forward to presenting our work "Graph Meta-Reinforcement Learning for Transferable Autonomous Mobility-on-Demand" as both an oral presentation and a poster.
Let's connect! 🧵👇
How can we enable autonomous vehicles to naturally navigate social scenarios, such as merging into traffic?
@iamborisi
tackles this question in our latest blog post, diving into recent trajectory forecasting methods for autonomous driving.
New paper on arXiv! In it, we present Trajectron++, a state-of-the-art extension of our prior multi-agent trajectory forecasting framework that incorporates agent dynamics and heterogeneous input data (e.g., maps). Check out the code here:
We're baaaack! Broadcasting this Friday 3PM EST (12PM PST): "Skydio Autonomy: Research in Robust Visual Navigation & Real-Time 3D Reconstruction" w/
@adampbry
& Hayk Martiros from
@skydiohq
I look forward to Chad Jenkins (
@odestcj
)'s talk tomorrow, March 12th at 3pm EST on "That Ain’t Right: AI Mistakes and Black Lives"
@RoboticsSeminar
#Robotics
More info here:
Numerical experiments show that our algorithm (with simulated annealing) can outperform Gurobi on maximum clique instances, and we expect significant further improvements given a truly quantum Ising solver (3/3)
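To see what the classical primitive is doing, here is max clique encoded as a QUBO (the standard penalty encoding, not the paper's hybrid algorithm) and attacked with simulated annealing, the role a quantum Ising solver would play in the hybrid scheme.

```python
# Max clique as a QUBO, solved by simulated annealing.
import itertools, math, random

def clique_energy(x, non_edges, penalty=2.0):
    # minimize: -(selected nodes) + penalty * (selected pairs missing an edge)
    return -sum(x) + penalty * sum(x[i] * x[j] for i, j in non_edges)

def anneal(n, edges, steps=20000, rng=random.Random(0)):
    non_edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
                 if (i, j) not in edges and (j, i) not in edges]
    x = [0] * n
    e = clique_energy(x, non_edges)
    best, best_e = x[:], e
    for t in range(steps):
        temp = 2.0 * (1 - t / steps) + 0.01  # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                            # propose a single-bit flip
        e_new = clique_energy(x, non_edges)
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                        # reject: undo the flip
    return [i for i in range(n) if best[i]]

# 5-node graph whose unique maximum clique is {0, 1, 2}
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}
clique = anneal(5, edges)
```

With penalty > 1, any optimum of this QUBO is a maximum clique, which is why an Ising solver of uncertain quality can still be used as a black-box primitive.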
Our work on human intent prediction is one of the top 3 models in the nuScenes prediction challenge! The code can be found here
@iamborisi
@jay_chakravarty
Our Trajectron++ work with Tim Salzmann,
@jay_chakravarty
,
@MarcoPavoneSU
,
@StanfordASL
is one of the top 3 models in the nuScenes prediction challenge on 3 metrics!! Check out the code at and the prediction challenge leaderboard at
About to present the Trajectron, our work on multi-agent trajectory forecasting at
#iccv2019
! If you're here, please stop by poster
#109
this afternoon! If not, feel free to read the paper at
I am co-organizing the AI Safety Session tomorrow at 1-3pm. Trying to answer "What does safety mean and what should we do about it?" Speakers:
@MarcoPavoneSU
,
@EmmaBrunskill
, James Zou, Tino Cuellar, Moses Charikar, Ritchie Lee, Maxime Bouton, and Erika Strandberg.
Specifically, we designed a hybrid quantum-classical algorithm that (1) uses quantum Ising solvers as a primitive with limited requirements on/knowledge of their optimality guarantees and (2) has polynomial complexity in the classical portions of the algorithm. (2/3)