Dhruv Shah

@shahdhruv_

Followers: 3,200 · Following: 1,299 · Media: 94 · Statuses: 792

robot whisperer @GoogleDeepMind | he/him

Berkeley, CA
Joined April 2012
Pinned Tweet
@shahdhruv_
Dhruv Shah
4 months
PhD life peaked during my last official week as a @berkeley_ai student: my research on scaling cross-embodiment learning and robot foundation models won TWO Best Conference Paper Awards at #ICRA2024 🏆🏆 Kudos to @ajaysridhar0 @CatGlossop @svlevine & OXE collaborators! #PhDone
Tweet media (4 images)
31
5
422
@shahdhruv_
Dhruv Shah
5 days
Excited to share that I will be joining @Princeton as an Assistant Professor in ECE & Robotics next academic year! 🐯🤖 I am recruiting PhD students for the upcoming admissions cycle. If you are interested in working with me, please consider applying.
101
47
808
@shahdhruv_
Dhruv Shah
2 months
I “defended” my thesis earlier today — super grateful to @svlevine and everyone at @berkeley_ai for their support through the last 5 years! 🐻 Excited to be joining @GoogleDeepMind and continue the quest for bigger, better, smarter robot brains 🤖🧠
Tweet media one
58
10
505
@shahdhruv_
Dhruv Shah
1 year
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize zero-shot and can serve as a backbone for efficient downstream adaptation. Check out @svlevine's 🧵 below:
@svlevine
Sergey Levine
1 year
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model: 🧵👇
Tweet media one
5
70
362
3
45
216
@shahdhruv_
Dhruv Shah
1 year
We just open-sourced the training and deployment code for ViNT, along with model checkpoints. Try it out on your own robot! We will also be doing a live robot demo at @corl_conf #CoRL2023 in Atlanta. Come say hi to our robots 🤖
@shahdhruv_
Dhruv Shah
1 year
Excited to share our attempt at a general-purpose "foundation model" for visual navigation, with capabilities that generalize zero-shot and can serve as a backbone for efficient downstream adaptation. Check out @svlevine's 🧵 below:
3
45
216
3
42
174
@shahdhruv_
Dhruv Shah
4 months
I’m supposed to be graduating this week 🎓 But instead, I’ll be at #ICRA2024 in beautiful Japan all week, presenting an award finalist talk on Tue and at a workshop on Friday! Come find me / DM to chat all things robot learning, job market, veggie food @ Japan and karaoke 🍜🍵🎤
Tweet media one
10
1
160
@shahdhruv_
Dhruv Shah
11 months
Visual Nav Transformer 🤝 Diffusion Policy. Works really well and is ready for deployment on your robot today! We will also be demoing this at @corl_conf 🤖 Videos, code and checkpoints: Work led by @ajaysridhar0 in collaboration with @CatGlossop @svlevine
@svlevine
Sergey Levine
11 months
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A 🧵👇
1
61
397
3
21
133
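The tweets above describe pairing the ViNT encoder with a diffusion policy decoder (NoMaD). As a rough illustration of that idea only, here is a toy DDPM-style sampling loop that denoises an action sequence conditioned on an observation embedding; the network, dimensions, and noise schedule are stand-ins, not the released NoMaD code.

```python
# Toy sketch of a diffusion action decoder: sample an action sequence by
# iteratively denoising Gaussian noise, conditioned on an observation
# embedding. Everything below is an illustrative stand-in.
import torch
import torch.nn as nn

H, A, T = 8, 2, 50                     # horizon, action dim, diffusion steps
betas = torch.linspace(1e-4, 0.02, T)  # simple linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

class NoisePredictor(nn.Module):
    """Predicts the noise added to a flattened action sequence."""
    def __init__(self, obs_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(H * A + obs_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, H * A),
        )
    def forward(self, actions, obs_emb, t):
        x = torch.cat([actions.flatten(1), obs_emb,
                       t.float().unsqueeze(1) / T], dim=1)
        return self.net(x).view(-1, H, A)

@torch.no_grad()
def sample_actions(model, obs_emb):
    a = torch.randn(obs_emb.shape[0], H, A)        # start from pure noise
    for t in reversed(range(T)):
        tt = torch.full((obs_emb.shape[0],), t)
        eps = model(a, obs_emb, tt)
        # DDPM posterior mean, then add noise for all but the final step
        a = (a - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            a = a + torch.sqrt(betas[t]) * torch.randn_like(a)
    return a

model = NoisePredictor()
obs_emb = torch.randn(1, 256)   # stand-in for a ViNT-style observation embedding
print(sample_actions(model, obs_emb).shape)  # (1, 8, 2)
```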
@shahdhruv_
Dhruv Shah
6 years
@guykirkwood @elonmusk @xkcdComic I mean he did send a mannequin to space in a cherry red electric roadster. Go easy on the chap!
0
0
118
@shahdhruv_
Dhruv Shah
11 months
I’ll be in Atlanta for #CoRL2023 with new papers, robots, and an engaging workshop! Also thrilled to share that I’m on the job market, looking for tenure-track & industry research positions focused on robot learning and embodied AI. Would love to chat about potential roles 🧵:
1
10
120
@shahdhruv_
Dhruv Shah
2 years
I scraped OpenReview to generate the @corl_conf review distribution so you don’t have to. #CoRL2022
Tweet media one
4
5
85
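The script behind the review-distribution plot above isn't included in the scrape. A minimal sketch of how one might pull review ratings from the public OpenReview API and histogram them is below; the invitation string and the "rating" field name are assumptions about the venue's configuration, not details from the original tweet.

```python
# Minimal sketch: fetch CoRL reviews from the public OpenReview API and plot
# the rating distribution. Invitation pattern and field names are assumptions.
import requests
import matplotlib.pyplot as plt

API = "https://api.openreview.net/notes"
# Hypothetical invitation pattern for official reviews.
INVITATION = "robot-learning.org/CoRL/2022/Conference/Paper.*/-/Official_Review"

ratings = []
offset = 0
while True:
    resp = requests.get(API, params={"invitation": INVITATION,
                                     "offset": offset, "limit": 1000})
    notes = resp.json().get("notes", [])
    if not notes:
        break
    for note in notes:
        rating = note["content"].get("rating")  # e.g. "6: Marginally above threshold"
        if rating:
            ratings.append(int(str(rating).split(":")[0]))
    offset += len(notes)

plt.hist(ratings, bins=range(1, 12))
plt.xlabel("Review rating")
plt.ylabel("Count")
plt.title("Review score distribution")
plt.show()
```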
@shahdhruv_
Dhruv Shah
1 year
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023 CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way 🫶
1
12
83
@shahdhruv_
Dhruv Shah
1 year
Super excited to be in London next week for #ICRA2023, presenting some exciting new research and meeting the community! I'll be presenting 3 recent projects with my collaborators and organizing a workshop on Friday. If you're around and want to meet, come say hi/DM! 🧵:
Tweet media one
3
10
75
@shahdhruv_
Dhruv Shah
2 years
A simple interface to remotely teleop your robot over the internet: For days when you don't feel like going into lab but need to get work done. Works on any ROS-based robot and from *anywhere*, super lightweight. #Robotics #OpenSource @rosorg
0
14
69
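The teleop tool linked in the tweet above is not shown here. As context for why something like this can work on "any ROS-based robot," here is a minimal, unrelated sketch that publishes velocity commands on the conventional /cmd_vel topic with rospy; the topic name, rate, and velocities are assumptions.

```python
# Minimal sketch of commanding a ROS-based mobile robot: publish
# geometry_msgs/Twist messages on /cmd_vel, the topic most ROS bases
# subscribe to. Topic name, rate, and velocities are assumptions.
import rospy
from geometry_msgs.msg import Twist

def drive(linear=0.2, angular=0.0, duration_s=2.0):
    rospy.init_node("simple_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    cmd = Twist()
    cmd.linear.x = linear    # forward velocity in m/s
    cmd.angular.z = angular  # yaw rate in rad/s
    rate = rospy.Rate(10)    # stream commands at 10 Hz
    end = rospy.Time.now() + rospy.Duration(duration_s)
    while not rospy.is_shutdown() and rospy.Time.now() < end:
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())     # zero command to stop the robot

if __name__ == "__main__":
    drive()
```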
@shahdhruv_
Dhruv Shah
2 years
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022 , Dec 15🤖 Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28📅
Tweet media one
3
26
69
@shahdhruv_
Dhruv Shah
3 years
New blog post on making robots physically explore real world spaces, so you can invite them home for the holidays! I’ll be presenting this work as an Oral @corl_conf in London on Tuesday. If you’re attending, come say hi! #robotics #CoRL2021
@svlevine
Sergey Levine
3 years
Check out @_prieuredesion's blog post on RECON: a robot that learns to search for goals in new envs using 10s of hours of offline data! RECON dynamically builds "mental maps" of new environments. Dhruv will give a long presentation about RECON @corl_conf next week!
1
20
63
1
22
67
@shahdhruv_
Dhruv Shah
2 years
I'll be presenting LM-Nav at the evening poster session today at @corl_conf : 4pm in the poster lobby outside FPAA. Come find me! Videos, code and more here:
@svlevine
Sergey Levine
2 years
Can we get robots to follow language directions without any data that has both nav trajectories and language? In LM-Nav, we use large pretrained language models, language-vision models, and (non-lang) navigation models to enable this in zero shot! Thread:
5
70
361
2
10
64
@shahdhruv_
Dhruv Shah
3 years
We're making strides towards truly "in-the-wild" robotic learning systems that can operate with no human intervention. RECON leverages the strengths of latent goal models and topological maps to perform rapid goal-directed exploration in unstructured real-world envs with no supervision.
@svlevine
Sergey Levine
3 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals 🧵>
3
40
172
1
10
64
@shahdhruv_
Dhruv Shah
2 years
I had a great time at #RSS2022 this week, incredibly grateful to the organizers for putting on such a wholesome show @RoboticsSciSys! Presented ViKiNG, which was a Best Systems Paper Finalist! #LDoD workshop: talks and panel on YT
Tweet media (4 images)
2
6
57
@shahdhruv_
Dhruv Shah
2 years
Waking up to exciting news: ViKiNG was accepted to #RSS2022 @RoboticsSciSys ! Looks like it’s an east coast summer for roboticists 🤖
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc.: A thread:
2
26
159
1
7
54
@shahdhruv_
Dhruv Shah
3 years
Contrastive learning + GCRL teaches robots the lost art of map reading, enabling kilometer-scale visual navigation without interventions. We deploy the robot in suburban neighborhoods, Berkeley hills and even take it hiking! Great summary 🧵 by @svlevine
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc.: A thread:
2
26
159
1
11
50
@shahdhruv_
Dhruv Shah
4 months
We’ll be presenting NoMaD today at the Awards track at #ICRA2024 , where it’s nominated for 3 Best Paper awards!! If you missed the talk yesterday, come find @ajaysridhar0 and me at the 13:30 poster session. Good luck navigating the maze
@svlevine
Sergey Levine
11 months
ViNT (Visual Nav Transformer) now has a diffusion decoder, which enables some cool new capabilities! We call it NoMaD, and it can explore new environments, control different robots, and seek out goals. If you want an off-the-shelf navigation foundation model, check it out! A 🧵👇
1
61
397
0
4
48
@shahdhruv_
Dhruv Shah
4 years
Excited to share what I've been working on for the past few months! We teach robots to reach arbitrary goals that you can specify as images from a phone camera! This versatility lets it do cool things -- like delivering pizza or patrolling a campus. @svlevine tweets details ->
@svlevine
Sergey Levine
4 years
RL enables robots to navigate real-world environments, with diverse visually indicated goals: w/ @_prieuredesion , B. Eysenbach, G. Kahn, @nick_rhinehart paper: video: Thread below ->
1
38
184
0
5
47
@shahdhruv_
Dhruv Shah
3 years
VFS was accepted to @iclr_conf #ICLR2022!! The ICLR rebuttal process remains the most productive and effective review cycle in the game, and it is extremely satisfying as an author and reviewer to collectively improve the quality of submissions.
@svlevine
Sergey Levine
3 years
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short 🧵>
Tweet media one
5
27
159
1
4
44
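The quoted thread above summarizes the VFS idea: the value functions of low-level primitives become the state representation for the high-level policy. Below is a toy sketch of that construction, with made-up networks and dimensions rather than the paper's implementation.

```python
# Toy sketch of a Value Function Space (VFS) representation: the state seen by
# the high-level policy is simply the vector of value estimates of K low-level
# skills. The critics and dimensions here are illustrative stand-ins only.
import torch
import torch.nn as nn

K, OBS_DIM = 5, 64   # number of skill primitives, raw observation size

# One small critic per skill; in practice these would come from pretrained skills.
skill_critics = nn.ModuleList([
    nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    for _ in range(K)
])

def vfs_state(obs: torch.Tensor) -> torch.Tensor:
    """Map a raw observation to its K-dimensional value-function-space state."""
    with torch.no_grad():
        values = [critic(obs) for critic in skill_critics]   # each (B, 1)
    return torch.cat(values, dim=-1)                          # (B, K)

# The high-level policy operates directly on the abstract VFS state.
high_level_policy = nn.Sequential(nn.Linear(K, 32), nn.ReLU(), nn.Linear(32, K))

obs = torch.randn(4, OBS_DIM)
print(high_level_policy(vfs_state(obs)).shape)  # (4, 5): a score per skill
```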
@shahdhruv_
Dhruv Shah
9 months
Aficionados of robot learning: join us in Hall B2 at #NeurIPS2023 for some cutting-edge talks, posters, a spicy debate, and live robot demos! The robots are here, are you? We also have some GPUs for a "Spicy Question of the Day Prize" 🌶️, don't miss out
0
6
42
@shahdhruv_
Dhruv Shah
3 years
RECON accepted as an Oral Talk at @corl_conf 2021!! What are the odds we actually get a live audience in London? 🤞🏼
@svlevine
Sergey Levine
3 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals 🧵>
3
40
172
1
8
42
@shahdhruv_
Dhruv Shah
6 days
Yet another year, yet another @ieee_ras_icra PaperCept server crash. Happy ICRA (not a) deadline to those who celebrate :)
Tweet media one
2
3
45
@shahdhruv_
Dhruv Shah
2 years
New video from @twominutepapers features our recent research on zero-shot instruction following with real robots! Joint work with @berkeley_ai @GoogleAI @blazejosinski @brian_ichter @svlevine Check out our paper for more:
3
8
38
@shahdhruv_
Dhruv Shah
4 years
“Virtual” socials have come a long way since the start of the pandemic, and this is a stellar example of what they can be! Karaoke, conference rooms, game rooms, photo booths,... The @berkeley_ai social was the most badass virtual party EVER! Kudos to the team
Tweet media (4 images)
0
2
32
@shahdhruv_
Dhruv Shah
2 years
On Tuesday, I’m stoked to be presenting ViKiNG — which has been nominated for the Best Systems Paper award — at the Long Talk Session 3! Please stop by my talk or poster later that evening. Joint work with @svlevine @berkeley_ai @RoboticsSciSys
@svlevine
Sergey Levine
3 years
How can we get a robot to read a map? With contrastive learning, goal-conditioned RL, and topological planning, ViKiNG can navigate to destinations more than a km away without any interventions! Drives over roads, cuts across fields, etc.: A thread:
2
26
159
2
7
32
@shahdhruv_
Dhruv Shah
11 months
On Tuesday, we'll be presenting FastRLAP at the evening poster session. We make an RC Car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work w/ @KyleStachowicz Arjun Bhorkar @ikostrikov @svlevine
@svlevine
Sergey Levine
1 year
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author...): Thread:
5
61
295
1
1
31
@shahdhruv_
Dhruv Shah
1 year
On Monday, I'll present FastRLAP at the Pretraining for Robotics workshop. #ICRA2023 We make an RC Car go brrr, pixels-to-actions, in under 20 minutes of real-world practice! Work co-led with @KyleStachowicz and Arjun Bhorkar.
@svlevine
Sergey Levine
1 year
Can we use end-to-end RL to learn to race from images in just 10-20 min? FastRLAP builds on RLPD and offline RL pretraining to learn to race both indoors and outdoors in under an hour, matching a human FPV driver (i.e., the first author...): Thread:
5
61
295
2
2
29
@shahdhruv_
Dhruv Shah
10 months
I’ll be at #NeurIPS all week with new research, robot demos, and an exciting workshop on robot learning! Come find me/reach out to chat about all things robotics, embodied reasoning, and vegetarian food in NOLA 🥗 Here’s where you can find me:
Tweet media one
1
1
29
@shahdhruv_
Dhruv Shah
3 years
VFS is a simple, yet effective, way to obtain skill-centric representations that really help long-horizon reasoning and generalization in HRL. Check out this great summary thread by @svlevine. Work done during my internship @GoogleAI with @brian_ichter, @alexttoshev
@svlevine
Sergey Levine
3 years
Value function spaces (VFS) uses low-level primitives to form a state representation in terms of their "affordances" - the value functions of the primitives serve as the state. This turns out to really improve generalization in hierarchical RL! Short 🧵>
Tweet media one
5
27
159
1
7
28
@shahdhruv_
Dhruv Shah
2 years
Had a great time in Auckland @corl_conf over the past week! Big thanks to the organizers for the wonderful conference 🙂 Presented LM-Nav and ReViND. Great discussions at the LangRob workshop (videos coming soon)
Tweet media (4 images)
0
2
28
@shahdhruv_
Dhruv Shah
9 months
I'm excited to be speaking at the ML4AD symposium (colocated with #NeurIPS2023 ) at noon! Stop by if you're interested
@Waymo
Waymo
10 months
We’re thrilled to sponsor this year’s ML for Autonomous Driving Symposium on December 14. ML4AD 2023 will see researchers, industry experts, and practitioners come together to redefine the future of autonomous driving technologies. Join us in New Orleans!
Tweet media one
1
15
60
0
2
26
@shahdhruv_
Dhruv Shah
11 months
At #CoRL2023 , the lines are long but the speakers are strong 💪🏼 Come join us in Sequoia 2 and
@oier_mees
Oier Mees
11 months
Join us for the 2nd edition of the #LangRob workshop at #CoRL2023 in vibrant Atlanta! Get ready for an unforgettable day with an all-star ensemble of speakers and two spicy panels that will ignite your passion for language and robotics! 🔥🤖 P.S. Guess who wrote this tweet 😉
Tweet media one
0
11
67
0
2
26
@shahdhruv_
Dhruv Shah
11 months
On Wednesday, I will be presenting ViNT as an oral talk in the morning session 🥱 We show that cross-embodiment training generalizes well zero-shot and can be adapted to various downstream tasks.
@svlevine
Sergey Levine
1 year
We developed a new navigation model that can be trained on many robots and provides a general initialization for a wide range of downstream navigational tasks: ViNT (Visual Navigation Transformer) is a general-purpose navigational foundation model: 🧵👇
Tweet media one
5
70
362
1
2
23
@shahdhruv_
Dhruv Shah
2 years
Excited to share preliminary generations from our new photorealistic text-to-image diffusion model for "Berkeley in the snow".
@Connorstp
it’s me, csp
2 years
Tweet media (4 images)
1
12
105
1
0
23
@shahdhruv_
Dhruv Shah
1 year
On Wednesday, I'll present GNM: a pre-trained embodiment-agnostic navigation model (with public checkpoints!) that can drive any robot! #ICRA2023
@svlevine
Sergey Levine
2 years
We're releasing our code for "driving any robot", so you can also try driving your robot using the general navigation model (GNM): Code goes with the GNM paper: Should work for locobot, hopefully convenient to hook up to any robot
14
176
855
1
3
23
@shahdhruv_
Dhruv Shah
3 years
Very interesting and engaging tutorial on Social Reinforcement Learning for your robots by @natashajaques at #CoRL2021 @corl_conf It’s being streamed on YouTube if you wanna join:
Tweet media one
0
3
22
@shahdhruv_
Dhruv Shah
2 years
Some late night encouragement from @weights_biases to keep churning experiments for your sweet sweet papers!
1
0
21
@shahdhruv_
Dhruv Shah
2 years
The @ieee_ras_icra Workshop on Robotic Perception and Mapping seems like such a throwback to busy conference days — 100s of attendees in a room! Great panel @lucacarlone1 @AjdDavison @fdellaert #ICRA2022
Tweet media one
0
1
22
@shahdhruv_
Dhruv Shah
3 years
Excited to finally be attending a physical conference and meet friends in-person at #CoRL2021 . Lovely conference venue, looking forward to the exciting talks. If you’re attending and wanna chat, let’s catch up! DMs open.
Tweet media (2 images)
0
1
21
@shahdhruv_
Dhruv Shah
2 years
Cool new work combining CLIP and a 3D model for zero-shot 3D reasoning!! Lots of exciting progress using CLIP as a reliable language interface for vision and robotics tasks. We found CLIP to be very useful to ground landmarks for robotic navigation too:
@SongShuran
Shuran Song
2 years
Semantic abstraction -- give CLIP new 3D reasoning capabilities, so your robots can find that “rapid test behind the Harry Potter book.” 😉 w. Huy Ha
0
29
164
2
1
20
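The tweet above mentions using CLIP to ground landmarks for robotic navigation (as in LM-Nav). A minimal sketch of scoring candidate landmark phrases against a camera frame with the Hugging Face CLIP implementation follows; the checkpoint name, image path, and phrases are illustrative choices only.

```python
# Minimal sketch: score text landmarks against an image with CLIP, as one
# might do when grounding language landmarks for navigation. Checkpoint and
# landmark phrases are illustrative, not taken from any paper.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

landmarks = ["a fire hydrant", "a stop sign", "a picnic bench", "a blue door"]
image = Image.open("observation.jpg")  # hypothetical robot camera frame

inputs = processor(text=landmarks, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean the image is a better match for that landmark phrase.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for phrase, p in zip(landmarks, probs.tolist()):
    print(f"{phrase}: {p:.2f}")
```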
@shahdhruv_
Dhruv Shah
3 years
RECON was featured on Computer Vision News @RSIPvision : I also presented RECON at @corl_conf in London last month, talk video here: Videos and dataset @
@svlevine
Sergey Levine
3 years
Can robots navigate new open-world environments entirely with learned models? RECON does this with latent goal models. "Run 1": search a never-before-seen environment, and build a "mental map." "Run 2": use this mental map to quickly reach goals 🧵>
3
40
172
0
5
20
@shahdhruv_
Dhruv Shah
11 months
Also on Wednesday, we'll be presenting LFG at the evening poster session. We use CoT and a novel scoring method for LLMs to construct narratives and guide the robot through unseen environments in search of a goal.
@svlevine
Sergey Levine
11 months
Can LLMs help robots navigate? It's hard for LLMs to just *tell* the robot what to do, but they can provide great heuristics for navigation by coming up with guesses. With LFG, we ask the LLM to come up with a "story" about which choices are better, then use it w/ planning. 🧵👇
4
53
313
0
4
19
@shahdhruv_
Dhruv Shah
2 years
Excited to be in Philly for @ieee_ras_icra ! Looking forward to catching up with friends and checking out exciting new research in person after a long hiatus. Please reach out if you’re around and wanna chat 😁
Tweet media one
0
0
19
@shahdhruv_
Dhruv Shah
2 years
Excited to be in New York City for @RoboticsSciSys #RSS2022 ! On Monday, we’ll be organizing this workshop on Learning from Diverse, Offline Data with cutting edge papers, speakers and an in-person panel! Please join us if you’re in town, or virtually!
@siddkaramcheti
Siddharth Karamcheti
2 years
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
Tweet media one
2
24
86
3
1
19
@shahdhruv_
Dhruv Shah
2 years
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27* ⏳
Tweet media one
0
9
17
@shahdhruv_
Dhruv Shah
5 years
Dr. Signe Redfield on the definition of robotics as a field and rise of a Kuhnian scientific revolution... Read more at #ICRA2019 #RoboticsDebates
Tweet media (3 images)
0
3
18
@shahdhruv_
Dhruv Shah
5 years
@TrackCityChick Very unfortunate, exacerbated for international students by the sky-high living expenses in the Bay Area. Very glad that Berkeley provides a semester's stipend in advance (~start of classes) for grad students! This should totally be the norm
0
1
18
@shahdhruv_
Dhruv Shah
2 years
TIL: You can anonymize a @github code repo for double-blind peer review with this neat tool by @thodurieux It comes with a navigator and also has some basic anonymization options like removing links or specific words. Really cool!
0
1
17
@shahdhruv_
Dhruv Shah
11 months
@jeremyphoward @simonw I would've thought they were A/B testing different variants, but trying out the (hopefully more stable) API, this is not a new thing. The original release model (gpt-4-0314) also seems to know about the slap...
Tweet media one
3
2
16
@shahdhruv_
Dhruv Shah
2 years
It’s a full house! Our @RoboticsSciSys Workshop on Learning from Diverse, Offline Data is happening now at Mudd 545, and online at
Tweet media (3 images)
@siddkaramcheti
Siddharth Karamcheti
2 years
#LDOD is kicking off in ~30 minutes; join our free-to-view livestream here: See y'all soon!
0
0
7
0
4
17
@shahdhruv_
Dhruv Shah
1 year
On Tuesday, we'll present ExAug at the 3PM poster session. #ICRA2023 Work led by @noriakihirose .
@svlevine
Sergey Levine
2 years
Experience augmentation (ExAug) uses 3D transformations to augment data from different robots to imagine what other robots would do in similar situations. This allows training policies that generalize across robot configs (size, camera placement): Thread:
3
24
130
1
1
15
@shahdhruv_
Dhruv Shah
2 years
LangRob workshop happening now at #CoRL2022 in ENG building, room 401! Pheedloop and stream for virtual attendees:
Tweet media one
@shahdhruv_
Dhruv Shah
2 years
Announcing the Workshop on Language and Robot Learning at @corl_conf #CoRL2022 , Dec 15🤖 Exciting lineup of speakers from the robotics, ML and NLP communities to discuss the present and future of language in robot learning! Inviting papers, due Oct 28📅
Tweet media one
3
26
69
0
3
13
@shahdhruv_
Dhruv Shah
2 years
Reminder -- papers are due AoE tonight!! Please consider sharing your new research with the broader robotics and machine learning community at @RoboticsSciSys 2022 in NYC or remotely 🤖
@shahdhruv_
Dhruv Shah
2 years
Have a recent @NeurIPSConf/WIP draft on offline learning, dataset curation, benchmarking, learning from multimodal data or related topics? Please consider submitting to our RSS workshop for quick feedback. Paper deadline now extended to *May 27* ⏳
Tweet media one
0
9
17
1
4
15
@shahdhruv_
Dhruv Shah
2 years
Are you an early-stage researcher (grad student/postdoc) interested in a fireside chat with one of our workshop speakers? We're inviting signups for a 1:1 discussion. Please email ldod_rss2022@googlegroups.com with a bit about yourself and the speaker you'd like to chat with!
@siddkaramcheti
Siddharth Karamcheti
2 years
Diverse, representative data is becoming increasingly important for building generalizable robotic systems. We're organizing the Workshop on Learning from Diverse, Offline Data (L-DOD) at RSS 2022 (NYC/hybrid) to come together and discuss this!
Tweet media one
2
24
86
3
7
15
@shahdhruv_
Dhruv Shah
10 months
@sea_snell you did it before it was cool
Tweet media one
0
0
14
@shahdhruv_
Dhruv Shah
5 years
Unknowingly kicked off the #icra2019ScavengerHunt earlier today at this beautiful place! #icra2019MontRoyal #TeamWookie #icra2019
Tweet media (3 images)
1
3
13
@shahdhruv_
Dhruv Shah
3 years
@CSProfKGD The Berkeley DL course generally gets a huge undergraduate cohort: Prev offering:
0
1
12
@shahdhruv_
Dhruv Shah
2 years
@ammaryh92 @jeremyphoward @__mharrison__ @rasbt @svpino I love Jupyter Lab but the real champ is VSCode + Jupyter notebook extension — it’s like Lab, but much more customizable and feels like a notebook inside of your favorite editor with keybindings. Bonus: works with @OpenAI @Github Copilot!
1
1
12
@shahdhruv_
Dhruv Shah
5 years
Action-packed day at Bay Area Robotics Symposium 2019 @UCBerkeley @berkeley_ai with exciting research from Berkeley, @Stanford , @ucsc and industry :D Props to Mark Mueller and @DorsaSadigh for organizing 🤖
Tweet media one
0
1
13
@shahdhruv_
Dhruv Shah
1 year
@xuanalogue @jeremyphoward This is a shame! My collaborators and I have done a lot of work that leverages the logprobs in a probabilistic planning framework and found it very useful. I guess that's why you shouldn't use closed models for research…
1
0
13
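For context on "leveraging the logprobs," one common pattern is to score each candidate subgoal by the log-probability the model assigns to an affirmative answer and feed those scores to a planner. The sketch below uses the OpenAI chat API's logprobs option; the model name, prompt, and candidates are placeholders, and this is not the planner from the referenced work.

```python
# Sketch of using token log-probabilities to score candidate subgoals, the kind
# of signal a probabilistic planner can consume. Model, prompt, and candidates
# are placeholders for illustration only.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_candidate(context: str, candidate: str, model: str = "gpt-4o-mini") -> float:
    """Score a candidate by the log-probability the model assigns to 'yes'."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"{context}\nIs '{candidate}' a good next landmark to explore? Answer yes or no.",
        }],
        max_tokens=1,
        logprobs=True,
        top_logprobs=5,
    )
    top = resp.choices[0].logprobs.content[0].top_logprobs
    yes = [t.logprob for t in top if t.token.strip().lower() == "yes"]
    return yes[0] if yes else -math.inf

candidates = ["the blue door", "the parking lot", "the stairwell"]
context = "The robot is in a hallway looking for the exit."
best = max(candidates, key=lambda c: score_candidate(context, c))
print("Highest-scoring candidate:", best)
```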
@shahdhruv_
Dhruv Shah
2 years
@hardmaru To be fair, that's probably just the cost to train the final/released model, and does not include the compute used in tuning hyperparameters and failed experiments? The overall $$ of the project would likely be at least an order of magnitude higher than that of the final model.
0
0
12
@shahdhruv_
Dhruv Shah
5 years
It's unbelievable what the Ocean One achieved with neat research and remarkable engineering efforts! Oussama Khatib on the need for compliant robots and the story behind the Marseille shipwreck recovery @StanfordAILab #ICRA2019 Interesting video:
Tweet media (3 images)
0
1
12
@shahdhruv_
Dhruv Shah
2 years
Excited to share our latest research on customizing learned navigation behaviors by combining offline RL with topological graphs -- ReViND. I'll be presenting ReViND at the 11am oral session today @corl_conf . Please join! Videos, code and more:
@svlevine
Sergey Levine
2 years
Offline RL with large navigation datasets can learn to drive real-world mobile robots while accounting for objectives (staying on grass, on paths, etc.). We'll present ReViND, our offline RL + graph-based navigational method at CoRL 2022 tomorrow. Thread:
1
30
95
0
0
13
@shahdhruv_
Dhruv Shah
3 years
At the @Tesla AI Day event today and there’s a Cybertruck to greet us at the gate. Looking forward to what’s waiting inside!
Tweet media one
1
0
12
@shahdhruv_
Dhruv Shah
2 years
@andreasklinger @ajayj_ had this really cool CVPR paper that does some version of this:
0
0
12
@shahdhruv_
Dhruv Shah
9 months
@SOTA_kke quadrupeds unite
0
1
11
@shahdhruv_
Dhruv Shah
2 years
@ikostrikov @OpenAI jaxgpt coming soon
0
0
11
@shahdhruv_
Dhruv Shah
1 year
Deadline for submitting papers and demo proposals now EXTENDED to **next** Friday, Oct 6 AoE!
@shahdhruv_
Dhruv Shah
1 year
Announcing the 6th Robot Learning Workshop @NeurIPSConf on Pretraining, Fine-Tuning, and Generalization with Large Scale Models. #NeurIPS2023 CfP: Don't like your #CoRL2023 reviews? Love them? We welcome your contributions either way 🫶
1
12
83
0
3
11
@shahdhruv_
Dhruv Shah
1 year
TIL: @GoogleAI Bard works quite well with images. Pretty impressive!
Tweet media one
0
1
10
@shahdhruv_
Dhruv Shah
5 years
The number of passengers with poster tubes on this flight from Frankfurt to Montreal is too high..! Coincidence or #ToICRA2019 ? @icra2019 @ieee_ras_icra
0
0
10
@shahdhruv_
Dhruv Shah
5 years
@maththrills moderating the most scintillating debate of #ICRA2019 : "The pervasiveness of deep learning is an impediment to gaining scientific insights into robotics problems" It's a full-house! @angelaschoellig , Nick Roy @MIT_CSAIL , Ryan Gariepy @clearpathrobots and Oliver Brock
Tweet media one
2
3
10
@shahdhruv_
Dhruv Shah
4 years
Looking forward to my first "virtual" @iclr_conf! The interface looks very clean and well-designed; great effort pioneering this, @srush_nlp and co. 👏
@srush_nlp
Sasha Rush
4 years
1/ Spent the last couple weeks in quarantine obsessively coding a website for Virtual ICLR with @hen_str . We wanted to build something that was fun to browse, async first, and feels alive.
39
399
2K
2
0
9
@shahdhruv_
Dhruv Shah
2 years
@andyzengtweets and @xf1280 from @GoogleAI on language as a glue for intelligent machines and a live demo of their PaLM-SayCan system! (9/12)
Tweet media one
1
2
9
@shahdhruv_
Dhruv Shah
3 years
Some great demos of exploring unseen cafeterias and fire stations under different seasons and lighting on the project page: Video: Work with amazing collaborators @berkeley_ai : @ben_eysenbach @nick_rhinehart and @svlevine
0
1
9
@shahdhruv_
Dhruv Shah
6 years
@mikb0b Libraries on top of Matplotlib usually work well enough. Once in a while, I've used Inkscape/online software for a particular graphic I wanted. PS: I challenge everyone to use Matplotlib+XKCD in a paper
Tweet media one
0
0
8
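For anyone taking up the Matplotlib+XKCD challenge from the reply above, the xkcd style is built into Matplotlib; a minimal example:

```python
# Matplotlib's built-in xkcd style: wrap plotting code in plt.xkcd() to get
# hand-drawn-looking axes and lines. Paper-readiness not guaranteed.
import matplotlib.pyplot as plt

with plt.xkcd():
    fig, ax = plt.subplots()
    ax.plot([0, 1, 2, 3], [0, 1, 4, 9], label="effort")
    ax.set_xlabel("weeks before deadline")
    ax.set_ylabel("experiments per day")
    ax.legend()
    plt.show()
```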
@shahdhruv_
Dhruv Shah
6 years
A good friend introduced me to @MathpixApp today. Works way beyond my expectations! Biggest thing to happen to me since starting TeXing... Highly recommend to everyone #phdlife #AcademicTwitter
Tweet media one
1
3
8
@shahdhruv_
Dhruv Shah
5 years
Interesting (and important) ideas on the cycle of bias and the need for inclusiveness at venues like #ICRA2019 by Karime Pereida and Melissa Greeff @utiasSTARS #RoboticsDebates #robotics #diversitymatters
Tweet media one
0
2
8
@shahdhruv_
Dhruv Shah
3 months
@ehsanik Not at CVPR, but loving this! We need this at every conference 🙃
0
0
7
@shahdhruv_
Dhruv Shah
5 years
#ICRA2019 Milestone Award for best paper from 20 years ago to Steven LaValle and James Kuffner! (The world needs better telepresence tools 🤷)
Tweet media one
1
0
8
@shahdhruv_
Dhruv Shah
4 years
We'll be presenting our spotlight talk on getting robots to learn in the real-world without hand-engineered resets, rewards and state information at ICLR 2020! Tune in at 10PM tonight or 5AM tomorrow PDT to know more. Blogpost: @iclr_conf @berkeley_ai
@svlevine
Sergey Levine
4 years
Check out our ICLR spotlight: Ingredients of Real-World Robotic Reinforcement Learning! How can we set up robots to learn with RL, without manual engineering for resets, rewards, or vision? Talk Paper Poster
1
25
101
1
0
8
@shahdhruv_
Dhruv Shah
5 years
Metaphors converging to real advice on embracing failure without fear #ICRA2019
Tweet media (2 images)
0
0
7
@shahdhruv_
Dhruv Shah
6 years
Excited to share that our work on swarm aggregation without communication was accepted to RA-Letters and for presentation at #ICRA 2019. Very satisfied with the RAL review process and quality of feedback! Work done @iitbombay Early-access: #robotics
1
1
7
@shahdhruv_
Dhruv Shah
5 years
"If a startup is a marathon, a robotics startup is a decathlon" ~ Ryan Gariepy @clearpathrobots on commercializing robotics #ICRA2019 #startups
Tweet media one
0
0
7
@shahdhruv_
Dhruv Shah
1 year
We have extended the submission deadline to Sunday **October 8**! We look forward to your amazing robots :)
@xiao_ted
Ted Xiao
1 year
Announcing the 2nd Workshop on Language and Robot Learning at #CoRL2023 on November 6th, 2023! This year's theme is "Language as Grounding". Featuring a great speaker lineup and two panels! Website: CfP: Deadline: October 1
Tweet media one
2
13
76
0
0
7
@shahdhruv_
Dhruv Shah
2 years
Talks and panel discussion from our @corl_conf workshop on Language and Robot Learning #CoRL2022 are now live! 🧵 below:
@oier_mees
Oier Mees
2 years
Check out the recording of our workshop on Language and Robotics @ #CoRL2022 with fantastic speakers @jacobandreas @andyzengtweets @cmat @jeffclune @ybisk @jackayline Dieter Fox, Jean Oh, Alane Suhr, Nakul Gopalan! @xiao_ted @shahdhruv_ @mohito1905
0
12
36
1
0
7
@shahdhruv_
Dhruv Shah
2 years
@shahdhruv_
Dhruv Shah
2 years
I scraped OpenReview to generate the @corl_conf review distribution so you don’t have to. #CoRL2022
Tweet media one
4
5
85
1
2
7
@shahdhruv_
Dhruv Shah
2 years
@maththrills @QUTRobotics Can confirm that this policy generalizes across oceans.
Tweet media one
1
0
7
@shahdhruv_
Dhruv Shah
5 years
There's a lot to say about what @OpenAI GPT-2 does and does not get right, but this is a piece of sheer magnificence! The style and context have been continued very well; I particularly like how the April-May-June continuity shapes the poem. #NLProc #AcademicTwitter
Tweet media one
0
1
7
@shahdhruv_
Dhruv Shah
9 months
@tonyzzhao @zipengfu @chelseabfinn Very impressive demos and autonomous policies! Congratulations to all of you :)
1
0
4
@shahdhruv_
Dhruv Shah
4 years
Did RSS deadline come an hour early? I am unable to edit my submission on CMT... @RoboticsSciSys
2
0
6
@shahdhruv_
Dhruv Shah
5 years
Sneaking in a peaceful morning ahead of another busy day at #icra2019 #icra2019scavengerhunt #icra2019basilica #TeamWookie
Tweet media (2 images)
0
1
6
@shahdhruv_
Dhruv Shah
1 year
@SachaMori I really like the embedding visualizations of the environment topology! Super cool
Tweet media one
0
2
5