
Harish Ravichandar
@h_ravichandar
Followers
802
Following
2K
Media
36
Statuses
327
Robotics and AI Faculty @GeorgiaTech @ICatGT @GTrobotics. Structured robot learning for reliable and collaborative robots.
Atlanta, GA
Joined May 2010
It was fun to chat with Kausar about our take on structured robot learning, and how it might offer a good tradeoff between data-driven and classical methods! I also think we should stop arguing about whether scaling will work, and instead recognize false dichotomies! But that's for another time :)
New wine in an old bottle: Why scaling isn’t the only path to robot intelligence. 🍷🤖 @h_ravichandar shows how structured learning builds efficient, reliable robots. He envisions intelligent machines so seamless, they'll feel like magic. Full convo:
0
0
4
At #RSS2025 WS on multi-robot systems, Shalin Jain will introduce JaxRobotarium, a Jax-based open-source framework for *anyone* to train and *physically* deploy MARL policies in *minutes*!
🕤 2:30pm PT
🏢 OHE 132
🪧 Poster 43
🌐 🔗
0
2
7
At #RSS2025 WCBM, check out Zhaodong Yang's spotlight talk on AsymDex to learn synchronized & asymmetric bimanual dexterity for multi-fingered hands via RL. No fixed hand bases; zero demonstrations!
🕤 9:30am PT
🏢 OHE 136
🌐 🔗
0
1
8
RT @siddkaramcheti: Thrilled to share that I'll be starting as an Assistant Professor at Georgia Tech (@ICatGT / @GTrobotics / @mlatgt) in….
0
27
0
🚨 We extended the paper submission deadline to April 23rd for our #ICRA2025 workshop! See 🧵 for details.
📢 ICRA workshop on structured robot learning with awesome speakers and an exciting program! We will discuss current solutions and challenges in improving efficiency, reliability, or transparency in robot learning! See 🧵 for details, including a call for papers.
1
0
8
📢 ICRA workshop on structured robot learning with awesome speakers and an exciting program! We will discuss current solutions and challenges in improving efficiency, reliability, or transparency in robot learning! See 🧵 for details, including a call for papers.
We are excited to announce the 2025 ICRA Workshop on Structured Learning for Efficient, Reliable and Transparent Robots (SRL), happening in Atlanta, GA, on May 23rd!
Paper submission link:
Website:
1
5
15
A super cool video introducing an even cooler ego-centric imitation learning project from Danfei's lab here at Georgia Tech!
Prof. Danfei Xu (@danfei_xu) and the Robot Learning and Reasoning Lab (RL2) present EgoMimic, a full-stack framework that scales robot manipulation through egocentric-view human demonstrations collected via Project Aria glasses.
🔖 Blog post:
🔗 GitHub:
0
1
16
RT @LukasSchaefer96: 📚🧵1/7 It is finally here!! Only one more week until the print release of our textbook “Multi-Agent Reinforcement Learn….
0
19
0
RT @GeorgiaChal: Honored for the early career keynote at #CoRL2024 last week—unforgettable milestone! Grateful to the community for the rec….
0
7
0
RT @jbhuang0604: PSA: The Center for Machine Learning @ml_umd invites applications for 2024 ✨Rising Stars in Machine Learning ✨. Please h….
0
12
0
RT @AjayMandlekar: Please use this updated link to apply (if you haven't submitted an application yet):
0
3
0
RT @EugeneVinitsky: Recruiting PhD students this year to the EMERGE lab! The unifying theme is seriously scaling our capability to solve re….
0
44
0
RT @BlackInRobotics: Hello all,.Did you know students and postdocs can travel to robotics-related conferences on a travel grant from Black….
0
12
0
Woohoo! I had the privilege of co-advising @MandyXie9 (I basically signed some forms while she did amazing things with our colleagues at NVIDIA). So proud of her and her company!! Check out their awesome robot lawnmower:
My former student @MandyXie9 started a company to build a robot lawnmower, SLAM-powered of course! You can reserve yours through this Kickstarter campaign!
1
0
8
RT @poetdavidwhyte: Yes, it’s still possible not to hold so tightly
to what you think is true, to bend your head
and assume humility beneat….
0
14
0
📢 In a new #CoRL2024 paper, we fix a key limitation of our prior work on Koopman-based dexterous manipulation. Hongyi figured out how to learn flexible visual features for Koopman dynamics ➡️ No need to manually define or have access to ground-truth states!
#CoRL2024 accepted! 🌈 Our work KOROL develops a linear dynamics model using object features that capture key information for robotic manipulation, outperforming models that rely on ground-truth object states. Code:
0
2
20
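
For context on the Koopman framing in the two posts above, here is a minimal sketch of the general idea, with illustrative notation that is my own assumption rather than the exact formulation from the KOROL paper: observations are lifted into a learned feature space in which the dynamics are modeled as linear, and the linear operator is fit from data.

% Illustrative sketch of Koopman-style dynamics with learned features.
% The encoder \phi, features z_t, and the least-squares objective below are assumed for exposition.
% \phi maps a raw observation x_t (e.g., an image) to features z_t, and the dynamics
% are modeled as linear in that feature space:
\[
  z_t = \phi(x_t), \qquad z_{t+1} \approx K\, z_t .
\]
% Given recorded transitions (x_t, x_{t+1}), the Koopman matrix K can be fit by least squares:
\[
  K \;=\; \arg\min_{K'} \; \sum_t \bigl\lVert \phi(x_{t+1}) - K'\, \phi(x_t) \bigr\rVert_2^2 .
\]

Learning \phi from images, rather than hand-defining it from ground-truth object states, is what the posts above describe as removing the need for manually defined states.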