
Abhinav Gupta
@gupta_abhinav_
Followers: 1K · Following: 44 · Media: 0 · Statuses: 52
Joined February 2024
We are finally out with our results -- notice how general and robust our action policies are (walking in forests, climbing steep slippery hills, and people snatching objects during manipulation). But my favorite part is pulling on the robot while it climbs stairs.
Modern AI is confined to the digital world. At Skild AI, we are building towards AGI for the real world, unconstrained by robot type or task — a single, omni-bodied brain. Today, we are sharing our journey, starting with early milestones, with more to come in the weeks ahead.
4
5
79
RT @SalesforceVC: We spent 7+ months embedded in this world — talking to founders, engineers, and researchers like @drfeifei, @DrJimFan, @p….
0
1
0
RT @ravi_lsvp: Excited to see @skildai unveiling the world's most advanced foundation model for robotics!
0
3
0
Thanks @DrJimFan for raising this too! I believe so many people have shared dancing and kung fu videos that misrepresent the progress in this space.
This is classic Moravec’s Paradox at play -- things that are easy for humans are often difficult for robots, and vice versa! Also nicely articulated by @DrJimFan here:
0
0
7
Don't go after the hype: what looks hard is easy, and what looks easy is very hard! Personally, from what I have seen, this is the most robust and general visual locomotion policy ever.
We’ve all seen humanoid robots doing backflips and dance routines for years. But if you ask them to climb a few stairs in the real world, they stumble! We took our robot on a walk around town to environments it hadn’t seen before. Here’s how it works 🧵⬇️
5
9
134
RT @jasonyzhang2: Last year, my ring bearer was a Skild robot. Excited to see how far they've come!!
0
10
0
RT @chris_j_paxton: Some really impressive stuff in here, love to see it working on so many different robots as well.
0
8
0
It was a pleasure hosting The Humanoid Hub founder! The real robotics results are live demos. And yes, these results are just the tip of the iceberg of what is coming out :)
I had the pleasure of visiting the Skild lab in Pittsburgh about a month ago. It’s easily one of the most futuristic places I’ve seen, packed with robots everywhere -- each busy learning, testing, or solving customer use cases. Even humanoids -- just about every piece of humanoid…
1
2
30
RT @TheHumanoidHub: I had the pleasure of visiting the Skild lab in Pittsburgh about a month ago. It’s easily one of the most futuristic p….
0
42
0
RT @RikoSuminoe69: 2025 was the year of digital agents. 2026 will be the year of physical agents (minus the scaling). Keep an eye on @clonerobot….
0
1
0
RT @TheHumanoidHub: Skild has been quiet since emerging from stealth in July 2024. They just shared their journey so far, showcasing early….
0
56
0
RT @deedydas: This team of the best robotics researchers in the world is building the universal brain to control any robot on any task. A….
0
28
0
RT @SkildAI: Modern AI is confined to the digital world. At Skild AI, we are building towards AGI for the real world, unconstrained by rob….
0
255
0
It's time! So excited to finally reveal tomorrow what we have been up to. A decade of research, starting from the early Smith Hall Baxter days, culminating in this.
We’ve been building quietly — starting tomorrow, we go live. Here’s a teaser of what we did before Skild AI. It has shaped what’s coming next. 07/29. Stay tuned.
2
6
91
RT @SkildAI: We’ve been building quietly — starting tomorrow, we go live. Here’s a teaser of what we did before Skild AI. It has shaped wh….
0
127
0
Thanks @Vikashplus! Funny that the arms/hardware were also very similar, as if this were 2018 :)
Robotics 🔄 @Airbnb. 💪 How it started in 2017 (@gupta_abhinav_ @LerrelPinto & team from @CMU_Robotics). 📈 How it's going in 2025 (@hausman_k @svlevine @chelseabfinn & team from @physical_int)
1
0
14
RT @Vikashplus: Robotics 🔄 @Airbnb. 💪 How it started in 2017 (@gupta_abhinav_ @LerrelPinto & team from @CMU_Robotics). 📈 How it's going in 202….
0
6
0
Collecting robot interaction data for every task where we want to deploy a policy is impractical. We show that it is possible to leverage off-the-shelf video generation models to infer motion cues for robot manipulation of an unseen task in a novel scene! Check out the thread below.
Gen2Act: casting language-conditioned manipulation as *human video generation* followed by *closed-loop policy execution conditioned on the generated video* enables solving diverse real-world tasks unseen in the robot dataset! 1/n
1
1
46
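To make the Gen2Act recipe from the two posts above concrete, here is a minimal, hypothetical Python sketch of the two-stage pipeline: (1) an off-the-shelf video model generates a *human* performing the task in the current scene, then (2) a closed-loop policy acts while conditioning on that generated video. All names here (VideoGenerator, VideoConditionedPolicy, env_step) are illustrative stand-ins, not the actual Gen2Act API.

```python
"""Hypothetical sketch of a Gen2Act-style pipeline; interfaces are assumptions."""
import numpy as np


class VideoGenerator:
    """Stand-in for a pretrained (image, text) -> video generation model."""

    def generate(self, scene_image: np.ndarray, instruction: str) -> np.ndarray:
        # Real system: call an off-the-shelf video generation model here.
        # Dummy: repeat the scene image for T frames -> (T, H, W, 3).
        T = 16
        return np.stack([scene_image] * T)


class VideoConditionedPolicy:
    """Stand-in for a policy trained on robot data that attends to motion
    cues (e.g., tracked points) in the generated human video."""

    def act(self, obs: np.ndarray, video: np.ndarray) -> np.ndarray:
        # Real system: encode current observation + generated video and
        # decode an action. Dummy: zero 6-DoF pose delta + gripper command.
        return np.zeros(7)


def env_step(action: np.ndarray) -> np.ndarray:
    # Dummy environment interface: returns the next camera frame.
    return np.zeros((224, 224, 3), dtype=np.uint8)


def run_episode(instruction: str, first_frame: np.ndarray, horizon: int = 50):
    gen, policy = VideoGenerator(), VideoConditionedPolicy()
    video = gen.generate(first_frame, instruction)  # stage 1: generate once
    obs = first_frame
    for _ in range(horizon):                        # stage 2: closed loop
        action = policy.act(obs, video)             # re-reads obs every step
        obs = env_step(action)


run_episode("pick up the mug", np.zeros((224, 224, 3), dtype=np.uint8))
```

The key property this sketch illustrates: the video model supplies task knowledge for scenes and tasks absent from the robot dataset, while the policy stays closed-loop, re-observing the world at every step rather than blindly replaying the generated video.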