RoboPapers (@RoboPapers)
Followers: 3K · Following: 15 · Media: 171 · Statuses: 181
@chris_j_paxton, @micoolcho & @DJiafei geeking out weekly with authors of robotics AI papers. On YouTube / X / Spotify / Substack
Joined February 2025
Full episode dropping soon! Geeking out with Jacob Berg on Semantic World Models https://t.co/7r0r5IBgom Co-hosted by @micoolcho @chris_j_paxton
@_wenlixiao Learn more: Project Page: https://t.co/czUGmtxy7d arXiv: https://t.co/0iWH9yQSwR Thread on X:
What if robots could improve themselves by learning from their own failures in the real world? Introducing PLD (Probe, Learn, Distill): a recipe that enables Vision-Language-Action (VLA) models to self-improve for high-precision manipulation tasks. PLD
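For a concrete picture of the Probe-Learn-Distill idea, here is a minimal Python sketch of a residual-RL self-improvement loop: a frozen base VLA proposes actions, a small residual policy corrects them, and successful rollouts are kept as distillation data. All class and function names (FrozenVLA, ResidualPolicy, probe_and_collect) and the toy environment are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

class FrozenVLA:
    """Hypothetical stand-in for a pretrained vision-language-action model (kept frozen)."""
    def predict(self, obs: np.ndarray) -> np.ndarray:
        return np.tanh(obs[:4])  # dummy 4-DoF base action

class ResidualPolicy:
    """Small policy that would be trained with RL to correct the base action on failure-prone steps."""
    def __init__(self, dim: int = 4):
        self.w = np.zeros(dim)  # stands in for learned parameters
    def act(self, obs: np.ndarray, base_action: np.ndarray) -> np.ndarray:
        return 0.1 * self.w  # dummy residual correction

def probe_and_collect(reset_fn, step_fn, vla, residual, episodes=20, horizon=50):
    """Probe the task with base + residual actions; keep successful rollouts for distillation."""
    distill_data = []
    for _ in range(episodes):
        obs, traj, success = reset_fn(), [], False
        for _ in range(horizon):
            base = vla.predict(obs)
            action = base + residual.act(obs, base)  # residual RL: correct the base action, don't replace it
            obs, success = step_fn(action)
            traj.append((obs.copy(), action.copy()))
            if success:
                break
        if success:
            distill_data.extend(traj)  # later used to fine-tune (distill into) the VLA
    return distill_data

# Toy usage with a random "environment" just to show the data flow.
rng = np.random.default_rng(0)
reset = lambda: rng.normal(size=8)
step = lambda a: (rng.normal(size=8), bool(rng.random() < 0.1))
data = probe_and_collect(reset, step, FrozenVLA(), ResidualPolicy())
print(f"collected {len(data)} (obs, action) pairs for distillation")
```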
On their own, vision-language-action models are powerful tools for general robot skills, showing impressive generalization. However, they don't achieve useful levels of reliability on valuable manipulation tasks. @_wenlixiao teaches us one way to achieve this reliability:
Full episode dropping soon! Geeking out with @_wenlixiao on Probe, Learn, Distill: Self-Improving Vision-Language-Action Models with Data Generation via Residual RL https://t.co/czUGmtx0hF Co-hosted by @micoolcho @chris_j_paxton
Robotics, as we know, has a data problem. Many workarounds have been proposed, but one of the most important things is just to collect a large amount of real-robot data: something very difficult, especially for mobile humanoids. Enter Humanoid Everyday, which provides a large,
Full episode dropping soon! Geeking out with @zhenyuzhao123, Hongyi Jing, Xiawei Liu, @PointsCoder @yuewang314 on Humanoid Everyday: A Comprehensive Robotic Dataset for Open-World Humanoid Manipulation https://t.co/cuj68kR0td Co-hosted by @micoolcho @chris_j_paxton
Collecting robot teleoperation data for mobile manipulation is incredibly time-consuming, even more so than collecting teleoperation data for a stationary manipulator. Fortunately, @LawrenceZhu22 and @PranavKuppili have a solution: EMMA, or Egocentric Mobile MAnipulation.
Full episode dropping soon! Geeking out with @LawrenceZhu22 @PranavKuppili on EMMA: Scaling Mobile Manipulation via Egocentric Human Data https://t.co/1ssjFyaqoW Co-hosted by @micoolcho @chris_j_paxton
Robots need to be able to apply pressure and make contact with objects as needed in order to accomplish their tasks. From compliance to working safely around humans to whole-body manipulation of heavy objects, combining force and position control can dramatically expand the
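As a toy illustration of combining force and position control, here is a minimal sketch of a classic hybrid force/position scheme (not UniFP's learned unified policy; the function name, gains, and selection-matrix convention are all made up for this example): free axes track a position target while the constrained axis tracks a force target.

```python
import numpy as np

def hybrid_control(x, x_des, f, f_des, S, kp=50.0, kf=0.05):
    """Toy hybrid force/position law for a 3-D end-effector.
    S is a 0/1 selection vector: 1 = force-controlled axis, 0 = position-controlled axis.
    Returns a commanded Cartesian correction (velocity-like)."""
    S = np.diag(S)
    pos_cmd = kp * (x_des - x)      # position-error feedback on free axes
    force_cmd = kf * (f_des - f)    # force-error feedback on constrained axes
    return (np.eye(3) - S) @ pos_cmd + S @ force_cmd

# Example: press down on a table (force control in z, position control in x/y).
cmd = hybrid_control(
    x=np.array([0.30, 0.00, 0.10]), x_des=np.array([0.35, 0.05, 0.10]),
    f=np.array([0.0, 0.0, -2.0]),   f_des=np.array([0.0, 0.0, -10.0]),
    S=[0, 0, 1],
)
print(cmd)
```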
@Shaofeng_Yin Learn more: This post on Substack: https://t.co/KN4hfcIEWm Project Page: https://t.co/l2ScSmzA5H arXiv:
arxiv.org
Humanoid loco-manipulation in unstructured environments demands tight integration of egocentric perception and whole-body control. However, existing approaches either depend on external motion...
Robots must often be able to move around and interact with objects in previously unseen environments to be useful. And the interaction part is important: to do this, they must be able to perceive and act on the world using onboard sensing. Enter VisualMimic. @Shaofeng_Yin
Full episode dropping soon! Geeking out with @BaoxiongJ on UniFP: Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation https://t.co/iLEwnVoTjx Co-hosted by @micoolcho @chris_j_paxton
To learn more, check out these links: Project Page: https://t.co/fTqS3s7fpL arXiv:
arxiv.org
Humanoid whole-body loco-manipulation promises transformative capabilities for daily service and warehouse tasks. While recent advances in general motion tracking (GMT) have enabled humanoids to...
For robots to be useful, they can't just dance: they must be able to physically interact with the world around them. Unfortunately, the sorts of motion tracking policies you see performing dancing or martial arts are not really capable of the kind of precise, forceful