RoboPapers

@RoboPapers

Followers: 3K
Following: 15
Media: 171
Statuses: 181

@chris_j_paxton, @micoolcho & @DJiafei geeking out weekly with authors of robotics AI papers. On YouTube / X / Spotify / Substack

Joined February 2025
@RoboPapers
RoboPapers
7 hours
Full episode dropping soon! Geeking out with Jacob Berg on Semantic World Models https://t.co/7r0r5IBgom Co-hosted by @micoolcho @chris_j_paxton
0
5
15
@RoboPapers
RoboPapers
10 hours
@_wenlixiao Learn more: Project Page: https://t.co/czUGmtxy7d arXiv: https://t.co/0iWH9yQSwR Thread on X:
@_wenlixiao
Wenli Xiao 🖐️ NeurIPS
1 month
What if robots could improve themselves by learning from their own failures in the real world? Introducing PLD (Probe, Learn, Distill), a recipe that enables Vision-Language-Action (VLA) models to self-improve on high-precision manipulation tasks. PLD
0
1
2
@RoboPapers
RoboPapers
10 hours
On their own, vision-language-action models are powerful tools for general robot skills that show impressive generalization. However, they don't achieve useful levels of reliability on valuable manipulation tasks. @_wenlixiao teaches us one way to achieve this reliability:
1
7
33
@RoboPapers
RoboPapers
2 days
Full episode dropping soon! Geeking out with @_wenlixiao on Probe, Learn, Distill: Self-Improving Vision-Language-Action Models with Data Generation via Residual RL https://t.co/czUGmtx0hF Co-hosted by @micoolcho @chris_j_paxton
1
5
31
@RoboPapers
RoboPapers
2 days
Robotics, as we know, has a data problem. Many workarounds have been proposed, but one of the most important things is simply to collect a large amount of real-robot data, something very difficult to do, especially for mobile humanoids. Enter Humanoid Everyday, which provides a large,
1
3
34
@RoboPapers
RoboPapers
4 days
Full episode dropping soon! Geeking out with @zhenyuzhao123, Hongyi Jing, Xiawei Liu, @PointsCoder @yuewang314 on Humanoid Everyday: A Comprehensive Robotic Dataset for Open-World Humanoid Manipulation https://t.co/cuj68kR0td Co-hosted by @micoolcho @chris_j_paxton
1
8
16
@RoboPapers
RoboPapers
4 days
Collecting robot teleoperation data for mobile manipulation is incredibly time consuming, even more so than collecting teleoperation data for a stationary manipulator. Fortunately, @LawrenceZhu22 and @PranavKuppili have a solution: EMMA, or Egocentric Mobile MAnipulation.
1
4
34
@RoboPapers
RoboPapers
7 days
Full episode dropping soon! Geeking out with @LawrenceZhu22 @PranavKuppili on EMMA: Scaling Mobile Manipulation via Egocentric Human Data https://t.co/1ssjFyaqoW Co-hosted by @micoolcho @chris_j_paxton
0
3
11
@RoboPapers
RoboPapers
7 days
Robots need to be able to apply pressure and make contact with objects in order to accomplish their tasks. From compliance, to working safely around humans, to whole-body manipulation of heavy objects, combining force and position control can dramatically expand the
0
11
58
@RoboPapers
RoboPapers
8 days
Robots must often be able to move around and interact with objects in previously unseen environments to be useful. And the interaction part matters: to do this, they must be able to perceive and act on the world using only onboard sensing. Enter VisualMimic. @Shaofeng_Yin
1
6
20
@RoboPapers
RoboPapers
9 days
Full episode dropping soon! Geeking out with @BaoxiongJ on UniFP: Learning a Unified Policy for Position and Force Control in Legged Loco-Manipulation https://t.co/iLEwnVoTjx Co-hosted by @micoolcho @chris_j_paxton
0
5
12
@RoboPapers
RoboPapers
10 days
For robots to be useful, they can't just dance; they must be able to physically interact with the world around them. Unfortunately, the sorts of motion-tracking policies you see performing dancing or martial arts are not really capable of the kind of precise, forceful
3
12
46