Shuo Sha (@shashuo0104)
Undergrad at @Columbia #Robotics
New York, USA · Joined July 2024
48 Followers · 14 Following · 5 Media · 19 Statuses
[1/5] Fine-grained teleop is slow, error-prone, and frustrating even for experts. We introduce a real2sim2real shared autonomy framework that learns a residual copilot for low-level corrections. It enables: • fine-grained teleop for novices • a copilot learned from <5 min of
Thanks Kaifeng for sharing!! As an embodiment-agnostic framework, I'm also curious to see how Residual Copilot might apply to other robotic platforms.
Despite the success of whole body RL for humanoid teleoperation, teleoperation for manipulation (parallel jaw gripper) has mostly just been direct joint space or Cartesian space mapping. This new work from our lab shows that for contact-rich manipulation such as industrial
Teleoperation is often used to scale robot data collection. But anyone who has actually done fine-grained teleop knows how hard it is, even for experienced operators: precise alignment, axis-constrained rotation, contact regulation, and millimeter-level control are extremely
Huge thanks to all my collaborators @YXWangBot @binghao_huang for making this work possible!! Grateful to my advisors @antoniloq @YunzhuLiYZ for their guidance!!
Links:
Website: https://t.co/q6eeJ0s8Dv
Sim code: https://t.co/vsyMlalOCH
Deploy code: https://t.co/lmfHRQrtvO
Paper:
[5/5] Challenge #2: Assistance must respect user control authority. A residual copilot makes only local corrections -> preserving user intent. We visualize the learned residual behaviors in simulation.
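A minimal sketch of the idea of a bounded residual correction, assuming the copilot outputs a small Cartesian delta that is clipped before being added to the operator's command; the variable names, bound, and interface here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: blend an operator command with a learned residual
# correction while keeping the operator in charge. The clipping bound and
# the function signature are illustrative, not the paper's actual API.
MAX_RESIDUAL = 0.005  # metres per step; kept small so corrections stay local

def assisted_command(user_delta: np.ndarray,
                     copilot_residual: np.ndarray) -> np.ndarray:
    """Return the commanded end-effector delta for one control step."""
    # Bound the copilot's contribution so it refines, but never overrides,
    # the operator's intended motion.
    residual = np.clip(copilot_residual, -MAX_RESIDUAL, MAX_RESIDUAL)
    return user_delta + residual

# Example: operator moves 2 cm along x, copilot nudges a few mm toward alignment.
cmd = assisted_command(np.array([0.02, 0.0, 0.0]),
                       np.array([0.0, 0.003, -0.001]))
```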
[4/5] Challenge #1: Learning assistive behaviors in simulation requires a human surrogate that: • uses very little data • behaves human-like in unseen states. Our solution: a simple yet surprisingly effective kNN human surrogate.
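One way a kNN human surrogate can work, as a rough sketch: store the few demonstrated (state, action) pairs and, at an unseen state, return a distance-weighted average of the actions taken at the nearest stored states. The dimensions, k value, and use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Illustrative kNN "human surrogate": given a few minutes of teleop data,
# answer "what would the human likely command here?" for states reached in sim.
class KNNHumanSurrogate:
    def __init__(self, states: np.ndarray, actions: np.ndarray, k: int = 5):
        # states:  (N, state_dim) robot/object states recorded during teleop
        # actions: (N, action_dim) operator commands recorded at those states
        self.actions = actions
        self.k = min(k, len(states))
        self.index = NearestNeighbors(n_neighbors=self.k).fit(states)

    def __call__(self, state: np.ndarray) -> np.ndarray:
        # Distance-weighted average of the actions at the k closest states.
        dists, idx = self.index.kneighbors(state[None, :])
        weights = 1.0 / (dists[0] + 1e-6)
        weights /= weights.sum()
        return (weights[:, None] * self.actions[idx[0]]).sum(axis=0)
```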
[3/5] Copilot-assisted teleop produces better demonstrations. With the same number of demos, imitation policies trained on copilot data perform significantly better. The copilot improves the structure and consistency of successful trajectories.
[2/5] The residual copilot improves performance across assembly tasks:
• Nut Threading: 40% -> 100% success
• Gear Meshing: 16.4s -> 10.9s completion
• Peg Insertion: 30.3s -> 18.5s completion
Observed across different subjects in our user study.
Pixel-based world models are clearly the most scalable path, and @YXWangBot is proving that we don't have to sacrifice physical consistency to get there! The visual appearances are so convincing that I keep getting fooled on which ones are real vs generated.
1/ World models are getting popular in robotics. But there's a big problem: most are slow and break physical consistency over long horizons.
2/ Today we're releasing Interactive World Simulator: an action-conditioned world model that supports stable long-horizon interaction.
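For context, a rough sketch of the interaction pattern an action-conditioned world model implies: predict the next observation from the current observation plus an action, and roll that prediction forward step by step. The class and method names below are placeholders, not the released system's API.

```python
import numpy as np

# Placeholder interface sketch for an action-conditioned world model rollout.
# `WorldModel.predict` is a hypothetical name, not the actual API.
class WorldModel:
    def predict(self, obs: np.ndarray, action: np.ndarray) -> np.ndarray:
        # A real model is a learned network; we echo the observation here
        # only so the rollout loop below is runnable.
        return obs

def rollout(model: WorldModel, obs: np.ndarray, actions):
    """Roll the model forward one action at a time and collect predictions."""
    trajectory = [obs]
    for a in actions:
        obs = model.predict(obs, a)  # long-horizon stability means errors here
        trajectory.append(obs)       # must not compound into broken physics
    return trajectory
```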
Testing robot policies in the real world is expensive, slow, and hard to reproduce. But @Columbia and @SceniXai just built a simulator that actually works. They
If you are working on real-to-sim, digital twin simulation, or policy evaluation, you should check out our fully open-sourced code base. Lots of handy tools for building Gaussian Splatting simulators and interacting with them! https://t.co/Ovpm0hFdYn Will continue to be
[github.com] kywind/real2sim-eval: Open-source code of the paper "Real-to-Sim Robot Policy Evaluation with Gaussian Splatting Simulation of Soft-Body Interactions."
Thanks @kaiwynd! Super excited to share our unified policy training & inference repo -- used to train Diffusion Policy, ACT, Pi0, and SmolVLA, and to roll out policies in both the real world and our real2sim simulator (available at https://t.co/Kmya0tWIxZ) policy training repo:
Also check out our policy training repo: supporting two main frameworks, Lerobot @LeRobotHF and OpenPI @physical_int, with a unified data & inference interface. https://t.co/zht92pZ47Z Made by my amazing collaborator @shashuo0104
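A hypothetical sketch of what a unified inference interface across frameworks can look like: each backend (e.g. a LeRobot policy or an OpenPI policy) is wrapped behind the same select-action call, so the same rollout loop runs against the real robot or the real2sim simulator. Class and method names are illustrative assumptions, not the repo's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict
import numpy as np

# Illustrative-only unified inference interface; wrapper and method names
# are assumptions, not the actual API of the linked repo.
class PolicyBackend(ABC):
    @abstractmethod
    def select_action(self, obs: Dict[str, np.ndarray]) -> np.ndarray:
        """Map observations (images, proprioception) to a robot action."""

class LeRobotBackend(PolicyBackend):
    def __init__(self, policy: Any):
        self.policy = policy  # e.g. a trained ACT / Diffusion Policy model

    def select_action(self, obs):
        return self.policy(obs)  # placeholder call; the real API differs

class OpenPIBackend(PolicyBackend):
    def __init__(self, policy: Any):
        self.policy = policy  # e.g. a Pi0 checkpoint

    def select_action(self, obs):
        return self.policy(obs)  # placeholder call; the real API differs

def rollout(policy: PolicyBackend, env, steps: int = 200):
    """Same loop whether `env` is the real robot or the real2sim simulator."""
    obs = env.reset()
    for _ in range(steps):
        action = policy.select_action(obs)
        obs = env.step(action)  # env is assumed to return the next observation
```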
I want to call out one of our most important references: SIMPLER ( https://t.co/8xoHzaF46Z), by @XuanlinLi2, @kylehkhsu, @Jiayuan_Gu, @jiajunwu_cs, @haosu_twitr, @QuanVng, @xiao_ted, and colleagues, which laid the foundation for using simulation for policy evaluation through a
Announcing one of the most exciting works from us this year on **scalable robot policy evaluation through real-to-sim transfer**, moving toward a scalable evaluation engine with structured world models that capture the appearance, geometry, and dynamics of environments
Evaluating robot policies in the real world is slow, expensive, and hard to scale. During my internship at @SceniXai this summer, we had many discussions around the two key questions: how accurate must a simulator be for evaluation to be meaningful, and how do we get there?