USC RESL Profile
USC RESL

@uscresl

Followers
328
Following
23
Media
36
Statuses
93

Robotic Embedded Systems Laboratory @USC

Los Angeles, CA
Joined June 2009
@uscresl
USC RESL
5 months
Excited to announce our #RSS2025 workshop: Resource Constrained Robotics. Discover how robotics thrives under constraints, from limited compute to smarter algorithms enabling real-time, low-power autonomy. 📅 June 21 | 📍 Los Angeles, CA 🔗
1
6
7
@uscresl
USC RESL
10 months
Real-world VoxAct-B results on Open Jar and Open Drawer. 10/N
1
0
0
@uscresl
USC RESL
10 months
Right-acting, left-stabilizing evaluation rollouts from VoxAct-B. 9/N
1
0
0
@uscresl
USC RESL
10 months
Left-acting, right-stabilizing evaluation rollouts from VoxAct-B. 8/N
1
0
0
@uscresl
USC RESL
10 months
We compare against several strong baseline methods: ACT, Diffusion Policy, and VoxPoser. Each method is trained on 10 or 100 demonstrations, with one arm stabilizing and the other arm manipulating the object. 7/N
1
0
0
@uscresl
USC RESL
10 months
For bimanual manipulation, we exploit a discretized action space that predicts the next best voxel, formulating a system with separate acting and stabilizing policies. This enables more efficient learning from multi-modal demonstrations than a joint-space control policy. 6/N
1
0
0
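The acting/stabilizing split described above can be sketched as follows. This is a hypothetical illustration of per-arm "next best voxel" selection, not the paper's actual code: assume each policy outputs a score volume over the voxel grid, and each arm's target is simply its own argmax voxel.

```python
import numpy as np

def next_best_voxels(acting_scores, stabilizing_scores):
    """Pick the next-best voxel independently for each arm.

    acting_scores / stabilizing_scores: (D, H, W) per-voxel score
    volumes produced by the acting and stabilizing policies.
    Returns the (d, h, w) index of the highest-scoring voxel per arm.
    """
    act_idx = np.unravel_index(np.argmax(acting_scores),
                               acting_scores.shape)
    stab_idx = np.unravel_index(np.argmax(stabilizing_scores),
                                stabilizing_scores.shape)
    return act_idx, stab_idx
```

Because each arm discretizes its decision to a single voxel, the two policies can be trained and queried independently, which is one way a decomposed system can be more sample-efficient than a single joint-space policy.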
@uscresl
USC RESL
10 months
We use the object’s position with RGB-D images to reconstruct a voxel grid based on a hyperparameter α that determines the size of the crop, allowing us to zoom into the more important region of interest. 👇 Effects of α on the voxel resolution with the same number of voxels. 5/N
1
0
0
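The effect of α can be sketched with a few lines of arithmetic (hypothetical function and parameter names, not the paper's implementation): with a fixed number of voxels per side, shrinking the crop via α makes each voxel cover less physical space, so the effective resolution inside the crop increases.

```python
def cropped_voxel_size(workspace_extent_m, alpha, voxels_per_side=100):
    """Edge length (meters) of one voxel after cropping.

    A crop of size alpha * workspace_extent_m is discretized into a
    fixed voxels_per_side^3 grid, so smaller alpha -> finer voxels.
    """
    crop_extent = alpha * workspace_extent_m
    return crop_extent / voxels_per_side

# With a 1.0 m workspace and a 100^3 grid:
full = cropped_voxel_size(1.0, alpha=1.0)    # 0.01 m per voxel
zoom = cropped_voxel_size(1.0, alpha=0.25)   # 0.0025 m per voxel
```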
@uscresl
USC RESL
10 months
The Vision Language Models (VLMs) output a segmentation mask of the object. We use its centroid along with point cloud data to recover the object’s pose, which determines each arm’s task-specific role and the language goal. 👇 VLM pipeline. 4/N
1
0
0
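One common way to go from a segmentation mask to an object position, as the pipeline above describes, is to back-project the mask centroid through the depth image with a standard pinhole camera model. A minimal sketch, assuming aligned mask/depth images and known intrinsics (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def object_position_from_mask(mask, depth, fx, fy, cx, cy):
    """Back-project a segmentation mask's centroid to a 3D camera-frame point.

    mask:  (H, W) boolean segmentation of the object
    depth: (H, W) depth image in meters, aligned with the mask
    fx, fy, cx, cy: pinhole camera intrinsics
    """
    vs, us = np.nonzero(mask)        # pixel rows/cols inside the mask
    u, v = us.mean(), vs.mean()      # mask centroid in pixel space
    z = depth[vs, us].mean()         # mean depth over the mask
    x = (u - cx) * z / fx            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```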
@uscresl
USC RESL
10 months
In this work (VoxAct-B), we retain the spatial equivariance benefits of voxel representations but reduce the cost of processing voxels by “zooming” into part of the voxel grid. It takes RGB-D images, two language goals, and proprioception data of two robot arms as input. 3/N
1
0
0
@uscresl
USC RESL
10 months
Voxel representations, when coupled with discretized action spaces, can increase sample efficiency and generalization by introducing spatial equivariance into a learned system; however, processing them is computationally expensive. 2/N
1
0
0
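The spatial-equivariance point above can be made concrete with the discretization step itself (an illustrative sketch, not the paper's code): mapping a continuous position into a voxel index commutes with translation, so shifting the scene and the grid origin together leaves the predicted voxel unchanged.

```python
import numpy as np

def position_to_voxel(pos, grid_origin, voxel_size):
    """Map a continuous 3D position to an integer voxel index."""
    offset = np.asarray(pos, dtype=float) - np.asarray(grid_origin, dtype=float)
    return tuple(np.floor(offset / voxel_size).astype(int))
```

Translating both the position and the grid origin by the same vector yields the same voxel index, which is the translation equivariance that makes voxel-based action spaces sample-efficient; the cost is that scoring every voxel in a dense grid is computationally expensive, motivating the zoom-in trick above.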
@uscresl
USC RESL
10 months
Tasks requiring two-hand coordination and fine-grained manipulation remain challenging for current robotic systems. Our CoRL 2024 paper proposes a sample-efficient, language-conditioned, voxel-based method that utilizes Vision Language Models to address these challenges. 🧵👇
1
6
51
@uscresl
USC RESL
2 years
Collision Avoidance and Navigation for a Quadrotor Swarm Using End-to-end Deep Reinforcement Learning. Site: arXiv: A thread 🧵, 5/5.
0
0
2
@uscresl
USC RESL
2 years
Conditionally Combining Robot Skills using Large Language Models. Code: arXiv: A thread 🧵, 4/5.
github.com/krzentner/language-world
1
0
2
@uscresl
USC RESL
2 years
CppFlow: Generative Inverse Kinematics for Efficient and Robust Cartesian Path Planning. Site: arXiv: A thread 🧵, 3/5.
1
0
1
@uscresl
USC RESL
2 years
HyperPPO: A scalable method for finding small policies for robotic control. Site: arXiv: A thread 🧵, 2/5.
1
0
3
@uscresl
USC RESL
2 years
RESL will be presenting 4 papers at #ICRA2024! Congratulations to all the authors! Presenters: @hegde_shashank, @Zhehui_Huang, @JeremySMorgan3, @krzentner, @gauravsukhatme! @ieee_ras_icra @CSatUSC @USCViterbi @USCMingHsiehEE. A thread 🧵, 1/5
1
8
26
@uscresl
USC RESL
2 years
Check out our poster at CoRL, presented by @arthur801031 @GautamSalhotra on Tue Nov 7! Website, code, videos: PDF: @gauravsukhatme @CSatUSC. End thread 🧵, 9/9.
0
1
3
@uscresl
USC RESL
2 years
Of course, two-picker to one-picker is not the only cross-morphology transfer enabled by MAIL. Here we show instances of 3-to-2, 2-to-1, and 3-to-1 transfer on a toy rearrangement task. This could extend to n-to-m pickers in general. 8/9
1
0
0