Sai Haneesh Allu

@saihaneesh_allu

Followers
52
Following
225
Media
8
Statuses
67

PhD Candidate @IRVLUTD @UT_Dallas. Research focus on #RobotLearning for Mobile Manipulation. Actively seeking internship opportunities for Spring/Summer.

Dallas, TX
Joined July 2024
@saihaneesh_allu
Sai Haneesh Allu
1 hour
Excited to finally share our system #HRT1, which transfers a skill from a single human demonstration to a robot, enabling fully autonomous mobile manipulation in new environments with NO training. One human demo → trajectory transfer → real-world execution.
@IRVLUTD
Intelligent Robotics and Vision Lab @ UTDallas
2 hours
Many #robot_learning works use human videos but need lots of data/retraining. We present #HRT1 — a robot learns from just one human video and performs mobile manipulation tasks in new environments with relocated objects — via trajectory transfer.🔗 https://t.co/zbOGWrSHAF (1/11)
1
1
1
@YuXiang_IRVL
Yu Xiang
33 minutes
This is how we use human videos for #RobotManipulation. Presenting HRT1: One-Shot Human-to-Robot Trajectory Transfer https://t.co/BnNxWTQyZG
1️⃣ Collect demos
2️⃣ Extract 3D hand motion
3️⃣ Transfer to robot
4️⃣ Optimize base & trajectory
✅ One-shot imitation for mobile manipulation 👇
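The core of step 3️⃣ above can be sketched as a rigid object-frame re-targeting: express the demo hand trajectory relative to the object's demo pose, then map it to the object's pose in the new scene. This is a minimal illustrative sketch, not the authors' code — the function name, pose conventions, and toy data are all assumptions for the example.

```python
import numpy as np

def transfer_trajectory(hand_traj, T_demo_obj, T_new_obj):
    """Re-target a demo hand trajectory to a relocated object.

    hand_traj: (N, 3) waypoints in the demo world frame.
    T_demo_obj, T_new_obj: 4x4 homogeneous object poses in the demo
    and new scenes. The trajectory is carried into the object frame,
    then out to the new object pose -- the basic idea behind one-shot
    trajectory transfer (illustrative only, not the HRT1 implementation).
    """
    # Rigid transform mapping the demo world frame to the new world frame
    T_rel = T_new_obj @ np.linalg.inv(T_demo_obj)
    # Homogeneous coordinates: (N, 3) -> (N, 4)
    homo = np.hstack([hand_traj, np.ones((len(hand_traj), 1))])
    return (homo @ T_rel.T)[:, :3]

# Toy example: the object is translated by (1, 0, 0) in the new scene,
# so every waypoint shifts by the same offset.
traj = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.4]])
T_demo = np.eye(4)
T_new = np.eye(4)
T_new[0, 3] = 1.0
print(transfer_trajectory(traj, T_demo, T_new))
```

In a real system the re-targeted waypoints would then feed the base-placement and trajectory optimization of step 4️⃣; rotations of the object are handled by the same 4x4 transform.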
@saihaneesh_allu
Sai Haneesh Allu
54 minutes
Paper: https://t.co/RjNFiWxqEQ Code: https://t.co/h5bNsASMHp Web page: https://t.co/G2eMhYdQag Grateful for the collaboration with @jishnu_jaykumar and the guidance of my advisor @YuXiang_IRVL
0
0
1
@LuccaChiang
Guangqi Jiang
14 days
Ever want to enjoy all the privileged information in sim while seamlessly transferring to the real world? How can we correct policy mistakes after deployment? 👉 Introducing GSWorld, a real2sim2real photo-realistic simulator with interaction physics and fully open-sourced code.
6
63
270
@mateoguaman
Mateo Guaman Castro
14 days
How can we create a single navigation policy that works for different robots in diverse environments AND can reach navigation goals with high precision? Happy to share our new paper, "VAMOS: A Hierarchical Vision-Language-Action Model for Capability-Modulated and Steerable
4
40
119
@IRVLUTD
Intelligent Robotics and Vision Lab @ UTDallas
21 days
We have two papers accepted to #IROS2025 🎉 Unfortunately, due to EOGA48 in Texas, we won’t be able to attend the conference. Please feel free to reach out to the authors if you’re interested in our work!
0
4
12
@YuXiang_IRVL
Yu Xiang
2 months
1
1
5
@YuXiang_IRVL
Yu Xiang
2 months
We @IRVLUTD had some visitors today: the FTC robotics team from Flower Mound High School. We showed several demos:
• Mobile manipulation with Fetch
• Teleoperation with SO-101 and Koch
• In-hand manipulation with LEAP
Exciting to see how curious and engaged they were 🤖 #STEM #Robotics
5
1
14
@JiahuiZhang__32
Jiahui Zhang
2 months
Thrilled to share our work ReWiND has been accepted at CoRL 2025 🎉 It will be the first oral talk on Sunday at 9AM KST — you won’t want to miss it! Unfortunately, I won’t be able to travel to Korea in person, but @Jesse_Y_Zhang and @_abraranwar will be presenting our work. 🧵
@Jesse_Y_Zhang
Jesse Zhang
2 months
Thrilled to share that ReWiND kicks off CoRL as the very first oral talk! 🥳 📅 Sunday, 9AM — don’t miss it! @_abraranwar and I dive deeper into specializing robot policies in our USC RASC blog post (feat. ReWiND + related work): 👉
1
5
17
@saihaneesh_allu
Sai Haneesh Allu
3 months
We are building a reliable system that lets robots acquire human skills. Some exciting results are coming soon! #irvl #robotics #mobilemanipulation
@YuXiang_IRVL
Yu Xiang
3 months
Teaching our robot new skills from human demonstration videos — excited to share more soon with @saihaneesh_allu @jis_padalunkal @IRVLUTD
0
0
4
@goyal__pramod
Pramod Goyal
4 months
Link: https://t.co/NCUsYVCh1Z By: Peter Bloem
1
32
195
@ahadj0
ahad
5 months
i’m happy to present chessbot. a robot that can move your chess piece from place to place. we built this in under 24 hours at the @LeRobotHF hackathon. here’s a quick demo!
46
42
508
@tomssilver
Tom Silver
6 months
#CoRL2025 reviewers and authors beware: lower is better!
0
2
46
@ryan_hoque
Ryan Hoque
6 months
Imitation learning has a data scarcity problem. Introducing EgoDex from Apple, the largest and most diverse dataset of dexterous human manipulation to date — 829 hours of egocentric video + paired 3D hand poses across 194 tasks. Now on arxiv: https://t.co/ogu7lDIsWV (1/4)
15
94
609
@YuXiang_IRVL
Yu Xiang
6 months
Finished writing the reports for 6 #IROS2025 submissions. Thanks to the reviewers who accepted my invitations! ICRA and IROS have relatively long review cycles and lack a rebuttal stage — a point worth considering for the robotics community @ieeeras
1
3
28
@HaoranGeng2
Haoran Geng
7 months
In my past research experience, finding or developing an appropriate simulation environment, dataset, and benchmark has always been a challenge. Missing features, limited support, or unexpected bugs often occupied my days and nights. Moreover, current simulation platforms are
5
47
239
@DJiafei
Jiafei Duan
9 months
Can we build a generalist robotic policy that doesn’t just memorize training data and regurgitate it during test time, but instead remembers past actions as memory and conditions its decisions on them?🤖💡 Introducing SAM2Act—a multi-view robotic transformer-based policy that
7
87
425
@SkydioHQ
Skydio
7 months
Honk if you ❤️ American drones! The Skydio Dock for X10 Roadshow is bringing autonomous Drone as First Responder capabilities to public safety agencies nationwide. After debuting at #AxonWeek25 in PHX, we’re eastbound to ATL. 🇺🇸 📍 Where should we go next?
2
1
6
@_amirbar
Amir Bar
11 months
Happy to share our new work on Navigation World Models! 🔥🔥 Navigation is a fundamental skill of agents with visual-motor capabilities. We train a single World Model across multiple environments and diverse agent data. w/ @GaoyueZhou, Danny Tran, @trevordarrell and @ylecun.
5
61
276
@RemiCadene
Remi Cadene
7 months
Thanks Steven Palma! We will update the README to highlight the 3 versions of LeKiwi:
- cables only (no battery)
- remote control with Raspberry Pi (inference done on laptop)
- on-board compute with Jetson Nano
https://t.co/q3aMR50fRR
github.com
LeKiwi - Low-Cost Mobile Manipulator. Contribute to SIGRobotics-UIUC/LeKiwi development by creating an account on GitHub.
3
4
33