Arjun Gupta
@arjun__gupta
Followers
218
Following
302
Media
6
Statuses
79
PhD Student at UIUC
Joined July 2020
Thrilled to share our work AlphaOne🔥 at @emnlpmeeting 2025! @jyzhang1208 and I will be presenting this work online; please feel free to join and talk to us!!! 📆Date: 8:00-9:00, Nov 7, Friday (Beijing Standard Time, UTC+8) 📺Session: Gather Session 4
💥Excited to share our paper “AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time” at #EMNLP2025 🚀 this Friday, Nov. 7, during Gather Session 4. Come say hi virtually!👋 📄Paper: https://t.co/CksN8hEuoF 🪩Website & Code: https://t.co/AwMLAQFvtz
#AI #LLMs #Reasoning
0
1
6
I rarely get to share the specific embodiments I work with every day, but I'm thrilled to finally see this one go public. Meet Amazon's newest robotic system: Blue Jay!
12
19
160
Synthesizing robot data in simulation is a promising way to scale up. While most prior work focuses on static manipulation, check out our new work MoMaGen, led by @ChengshuEricLi @mengdixu_ @ArpitBahety @Hang_Yin_, where we extend data synthesis to mobile manipulation.
We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoped robot trajectory into 1000s of generated trajectories automatically.🚀 Website: https://t.co/DYKvqY4bII arXiv: https://t.co/lDffi0FXHl
0
2
30
We are excited to release MoMaGen, a data generation method for multi-step bimanual mobile manipulation. MoMaGen turns 1 human-teleoped robot trajectory into 1000s of generated trajectories automatically.🚀 Website: https://t.co/DYKvqY4bII arXiv: https://t.co/lDffi0FXHl
1
36
160
What if robots could decide when to see and when to feel like humans? We built a system that lets them. Multi-Modal Policy Consensus learns to balance vision 👁️ and touch ✋. 🌐 Project: https://t.co/4UIr8hwJHy 1/N
4
29
150
💾 Data from across the country. 🚁 No access to the drone. 🤖 Still works, zero-shot. UMI-on-Air makes large-scale data collection → real-world deployment possible.
1
4
37
Bored of working on toy tasks in the lab? We are solving robot manipulation at massive real-world scale with Vulcan at @amazon. I am at #CoRL2025 and my team is looking for PhD research interns, postdocs, and scientists https://t.co/cXGrLDIGEi
aboutamazon.com
Built on advances in robotics, engineering, and physical AI, Vulcan is making our workers’ jobs easier and safer while moving orders more efficiently.
1
2
16
Don’t miss out on our #CoRL2025 paper 👉 https://t.co/7lAYq0xsXg Tool-as-Interface: Learning Robot Policies from Observing Human Tool Use. Robots learn robust and generalizable manipulation skills directly from human tool-use videos, bridging the embodiment gap without
How can we train robot policies without any robot data—just using two-view videos of humans manipulating tools? Check out our new paper: "Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning" Honored to be a Best Paper Finalist at the
1
2
5
How do you build a robot that can open unfamiliar objects in new places? This study put mobile manipulation systems through 100+ real-world tests and found that perception, not precision, is the real challenge.🤖 ▶️ https://t.co/QErRNfRdQV 📑 https://t.co/K5M8BrnWCZ
0
6
33
🚀 Introducing RIGVid: Robots Imitating Generated Videos! Robots can now perform complex tasks—pouring, wiping, mixing—just by imitating generated videos, purely zero-shot! No teleop. No OpenX/DROID/Ego4D. No videos of human demonstrations. Only AI-generated video demos 🧵👇
3
34
147
Come by the @GoogleDeepMind booth at the @RoboticsSciSys conference in LA! We’re demoing Gemini Robotics On-Device live; come check it out!
Excited to release Gemini Robotics On-Device and a bunch of goodies today 🍬 on-device VLA that you can run on a GPU 🍬 open-source MuJoCo sim (& benchmark) for bimanual dexterity 🍬 broadening access to these models to academics and developers https://t.co/mSjXTLuOeu
1
9
73
How can we build mobile manipulation systems that generalize to novel objects and environments? Come check out MOSART at #RSS2025! Paper: https://t.co/s60i7c5nhp Project webpage: https://t.co/i2wphF9Ehl Code: https://t.co/YeKL8fM8FM
0
9
40
Workshop on Mobile Manipulation in #RSS2025 kicking off with a talk from @leto__jean! Come by EEB 132 if you’re here in person, or join us on Zoom (link on the website)
1
1
15
🚀 #RSS2025 sneak peek! We teach robots to precisely shimmy objects with fingertip micro-vibrations—no regrasp, no fixtures. 🎶⚙️ Watch Vib2Move in action 👇 https://t.co/XXooCIeKwR
#robotics #dexterousManipulation
vib2move.github.io
VIB2MOVE is a novel approach for in-hand object reconfiguration that uses fingertip micro-vibrations and gravity to precisely reposition planar objects.
1
7
44
Soaking up the sun at the Robotics: Science and Systems conference in Los Angeles this weekend? Stop by the Hello Robot booth to say hi and get a hands-on look at Stretch! Hope to see you there 😎 https://t.co/Xz0XRuIDoW
1
3
18
This was a key feature in enabling DexterityGen, our teleop system that can support tasks like using a screwdriver. Led by @zhaohengyin, now open source.
Just open-sourced Geometric Retargeting (GeoRT) — the kinematic retargeting module behind DexterityGen. Includes tools for importing custom hands. Give it a try: https://t.co/MxSuitRaDM A software by @berkeley_ai and @AIatMeta. More coming soon.
1
1
7
Our paper, "One-Shot Real-to-Sim via End-to-End Differentiable Simulation and Rendering", was recently published in IEEE RA-L. Our method turns a single RGB-D video of a robot interacting with the environment, along with tactile measurements, into a generalizable world model.
1
5
46
Sparsh-skin, our next iteration of general pretrained touch representations. Skin-like tactile sensing is catching up to the prominent vision-based sensors with the explosion of new dexterous hands. A crucial step in leveraging full-hand sensing; work led by @akashshrm02 🧵👇
Robots need touch for human-like hands to reach the goal of general manipulation. However, approaches today either don’t use tactile sensing or use a specific architecture per tactile task. Can 1 model improve many tactile tasks? 🌟Introducing Sparsh-skin: https://t.co/DgTq9OPMap 1/6
0
1
7
How can we train robot policies without any robot data—just using two-view videos of humans manipulating tools? Check out our new paper: "Tool-as-Interface: Learning Robot Policies from Human Tool Usage through Imitation Learning" Honored to be a Best Paper Finalist at the
sairlab.org
A series of interactive talks on foundation models and neuro-symbolic AI for robotics.
2
2
10
HumanUp has been accepted at #RSS2025! Looking forward to seeing you in LA this June!
🤖 Want to train a humanoid to stand up safely and smoothly? Try HumanUP: Sim-to-Real Humanoid Getting-Up Policy Learning! 🚀 ✨ HumanUP is a two-stage RL framework that enables humanoid robots to stand up from any pose (facing up or down) with stability and safety. Check out
3
1
24