Ishika Singh

@Ishika_S_

Followers
753
Following
1K
Media
12
Statuses
191

PhD student @CSatUSC | @IITKanpur'21 | Research intern @Amazon FAR, prev. @NVIDIAAI SRL | Interested in #EmbodiedAI and #RobotLearning

Los Angeles, CA
Joined February 2019
@Ishika_S_
Ishika Singh
5 months
VLAs have the potential to generalize over scenes and tasks, but require a ton of data to learn robust policies. We introduce OG-VLA, a novel architecture and learning framework that combines the generalization strengths of VLAs with the robustness of 3D-aware policies. 🧵
1
26
145
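The thread above describes OG-VLA only at a high level, so the following is just a minimal sketch of the general pattern it names: fuse a VLM-style vision-language feature with a 3D point-cloud feature and decode an end-effector keypose. All module shapes and names are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the OG-VLA implementation): one way to combine a
# VLM-style backbone with a 3D-aware action head. All shapes/names here
# are illustrative assumptions.
import torch
import torch.nn as nn

class VLABackbone(nn.Module):
    """Stand-in for a pretrained VLM that fuses image + language tokens."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(512, dim)  # pretend 512-d fused tokens

    def forward(self, fused_tokens):             # (B, T, 512)
        return self.proj(fused_tokens).mean(1)   # (B, dim) pooled feature

class PointCloudEncoder(nn.Module):
    """Stand-in 3D encoder: per-point MLP + max-pool (PointNet-style)."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, points):                   # (B, N, 3)
        return self.mlp(points).max(dim=1).values  # (B, dim)

class KeyposeHead(nn.Module):
    """Predicts a 7-DoF end-effector keypose (xyz + quaternion)."""
    def __init__(self, dim=256):
        super().__init__()
        self.head = nn.Linear(2 * dim, 7)

    def forward(self, vlm_feat, pc_feat):
        return self.head(torch.cat([vlm_feat, pc_feat], dim=-1))

# Forward pass on dummy data.
vlm, pc_enc, head = VLABackbone(), PointCloudEncoder(), KeyposeHead()
tokens = torch.randn(2, 32, 512)   # fused vision-language tokens
cloud = torch.randn(2, 1024, 3)    # scene point cloud
keypose = head(vlm(tokens), pc_enc(cloud))  # (2, 7)
```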
@1x_tech
1X
6 days
NEO, the Home Robot. Order today.
7K
10K
70K
@yigitkkorkmaz
Yiğit Korkmaz
13 days
Can Q-learning alone handle continuous actions? Value-based RL (like DQN) is simple & stable, but typically limited to discrete actions. Continuous control usually needs actor-critic methods (DDPG, TD3, SAC) that are powerful but unstable & can get stuck in local optima.
5
15
105
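For context on the question the tweet poses, here is a minimal sketch of the most common workaround it alludes to: discretize the continuous action space into bins so plain Q-learning applies. This is a generic baseline, not the method the thread goes on to propose.

```python
# Hedged sketch: the simplest way to let pure Q-learning touch a
# continuous action space is to discretize it into bins and run
# DQN-style updates over the bins. Toy 1-state, 1-step problem.
import numpy as np

rng = np.random.default_rng(0)
bins = np.linspace(-2.0, 2.0, 21)   # discretized continuous actions
q = np.zeros_like(bins)             # Q-value per action bin
alpha, eps = 0.1, 0.2

def reward(a):
    # Multimodal reward: global optimum near a=1.5, local bump near a=-1.
    return np.exp(-8 * (a - 1.5) ** 2) + 0.6 * np.exp(-8 * (a + 1.0) ** 2)

for step in range(5000):
    i = rng.integers(len(bins)) if rng.random() < eps else int(np.argmax(q))
    r = reward(bins[i]) + 0.05 * rng.standard_normal()  # noisy reward
    q[i] += alpha * (r - q[i])      # 1-step Q-learning update

print("greedy action:", bins[int(np.argmax(q))])  # lands near 1.5
```

The obvious cost, and part of why actor-critic methods exist, is that the number of bins explodes exponentially with action dimensionality.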
@imankitgoyal
Ankit Goyal
19 days
What's the right architecture for a VLA? VLM + custom action heads (π₀)? VLM with special discrete action tokens (OpenVLA)? Custom design on top of the VLM (OpenVLA-OFT)? Or... VLM with ZERO modifications? Just predict action as text. The results will surprise you. VLA-0:
18
71
527
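A minimal sketch of the "predict action as text" interface the tweet describes: serialize continuous actions into strings a VLM can emit, then parse them back. The exact text format below is my assumption, not VLA-0's.

```python
# Hedged sketch of an actions-as-text interface. The 'ACTION:' prefix
# and fixed-precision float format are assumptions for illustration.
import re

def action_to_text(action, decimals=3):
    """[0.12, -0.5, 0.9] -> 'ACTION: 0.120 -0.500 0.900'"""
    return "ACTION: " + " ".join(f"{a:.{decimals}f}" for a in action)

def text_to_action(text, dim=7):
    """Parse the first `dim` floats after 'ACTION:'; None if malformed."""
    m = re.search(r"ACTION:\s*((?:-?\d+\.\d+\s*)+)", text)
    if m is None:
        return None
    vals = [float(x) for x in m.group(1).split()]
    return vals[:dim] if len(vals) >= dim else None

print(text_to_action("ACTION: 0.120 -0.500 0.900 0.0 0.0 0.0 1.0"))
```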
@dimensionalos
Dimensional
23 days
Open anything. On any arm. 99% success rate.
13
7
75
@larsankile
Lars Ankile
1 month
How can we enable finetuning of humanoid manipulation policies, directly in the real world? In our new paper, Residual Off-Policy RL for Finetuning BC Policies, we demonstrate real-world RL on a bimanual humanoid with 5-fingered hands (29 DoF) and improve pre-trained policies.
8
50
228
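A minimal sketch of the residual idea named in the paper title: the executed action is the frozen BC policy's action plus a small learned correction trained by RL. The class names, the linear residual, and the 0.1 scale are assumptions for illustration; the paper's exact setup may differ.

```python
# Hedged sketch of residual RL on top of behavior cloning: compose a
# frozen BC action with a small, learnable correction, then clip.
import numpy as np

class FrozenBCPolicy:
    def act(self, obs):                 # stand-in pretrained BC policy
        return np.tanh(obs[:29])        # 29-DoF action, as in the tweet

class ResidualPolicy:
    def __init__(self, scale=0.1):
        self.scale = scale              # keep corrections small
        self.w = np.zeros((29, 29))     # toy linear residual, trained by RL

    def act(self, obs):
        return self.scale * np.tanh(self.w @ obs[:29])

bc, res = FrozenBCPolicy(), ResidualPolicy()
obs = np.random.default_rng(0).standard_normal(64)
action = np.clip(bc.act(obs) + res.act(obs), -1.0, 1.0)  # composed action
```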
@Weiyu_Liu_
Weiyu Liu
1 month
I’m at #CoRL2025 in Seoul this week! I’m looking for students to join my lab next year, and also for folks excited to build robotic foundation models at a startup. If you’re into generalization, planning and reasoning, or robots that use language, let's chat!
2
7
49
@siddancha
Siddharth Ancha
6 months
Diffusion/flow policies 🤖 sample a “trajectory of trajectories” — a diffusion/flow trajectory of action trajectories. Seems wasteful? Presenting Streaming Flow Policy that simplifies and speeds up diffusion/flow policies by treating action trajectories as flow trajectories! 🌐
2
23
138
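A toy sketch of the core idea as the tweet states it: integrate a velocity field directly in action space and stream out each integration step as the next action, rather than denoising a whole chunk of actions before executing any of them. The velocity field here is a hand-written stand-in for a learned one.

```python
# Hedged sketch: treat the action trajectory itself as the flow
# trajectory. Each Euler step is yielded immediately for execution.
import numpy as np

def velocity_field(a, t, goal):
    # Stand-in for a learned v(a, t | obs): flow toward the goal action.
    return (goal - a) / max(1.0 - t, 1e-3)

def stream_actions(a0, goal, steps=10):
    a, dt = a0.copy(), 1.0 / steps
    for k in range(steps):
        t = k * dt
        a = a + dt * velocity_field(a, t, goal)   # Euler step
        yield a                                    # execute immediately

a0 = np.zeros(7)
goal = np.array([0.3, -0.1, 0.5, 0.0, 0.0, 0.0, 1.0])
for action in stream_actions(a0, goal):
    pass  # send `action` to the robot controller here
print(action)  # reaches the goal after integrating the flow
```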
@LawrenceZhu22
Lawrence Yunzhou Zhu
2 months
Can we scale up mobile manipulation with egocentric human data? Meet EMMA: Egocentric Mobile MAnipulation. EMMA learns from human mobile manipulation + static robot data — no mobile teleop needed! EMMA generalizes to new scenes and scales strongly with added human data. 1/9
10
64
413
@KKawaharazuka
Kento Kawaharazuka / 河原塚 健人
3 months
🚀 Our Survey on Vision-Language-Action Models for Real-World Robotic Applications is out! It’s a full-stack, comprehensive survey integrating both software and hardware for VLAs. A collaboration with @JIHOONOH8 @junjungoal @IngmarPosner @yukez https://t.co/WQzWLvF1oh Thread👇
3
55
218
@Ayushj240
Ayush Jain
3 months
Honored that our @RL_Conference paper won the Outstanding Paper Award on Empirical Reinforcement Learning Research! 📜Mitigating Suboptimality of Deterministic Policy Gradients in Complex Q-Functions 📎 https://t.co/owm0hVVsUK Grateful to my advisors @JosephLim_AI and @ebiyik_!
@Ayushj240
Ayush Jain
3 months
At @RL_Conference🍁, I'm presenting a talk and a poster on Aug 6, Track 1: Reinforcement Learning Algorithms. We find that Deterministic Policy Gradient methods like TD3 often get stuck at local optima under complex Q-functions, and propose a novel actor architecture! 🧵
9
10
72
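A tiny self-contained demo of the failure mode the thread describes: a deterministic policy gradient follows dQ/da uphill, so on a multimodal Q-function it converges to whichever local maximum is nearest the actor's initialization. This illustrates the problem only, not the paper's proposed actor architecture.

```python
# Demo of DPG getting stuck: gradient ascent on a two-mode Q-function.
import numpy as np

def Q(a):        # local max near a=-1.0, global max near a=+1.5
    return 0.6 * np.exp(-4 * (a + 1.0) ** 2) + np.exp(-4 * (a - 1.5) ** 2)

def dQ(a, h=1e-5):
    return (Q(a + h) - Q(a - h)) / (2 * h)   # numerical gradient

for a in (-1.8, 0.9):                 # two actor initializations
    for _ in range(2000):
        a += 0.05 * dQ(a)             # deterministic policy gradient step
    print(f"converged action {a:+.3f}, Q = {Q(a):.3f}")
# init -1.8 -> trapped at the local optimum; init 0.9 -> global optimum.
```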
@unnatjain2010
Unnat Jain
4 months
Research arc: ⏪ 2 yrs ago, we introduced VRB: learning from hours of human videos to cut down teleop (Gibson🙏) ▶️ Today, we explore a wilder path: robots deployed with no teleop, no human demos, no affordances. Just raw video generation magic 🙏 Day 1 of faculty life done! 😉
@shivanshpatel35
Shivansh Patel
4 months
🚀 Introducing RIGVid: Robots Imitating Generated Videos! Robots can now perform complex tasks—pouring, wiping, mixing—just by imitating generated videos, purely zero-shot! No teleop. No OpenX/DROID/Ego4D. No videos of human demonstrations. Only AI generated video demos 🧵👇
4
30
133
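A hedged skeleton of the pipeline the tweet implies: generate a video of the task, recover the manipulated object's trajectory from it, and retarget that trajectory to end-effector waypoints. Every helper below is a hypothetical stub so the control flow runs end to end; RIGVid's actual components are not shown here.

```python
# Hedged skeleton of an imitate-generated-videos pipeline. All three
# helpers are stand-in stubs, not real model calls.
import numpy as np

def generate_task_video(prompt, frames=16):
    """Stub for a text-to-video model; returns dummy RGB frames."""
    return np.zeros((frames, 224, 224, 3), dtype=np.uint8)

def track_object_poses(video):
    """Stub for 6-DoF object tracking; returns (T, 7) xyz + quaternion."""
    t = np.linspace(0, 1, len(video))[:, None]
    return np.hstack([0.3 * t, 0.0 * t, 0.2 + 0.1 * t,
                      np.tile([0, 0, 0, 1], (len(video), 1))])

def retarget_to_waypoints(object_poses, grasp_offset=np.zeros(3)):
    """Offset object poses into gripper waypoints (identity grasp here)."""
    wp = object_poses.copy()
    wp[:, :3] += grasp_offset
    return wp

video = generate_task_video("pour water from the cup into the bowl")
waypoints = retarget_to_waypoints(track_object_poses(video))
print(waypoints.shape)  # (16, 7) end-effector waypoints to execute
```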
@gs_ai_
Genesis AI
4 months
Today, we’re launching Genesis AI — a global physical AI lab and full-stack robotics company — to build generalist robots and unlock unlimited physical labor. We’re backed by $105M in seed funding from @EclipseVentures, @khoslaventures, @Bpifrance, HSG, and visionaries
22
94
423
@v_debakker
Vincent de Bakker
4 months
Can we teach dexterous robot hands manipulation without human demos or hand-crafted rewards? Our key insight: Use Vision-Language Models (VLMs) to scaffold coarse motion plans, then train an RL agent to execute them with 3D keypoints as the interface. 1/7
1
14
64
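A minimal sketch of "3D keypoints as the interface" as the tweet describes it: a VLM proposes a coarse sequence of keypoint targets, and the RL agent is rewarded for driving its current keypoints toward the next target. The hard-coded plan below stands in for an actual VLM query.

```python
# Hedged sketch: VLM-scaffolded keypoint plan + dense RL reward.
import numpy as np

# Coarse plan a VLM might emit: target xyz for one fingertip keypoint.
plan = np.array([[0.30, 0.00, 0.20],    # reach above the object
                 [0.30, 0.00, 0.05],    # descend to grasp
                 [0.30, 0.20, 0.25]])   # lift and move

def reward(keypoint_xyz, stage):
    """Dense reward: negative distance to the current plan waypoint."""
    return -np.linalg.norm(keypoint_xyz - plan[stage])

def advance(keypoint_xyz, stage, tol=0.02):
    """Move to the next waypoint once the keypoint is close enough."""
    close = np.linalg.norm(keypoint_xyz - plan[stage]) < tol
    return min(stage + int(close), len(plan) - 1)

kp, stage = np.array([0.10, -0.05, 0.30]), 0
print(reward(kp, stage), advance(plan[0], stage))
```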
@ChengleiSi
CLS
4 months
Are AI scientists already better than human researchers? We recruited 43 PhD students to spend 3 months executing research ideas proposed by an LLM agent vs human experts. Main finding: LLM ideas result in worse projects than human ideas.
12
191
633
@shahdhruv_
Dhruv Shah
4 months
Yesterday, we live demo-ed a “generalist” VLA for (I think) the first time ever to a broad audience @RoboticsSciSys. Bring any object. Ask anything. New environment, new instructions, no fine-tuning. Just impeccable vibes! ✨
7
29
339
@RoboticsSciSys
Robotics: Science and Systems
4 months
🍽️ Dinner’s on, and Day 2 is a wrap! What a day of brilliant talks, posters, and community at #RSS2025 🤖 Looking ahead to Day 3: 🌟 Early Career Spotlight: Zac Manchester 🎤 Keynote by Trevor Darrell 🧠 Sessions on HRI, Multi-Robot Systems, Control & Dynamics, and more!
0
3
36
@RSSPioneers
RSS Pioneers
4 months
Hope you have an enjoyable start to @RoboticsSciSys 2025! On behalf of the organizing committee, we'd like to thank everyone for their contributions to the #RSSPioneers2025 workshop! We hope that it was an unforgettable and inspiring experience for our Pioneers!
0
8
42
@DJiafei
Jiafei Duan
5 months
Happening right now in Room 101A: Robo3D VLM for manipulation.
2
3
22
@leto__jean
Jeannette Bohg
5 months
Imagine you have collected all these training demonstrations for a manipulation policy on a static robot. What if you could just re-use that data on a mobile robot? We found a new way to mobilize your policy without collecting any new demonstrations. 🏃‍♀️🤖 Check out Mobi-π!
@yjy0625
Jingyun Yang
5 months
Introducing Mobi-π: Mobilizing Your Robot Learning Policy. Our method: ✈️ enables flexible mobile skill chaining 🪶 without requiring additional policy training data 🏠 while scaling to unseen scenes 🧵↓
1
10
66
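A toy sketch of what "mobilizing" a static-arm policy can mean in practice: keep the manipulation policy frozen and instead search over base poses, scoring each candidate with a predictor of how likely the frozen policy is to succeed from that pose, then navigate to the best one. The scorer below is a stand-in for whatever Mobi-π actually learns.

```python
# Hedged sketch: sample candidate base poses and pick the one a (toy)
# success predictor scores highest; the manipulation policy is unchanged.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, 2.0])           # object location in the room

def success_score(base_pose):
    """Toy stand-in: policy succeeds when the base is ~0.6 m away."""
    d = np.linalg.norm(base_pose[:2] - target)
    return np.exp(-((d - 0.6) ** 2) / 0.05)

candidates = rng.uniform(-1, 4, size=(256, 3))   # sampled (x, y, yaw)
best = candidates[np.argmax([success_score(c) for c in candidates])]
print("navigate to base pose:", np.round(best, 2))
# Then run the unmodified static-robot policy from this base pose.
```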