Ademi Adeniji Profile
Ademi Adeniji

@AdemiAdeniji

Followers
758
Following
291
Media
34
Statuses
87

PhD @UCBerkeley. Prev @NVIDIAAI, @Google, @Stanford. Reinforcement Learning, Robot Learning

Berkeley, CA
Joined May 2022
@AdemiAdeniji
Ademi Adeniji
3 months
Everyday human data is robotics’ answer to internet-scale tokens. But how can robots learn to feel—just from videos? 📹 Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions 🖐️🤖 👉 1/n
11
38
222
@AdemiAdeniji
Ademi Adeniji
1 month
RT @HaoranGeng2: 🤖 What if a humanoid robot could make a hamburger from raw ingredients—all the way to your plate? 🔥 Excited to announce V….
0
119
0
@AdemiAdeniji
Ademi Adeniji
2 months
RT @RoboPapers: Full episode dropping soon! Geeking out with @vincentjliu @AdemiAdeniji on EgoZero: Robot Learning from Smart Glasses http….
0
5
0
@AdemiAdeniji
Ademi Adeniji
2 months
RT @Raunaqmb: Tactile sensing is gaining traction, but slowly. Why? Because integration remains difficult. But what if adding touch sensors….
0
104
0
@AdemiAdeniji
Ademi Adeniji
2 months
RT @zhaohengyin: Just open-sourced Geometric Retargeting (GeoRT) — the kinematic retargeting module behind DexterityGen. Includes tools fo….
0
11
0
@AdemiAdeniji
Ademi Adeniji
2 months
FeelTheForce is now open-source! 🤖🖐️ We’ve released the full codebase from our paper:
📡 Streaming infrastructure with docs
🧹 Preprocessing for multi-modal data
🎓 Training pipelines & commands
🧠 Inference code for force-sensitive policies
🛠️ Code:
github.com/feel-the-force-ftf/feel-the-force
@AdemiAdeniji
Ademi Adeniji
3 months
RT @vincentjliu: We just open-sourced EgoZero! It includes the full preprocessing to turn long-form recordings into individual demonstrati….
0
3
0
@AdemiAdeniji
Ademi Adeniji
3 months
This work would not have been possible without my amazing collaborators: @JoliaChen @vincentjliu @venkyp2000 @haldar_siddhant @Raunaqmb @pabbeel @lerrelpinto. Major acknowledgments to Point-Policy, great work by @haldar_siddhant that enabled FTF! 10/n
0
0
9
@AdemiAdeniji
Ademi Adeniji
3 months
FTF is officially released 🎉 Train your own force-sensitive robot policies with just 15 minutes of human video! Datasets and code coming soon… 🌐 Website: 📄 Paper: 9/n
1
0
8
@AdemiAdeniji
Ademi Adeniji
3 months
FTF treats human interactions as the ground truth driving force-aware manipulation. By separating closed-loop force modeling from low-level force reproduction, FTF achieves robust policies that excel at reasoning under contact—mastering challenging force-sensitive tasks! 🦾 8/n
1
0
6
@AdemiAdeniji
Ademi Adeniji
3 months
We let the robot feel! FTF uses an inference-time PD controller to adjust the gripper closure based on real-time feedback from the robot’s tactile sensors. FTF handles continuous closure reactively—letting the robot dynamically respond to the forces it senses. 7/n
1
0
6
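The inference-time PD loop described in 7/n above can be sketched in a few lines. This is an illustrative sketch only, not code from the FTF release: the scalar force reading, the closure command in [0, 1], the control rate, and the gains are all assumptions.

```python
# Minimal sketch of a PD loop on gripper closure driven by tactile force feedback.
# Assumption: a scalar sensed force (e.g. Newtons) and a closure command in [0, 1];
# the gains and rate here are hypothetical, not the values used in FeelTheForce.

class GripperForcePD:
    def __init__(self, kp=0.02, kd=0.005, dt=0.05):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def step(self, target_force, sensed_force, closure):
        """Return an updated closure in [0, 1] that drives sensed force toward the target."""
        error = target_force - sensed_force           # positive -> squeeze harder
        d_error = (error - self.prev_error) / self.dt
        self.prev_error = error
        closure += self.kp * error + self.kd * d_error
        return min(max(closure, 0.0), 1.0)            # clamp to a valid closure command


# Example usage with made-up force readings at 20 Hz:
pd = GripperForcePD()
closure = 0.3
for sensed in [0.0, 0.4, 0.9, 1.1]:
    closure = pd.step(target_force=1.0, sensed_force=sensed, closure=closure)
```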
@AdemiAdeniji
Ademi Adeniji
3 months
We train a closed-loop policy on human demonstrations that predicts exerted forces alongside conventional actions. Point-Policy bridges the gap—transferring proprioceptive human actions to the robot. But how do we reproduce those desired forces during inference? 6/n
1
0
7
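A hedged sketch of what "predicting exerted forces alongside conventional actions" (6/n above) can look like: a shared trunk with an action head and an extra force head, trained with an added force-regression loss. This is not the Point-Policy or FTF architecture; every name, dimension, layer size, and loss weight below is an assumption for illustration.

```python
# Generic sketch (not the Point-Policy/FTF model): a policy trunk with two heads,
# one for conventional actions and one for the force the human exerted.
import torch
import torch.nn as nn

class ForceAwarePolicy(nn.Module):
    def __init__(self, obs_dim=64, act_dim=7, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.action_head = nn.Linear(hidden, act_dim)  # e.g. end-effector deltas
        self.force_head = nn.Linear(hidden, 1)         # predicted grasp force

    def forward(self, obs):
        h = self.trunk(obs)
        return self.action_head(h), self.force_head(h)

def loss_fn(policy, obs, act_target, force_target, force_weight=1.0):
    """Behavior-cloning loss plus a force-regression term (weights are made up)."""
    act_pred, force_pred = policy(obs)
    return (nn.functional.mse_loss(act_pred, act_target)
            + force_weight * nn.functional.mse_loss(force_pred, force_target))

# Example forward pass on a random observation batch:
obs = torch.randn(8, 64)
act_pred, force_pred = ForceAwarePolicy()(obs)   # shapes: (8, 7) and (8, 1)
```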
@AdemiAdeniji
Ademi Adeniji
3 months
We designed a low-cost latex tactile glove 🧤 embedded with the Anyskin sensor to capture human demonstrations via third-person cameras. By equipping the robot gripper with the same tactile sensing, the robot can now feel just like a human does! 5/n
1
0
10
@AdemiAdeniji
Ademi Adeniji
3 months
With FTF, a robot learns to delicately pick and place a real egg—using just 15 minutes of human data. 💡 The key insight: model the forces exerted by the human hand during demonstration, then use an inference-time controller to reproduce those forces. 4/n
1
0
7
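Tying the insight in 4/n together with the two sketches above: at inference the policy outputs an arm action plus a target force learned from the human, and the controller adjusts the gripper until the tactile sensor reports that force. Everything below (`policy`, `controller`, `env`, and the observation keys) is a hypothetical stand-in, not the released FTF API.

```python
# Hypothetical closed-loop inference sketch combining the two pieces above.
# The real interfaces live in the released feel-the-force codebase.

def run_force_sensitive_episode(policy, controller, env, steps=200):
    obs = env.reset()
    closure = 0.0
    for _ in range(steps):
        arm_action, target_force = policy(obs)           # learned from human video
        sensed_force = obs["tactile_force"]              # robot-mounted tactile reading
        closure = controller.step(target_force, sensed_force, closure)
        obs = env.step({"arm": arm_action, "gripper": closure})
    return obs
```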
@AdemiAdeniji
Ademi Adeniji
3 months
Some approaches attempt to model continuous gripper closure, but this is notoriously difficult—especially when actions must be inferred from third-person human videos. Visual ambiguity and lack of direct force supervision make fine-grained control hard to learn. 3/n
1
0
7
@AdemiAdeniji
Ademi Adeniji
3 months
Robot policies fail spectacularly on force-sensitive tasks. Why? Most treat the gripper like a simple on/off switch—open or close. The result? 🥚💔 Often a bit too heavy-handed. 2/n
1
0
9
@AdemiAdeniji
Ademi Adeniji
3 months
RT @younggyoseo: Excited to present FastTD3: a simple, fast, and capable off-policy RL algorithm for humanoid control -- with an open-sourc….
0
114
0
@AdemiAdeniji
Ademi Adeniji
3 months
RT @vincentjliu: I think the most interesting insight from EgoZero is the tradeoff between 2D/3D representations in human-to-robot learning….
0
3
0
@AdemiAdeniji
Ademi Adeniji
3 months
Closed-loop robot policies directly from human interactions. No teleop, no robot data co-training, no RL, and no sim. Just Aria smart glasses. Everyday human data is passively scalable and a massively underutilized resource in robotics. More to come here in the coming weeks.
@vincentjliu
Vincent Liu
3 months
The future of robotics isn't in the lab – it's in your hands. Can we teach robots to act in the real world without a single robot demonstration? Introducing EgoZero. Train real-world robot policies from human-first egocentric data. No robots. No teleop. Just Aria glasses and
4
12
71
@AdemiAdeniji
Ademi Adeniji
4 months
RT @irmakkguzey: Despite great advances in learning dexterity, hardware remains a major bottleneck. Most dexterous hands are either bulky,….
0
100
0