@robotsdigest
Robots Digest 🤖
14 days
Forget old datasets: Kinematify turns any image or text into a 3D model of a movable object.
1
24
110

Replies

@robotsdigest
Robots Digest 🤖
14 days
It figures out how each part is connected: hinges, joints, and sliders. Using MCTS, it finds the kinematic tree and joint locations from just an image. It doesn't just guess, it solves the structure. (A rough sketch of the search idea follows below.)
1
2
7
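
To make the MCTS idea concrete, here is a minimal, self-contained sketch of a tree search over part-connectivity hypotheses. Everything here is an illustrative assumption, not Kinematify's actual code; in particular, score() stands in for checking a candidate articulation against the image.

import math
import random
from dataclasses import dataclass, field

JOINT_TYPES = ("hinge", "slider")  # illustrative joint vocabulary

@dataclass(frozen=True)
class TreeState:
    """A partial kinematic tree over `parts` rigid parts, rooted at part 0."""
    parts: int
    edges: tuple = ()  # (parent, child, joint_type) triples

    def connected(self):
        return {0} | {child for _, child, _ in self.edges}

    def actions(self):
        # Attach any unconnected part to any connected one, with any joint type.
        done = self.connected()
        return [(p, c, j) for p in done for c in range(self.parts)
                if c not in done for j in JOINT_TYPES]

    def step(self, action):
        return TreeState(self.parts, self.edges + (action,))

    def is_terminal(self):
        return len(self.connected()) == self.parts

def score(state):
    """Stand-in for image-consistency scoring (e.g. how well the articulated
    model explains the observed image). Deterministic pseudo-random here."""
    return random.Random(hash(state.edges)).random()

@dataclass
class Node:
    state: TreeState
    parent: "Node" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0
    untried: list = None

    def __post_init__(self):
        self.untried = self.state.actions()

def ucb(node, c=1.4):
    # Upper confidence bound: balance exploitation and exploration.
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_state, iters=500):
    root = Node(root_state)
    best_edges, best_reward = None, -1.0
    for _ in range(iters):
        node = root
        # Selection: walk down fully expanded nodes by UCB.
        while not node.untried and node.children:
            node = max(node.children, key=ucb)
        # Expansion: commit to one new joint assignment.
        if node.untried:
            node.children.append(Node(node.state.step(node.untried.pop()),
                                      parent=node))
            node = node.children[-1]
        # Rollout: randomly complete the tree, then score the full structure.
        state = node.state
        while not state.is_terminal():
            state = state.step(random.choice(state.actions()))
        reward = score(state)
        if reward > best_reward:
            best_edges, best_reward = state.edges, reward
        # Backpropagation.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return best_edges

print(mcts(TreeState(parts=4)))  # e.g. ((0, 2, 'hinge'), (2, 3, 'slider'), ...)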
@robotsdigest
Robots Digest 🤖
14 days
The output is a physically consistent and functionally valid description. From text to 3D to real movement. We cover motion planning, policy learning, and the most realistic virtual worlds. (An illustrative shape for such a description is sketched below.)
1
2
12
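
For a feel of what a "physically consistent, functionally valid" description carries, here is an illustrative Python structure. The field names are assumptions on my part; formats like URDF encode the same information (parts, joint types, axes, limits).

from dataclasses import dataclass

@dataclass
class Joint:
    """One articulation between two rigid parts."""
    parent: str
    child: str
    kind: str      # "hinge" (revolute) or "slider" (prismatic)
    axis: tuple    # joint axis in the parent frame
    limits: tuple  # (lower, upper) in radians or meters

# A two-part laptop: the screen hinges on the base around the y-axis,
# opening between 0 and ~140 degrees.
laptop = {
    "parts": ["base", "screen"],
    "joints": [Joint("base", "screen", "hinge",
                     axis=(0.0, 1.0, 0.0), limits=(0.0, 2.4))],
}
print(laptop["joints"][0])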
@stepjamUK
Stephen James
1 day
Exciting news coming this Thursday (27th)! Stay tuned!
@Neuracore_AI
Neuracore
1 day
At Neuracore, our roots are in academia, and we have a special announcement coming on the 27th that we can’t wait to reveal.
1
8
95
@qineng_wang
Qineng Wang
2 days
Most VLM benchmarks watch the world; few ask how actions *change* it from a robot's point of view. Embodied cognition tells us that intelligence isn't just watching; it's enacted through interaction. 👉 We introduce ENACT: a benchmark that tests if VLMs can track the evolution of a
5
53
213
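
The post above is cut off, but its premise suggests eval items where a model sees a state and an embodied action and must predict the resulting state. A guess at the spirit of such an item, not ENACT's released schema:

# Hypothetical item shape for action-conditioned world-state tracking.
item = {
    "observation": "drawer_closed.png",                  # pre-action scene
    "action": "grasp the drawer handle and pull 20 cm",  # embodied action
    "question": "Is the red mug inside the drawer now visible?",
    "choices": ["yes", "no"],
    "answer": "yes",
}

def accuracy(model, items):
    """Fraction of items where the model predicts the post-action state."""
    hits = sum(model(i["observation"], i["action"], i["question"],
                     i["choices"]) == i["answer"] for i in items)
    return hits / len(items)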
@chris_j_paxton
Chris Paxton
16 hours
Very cool work using human video to teach robots mobile manipulation behaviors!
@RoboPapers
RoboPapers
16 hours
Just collecting manipulation data isn’t enough for robots - they need to be able to move around in the world, which has a whole different set of challenges from pure manipulation. And bringing navigation and manipulation together in a single framework is even more challenging.
2
7
53
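
One common way to bring the two together is a single whole-body action space, so the policy reasons about base and arm jointly instead of handing off between them. A minimal illustrative sketch, not the interface from the episode's paper:

from dataclasses import dataclass

@dataclass
class WholeBodyAction:
    """One policy output covering base and arm together."""
    base_vx: float     # forward velocity, m/s
    base_wz: float     # yaw rate, rad/s
    arm_joints: tuple  # target joint angles, rad
    gripper: float     # 0.0 = open, 1.0 = closed

# "Step closer while reaching" is a single action here; a manipulation-only
# policy with a fixed base simply cannot express it.
a = WholeBodyAction(base_vx=0.2, base_wz=0.0,
                    arm_joints=(0.1, -0.4, 0.3, 0.0, 0.2, 0.0), gripper=0.0)
print(a)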
@robotsdigest
Robots Digest 🤖
1 day
Here’s where robotics is headed. AINA: a system that lets robots learn multi-fingered manipulation by watching humans, much as you might learn piano by watching a musician's hands.
2
8
62