Dima Yanovsky Profile
Dima Yanovsky

@yanovskyd

Followers: 874 · Following: 283 · Media: 13 · Statuses: 53

cs @mit

SF
Joined March 2012
@ycombinator
Y Combinator
1 month
Prox (@try_prox) builds digital co-workers for 3PLs and fulfillment centers - companies that store, pack, and ship for brands. They automate the back-office operations layer that burns billions of labor hours across the industry.
10
12
70
@yanovskyd
Dima Yanovsky
4 months
4/4 We currently have 5 Apple Vision Pros, which puts our throughput at roughly 40 hours of teleoperation data per day. That's more data per day than any existing open dataset in this category has ever collected. Here's how we did it: https://t.co/6EzATqUQXl
1
0
10
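For reference, the throughput math implied by the thread is below; the per-headset hours figure is inferred from the stated totals (5 headsets yielding 40 hours/day), not something the thread gives directly.

```python
# Implied throughput math; the ~8 h/day per-headset figure is an
# inference from 5 headsets -> 40 h/day, not stated in the thread.
headsets = 5
hours_per_headset_per_day = 8  # assumed operator time per device
daily_hours = headsets * hours_per_headset_per_day
print(f"{daily_hours} hours of teleoperation data per day")  # 40
```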
@yanovskyd
Dima Yanovsky
4 months
3/4 Not having those kinds of resources, we built an interface to teleoperate bimanual Shadow Hands in Apple Vision Pro. Robotics always starts with failure, so we spent weeks failing over and over again until we got the setup running smoothly at 30Hz.
1
0
8
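As a rough illustration of what a fixed-rate 30Hz teleoperation loop looks like, here is a minimal Python sketch; `read_hand_pose`, `retarget`, and `send_joint_targets` are hypothetical placeholders for the Vision Pro hand-tracking stream and the Shadow Hand command interface, not the project's actual code.

```python
import time

RATE_HZ = 30
DT = 1.0 / RATE_HZ

def read_hand_pose():
    """Placeholder for the Vision Pro hand-tracking stream (hypothetical)."""
    ...

def retarget(pose):
    """Placeholder mapping human hand keypoints to robot joint targets."""
    ...

def send_joint_targets(targets):
    """Placeholder for the robot/simulator command interface (hypothetical)."""
    ...

while True:
    t0 = time.monotonic()
    send_joint_targets(retarget(read_hand_pose()))
    # Sleep off the remainder of the ~33 ms budget to hold a steady 30 Hz.
    time.sleep(max(0.0, DT - (time.monotonic() - t0)))
```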
@yanovskyd
Dima Yanovsky
4 months
2/4 Many things in robotics are hard, but some are insanely hard. Collecting large datasets of dexterous manipulation is the latter. If we wanted to collect a dataset using two real Shadow Hands (some of the most dexterous robotic hands in the world), we'd have to spend $200K+ on hardware.
1
0
5
@yanovskyd
Dima Yanovsky
4 months
1/4 We recreated a $200k teleoperation setup in VR for just ~$2k. Now we can collect more dexterous manipulation data in a single day (40 hrs/day) than any existing open dataset has ever collected.
6
8
39
@yanovskyd
Dima Yanovsky
7 months
we outta here 🫡 @MIT
12
2
185
@yanovskyd
Dima Yanovsky
7 months
5/ App Store: https://t.co/E3vBhWzoE1 Project website: https://t.co/KQ6kvcXOmu The DART project was developed at the Improbable AI Lab at MIT CSAIL, under @younghyo_park and @pulkitology. DM me if you want to try in-person teleoperation in Vision Pro. I’ll be in SF this summer.
0
0
7
@yanovskyd
Dima Yanovsky
7 months
4/ https://t.co/2u7Yggd0D1 runs beautifully on mobile devices as well, since it is powered by MuJoCo compiled to WebAssembly. And this isn’t just a video recording!! It’s the actual physics engine running in your browser, replaying the demonstrations in real time.
1
0
9
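The site itself runs MuJoCo compiled to WebAssembly; as a minimal sketch of the same replay idea, here is what trajectory playback looks like with the official MuJoCo Python bindings (the scene file and trajectory array are hypothetical stand-ins):

```python
import time
import numpy as np
import mujoco

# Hypothetical inputs: a MuJoCo scene and a recorded joint trajectory
# sampled at 30 Hz (shape: [n_frames, model.nq]).
model = mujoco.MjModel.from_xml_path("scene.xml")
data = mujoco.MjData(model)
trajectory = np.load("demo_qpos.npy")

for qpos in trajectory:
    data.qpos[:] = qpos
    mujoco.mj_forward(model, data)  # recompute poses/contacts for this frame
    time.sleep(1 / 30)              # pace playback at the recording rate
```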
@yanovskyd
Dima Yanovsky
7 months
3/ Datasets you record go to https://t.co/2u7Yggd0D1, where you can curate your data and use it to train models. We're also releasing a small library of prerecorded teleoperations. Go play around with it!
1
0
3
@yanovskyd
Dima Yanovsky
7 months
2/ We built a Vision Pro app to teleoperate robots in AR. Vision Pro's SOTA hand detection/AR models deliver unmatched dexterity and user experience in teleoperation. We successfully ported the MuJoCo physics engine by @Deepmind to run locally on the Vision Pro chip. The entire tech stack
1
0
4
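The actual app is a native visionOS port, but the core idea of driving a simulation from tracked hands can be sketched with MuJoCo's mocap mechanism; the scene XML and `tracked_wrist_position` below are illustrative assumptions, not the app's code.

```python
import mujoco

# Minimal scene with one mocap body that stands in for the tracked wrist.
XML = """
<mujoco>
  <worldbody>
    <body name="wrist_target" mocap="true">
      <geom type="sphere" size="0.02" contype="0" conaffinity="0"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

def tracked_wrist_position():
    """Placeholder for a hand-tracking sample from the headset (hypothetical)."""
    return [0.3, 0.0, 0.2]

# Each frame: write the tracked pose into the mocap body, then step physics.
for _ in range(300):
    data.mocap_pos[0] = tracked_wrist_position()
    mujoco.mj_step(model, data)
```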
@yanovskyd
Dima Yanovsky
7 months
1/ Robotics foundation models need internet-scale data to train, and we bet simulation datasets will help reach this scale. It's time to build tools enabling this. So, we're releasing the first public app for robot data collection in Vision Pro and a platform to curate datasets.
2
3
25
@yanovskyd
Dima Yanovsky
9 months
🤝
@AI_Arav
Arav Kumar
9 months
i got to teleoperate robot hands with my hands using the vision pro. @yanovskyd is literally the coolest roboticist ever. crazy new robotics foundation models will be made with this data. we're so back.
0
1
21
@anzahorodnii
Andrii Zahorodnii
9 months
1/ Built an open-source version of @Neuralink demos with @yanovskyd using the brain of a monkey named Jenkins! Leader moves → Transformer generates synthetic spikes → MLP decodes it back → Follower mirrors movement. How hard is this as an ML problem? Let’s find out! 👉
24
26
100
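The arrow diagram in that tweet reduces to a simple data flow; here is a skeletal version, where every function is a hypothetical stand-in, not the project's actual code.

```python
# Skeletal version of the pipeline described above (all hypothetical).
def read_leader_pose():
    ...  # leader arm joint angles

def spikes_from_motion(pose):
    ...  # Transformer: motion -> synthetic spikes

def decode_velocity(spikes):
    ...  # MLP: spikes -> decoded hand velocity

def drive_follower(velocity):
    ...  # follower arm mirrors the movement

while True:
    drive_follower(decode_velocity(spikes_from_motion(read_leader_pose())))
```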
@yanovskyd
Dima Yanovsky
9 months
7/ The challenge was that default solvers have no awareness of the robot's surroundings, sometimes creating trajectories through the table that caused the arm to slam into it. We used the ikpy library to create kinematic chains for both Koch V1.1 (@taurobotics) robotic arms.
0
1
21
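ikpy's `Chain.from_urdf_file` and `inverse_kinematics` are the library's real entry points; the URDF filename and the clamp-above-the-table guard below are illustrative assumptions, one simple way to keep targets out of the table rather than necessarily what the project did.

```python
import numpy as np
from ikpy.chain import Chain

# Build a kinematic chain from the arm's URDF (filename is illustrative).
chain = Chain.from_urdf_file("koch_v1_1.urdf")

TABLE_Z = 0.02  # assumed table surface height in meters

def solve_ik(target_xyz, current_joints):
    # The IK solver itself knows nothing about the table, so clamp the
    # target above the surface before solving (an assumed guard).
    x, y, z = target_xyz
    target = [x, y, max(z, TABLE_Z)]
    return chain.inverse_kinematics(
        target_position=target,
        initial_position=current_joints,  # warm-start from the current pose
    )
```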
@yanovskyd
Dima Yanovsky
9 months
6/ One big hurdle on the robotics side was working with inverse kinematics. The robots received target coordinates to move to, and inverse kinematics solvers calculated the servo angles needed to reach them.
1
0
13
@yanovskyd
Dima Yanovsky
9 months
5/ Check out @anzahorodnii for the breakdown of the neuroscience ML behind the project. Here is the web game to generate brain data: https://t.co/DOQbF1R8o6 Here is the project page: https://t.co/utr78VoiYE arXiv: https://t.co/B1aDT9vrDl
1
1
20
@yanovskyd
Dima Yanovsky
9 months
4/ We then decided to cook more. What if we could go the other way and train a model that takes robot movements and generates synthetic neural data corresponding to thinking about those movements? After many days of work, @anzahorodnii trained a model that does exactly that!
1
0
13
@yanovskyd
Dima Yanovsky
9 months
3/ It all started when we took a dataset of monkey brain activity during various hand movements. @anzahorodnii trained a model to decode this into velocities, and I configured a robot arm to recreate the movements this monkey was thinking about.
1
0
19
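A classic baseline for this kind of decoding is a linear map from binned spike counts to hand velocity, which then gets integrated into a position command for the arm; the sketch below uses synthetic stand-in data, since the actual dataset and model aren't shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real data: 5000 time bins of spike counts
# from 96 channels, paired with 2-D hand velocities.
X = rng.poisson(2.0, size=(5000, 96)).astype(float)
true_W = rng.normal(size=(96, 2))
Y = X @ true_W + rng.normal(scale=0.5, size=(5000, 2))

# Classic linear decoder: least-squares map from spike counts to velocity.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# At runtime: decode a velocity per bin and integrate it into a position
# target for the robot arm.
pos = np.zeros(2)
DT = 0.01  # 10 ms bins (assumed)
for counts in X[:100]:
    vel = counts @ W
    pos += vel * DT
```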
@yanovskyd
Dima Yanovsky
9 months
2/ We made the brain generation model run entirely in-browser using @onnxruntime. We coded a game where you can move a joystick and generate synthetic brain data in real time. This is the first OSS brain data generation model running in the browser! Try it: https://t.co/8dyHjBM5HS
1
1
29
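In the browser this presumably runs via onnxruntime-web; the equivalent inference call with the Python onnxruntime bindings looks like the sketch below, where the model path, input shape, and tensor semantics are hypothetical.

```python
import numpy as np
import onnxruntime as ort

# Model path and tensor shapes are hypothetical; the real app runs the
# exported model in the browser rather than in Python.
sess = ort.InferenceSession("spike_generator.onnx")

# One joystick sample: a 2-D velocity command, batched.
velocity = np.array([[0.4, -0.1]], dtype=np.float32)

input_name = sess.get_inputs()[0].name
(spikes,) = sess.run(None, {input_name: velocity})
print(spikes.shape)  # synthetic spike output for this time step
```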