Nicklas Hansen
@ncklashansen
Followers: 2K · Following: 1K · Media: 51 · Statuses: 292
PhD candidate @UCSanDiego. Prev: @nvidia, @MetaAI, @UCBerkeley, @DTU_Compute. I like robots 🤖, plants 🪴, and they/them pronouns 🏳️🌈
San Diego, CA
Joined August 2018
🥳Excited to share: Hierarchical World Models as Visual Whole-Body Humanoid Controllers Joint work with @jyothir_s_v @vlad_is_ai @ylecun @xiaolonw @haosu_twitr Our method, Puppeteer, learns high-dim humanoid policies that look natural, in an entirely data-driven way! 🧵👇(1/n)
12 replies · 64 reposts · 379 likes
See Nicklas Hansen's amazing work on robot perception, planning, and action in dynamic environments (NVIDIA Graduate Research Fellow and PhD candidate at @UCSanDiego). ✨🤖 #NationalRoboticsWeek Learn more about Nicklas's work. ➡️ https://t.co/1dXc4nRRfT
9 replies · 24 reposts · 127 likes
TD-MPC2 just got ~5x faster! The update is available in a separate "speedups" branch at the moment and will be merged into main soon. Special thanks to @VincentMoens for his help with this!
3 replies · 7 reposts · 83 likes
The third @LeRobotHF research presentation, on TD-MPC 1 and 2 by @ncklashansen, is available on YouTube: https://t.co/SxPSOMSkB5 Don't miss Nicklas' latest projects on https://t.co/dVhCjKL9Bo Project page: https://t.co/RKmdin4Sz2 Diffusion Policy paper: https://t.co/Y05CCF7uhN
1 reply · 10 reposts · 78 likes
New fun benchmark for RL researchers to stress test their algorithms! We release code + human driver data for imitation, and benchmark TD-MPC2 in a multi-track setting 🏎️💨
Besides reading cool papers, my Twitter account is mostly used for catching up on @F1 news. Very excited about @LandoNorris's great performances recently. Now we combine Formula racing with AI research: the following video shows how we train a Reinforcement Learning policy to drive a…
2 replies · 6 reposts · 64 likes
Cannot believe this finally happened! Over the last 1.5 years, we have been developing a new LLM architecture, with linear complexity and expressive hidden states, for long-context modeling. The following plots show that our model trained on Books scales better (from 125M to 1.3B)…
20 replies · 266 reposts · 2K likes
Introducing Bunny-VisionPro: Our system delivers immersive robot control with both visual and haptic feedback. Using VisionPro and low-cost finger cots with vibration motors, operators can control robots intuitively and immersively, similar to VR gaming. https://t.co/KjhEEvpg3u
Tesla Optimus can arrange batteries in their factories; ours can do skincare (on @QinYuzhe)! We open-source Bunny-VisionPro, a teleoperation system for bimanual hand manipulation. Users can control the robot hands in real time using VisionPro, flexible like a bunny. 🐇
4 replies · 26 reposts · 100 likes
New work led by @imgeorgiev! We show that TD-MPC2 world models can be used as differentiable simulators in a multi-task setting, and they even beat an *actual* differentiable simulator. Super fun collaboration and the results are really promising!
🔔New Paper - PWM: Policy Learning with Large World Models Joint work with @VarunGiridhar3 @ncklashansen @animesh_garg PWM is a multi-task RL method that solves 80 tasks across different embodiments in <10 min per task using world models and first-order gradient optimization 🧵
0 replies · 2 reposts · 51 likes
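The core idea behind the two tweets above is that a learned, differentiable world model can stand in for a differentiable simulator: the policy is trained by backpropagating imagined rewards through model rollouts (a first-order gradient, rather than a score-function estimate). The sketch below is a generic illustration of that recipe, not the actual PWM or TD-MPC2 code; all module sizes, names, and the random "pretrained" model are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a learned world model (dynamics + reward heads) and a policy.
# In practice the world model is pretrained on data; here it is random, for illustration only.
obs_dim, act_dim, horizon = 16, 4, 8
dynamics = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ELU(), nn.Linear(64, obs_dim))
reward = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ELU(), nn.Linear(64, 1))
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ELU(), nn.Linear(64, act_dim), nn.Tanh())

opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

for step in range(100):
    s = torch.randn(32, obs_dim)            # batch of imagined start states
    total_reward = 0.0
    for t in range(horizon):                 # differentiable imagined rollout
        a = policy(s)
        sa = torch.cat([s, a], dim=-1)
        total_reward = total_reward + reward(sa).mean()
        s = dynamics(sa)                     # gradients flow through the learned dynamics
    loss = -total_reward                     # maximize imagined return via first-order gradients
    opt.zero_grad()
    loss.backward()
    opt.step()
```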
Introducing Open-TeleVision: https://t.co/tm4exWTXsL with a fully autonomous policy video 👇. We can conduct a long-horizon task of inserting 12 cans nonstop, without any interruptions. We offer: 🤖 Highly precise and smooth bimanual manipulation. 📺 Active egocentric vision (with…
3 replies · 32 reposts · 176 likes
Introducing Open-𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 🤖: We need an intuitive, remote teleoperation interface to collect more robot data. 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 lets you immersively operate a robot even if you are 3000 miles away, like in the movie 𝘈𝘷𝘢𝘵𝘢𝘳. Open-sourced!
47 replies · 229 reposts · 1K likes
Extremely lucky to host @ncklashansen next Wednesday at our paper reading group. It's open to anyone ;) He will present his powerful method for Reinforcement Learning on real and simulated robots. It's based on a World Model. So advances in this space like Prism-1 by @wayve_ai can…
For our next #LeRobot Paper Discussion, @ncklashansen will present his first authored TD-MPC, and beyond. The event starts at 4:30 PM GMT on 26 June and will run for 1 hr. Join us then at https://t.co/jD77hjAf4u 🤗
0 replies · 1 repost · 18 likes
Stop by to learn more about our latest work on world models!
(Quoting the same #LeRobot Paper Discussion announcement as above.)
0 replies · 1 repost · 11 likes
I'm giving a talk about our work on world models this Thursday; the Zoom link is on the website for anyone interested!
0 replies · 4 reposts · 27 likes
We have a new ICML paper! Adaptive Horizon Actor Critic (AHAC). Joint work with @krishpopdesu @xujie7979 @eric_heiden @animesh_garg AHAC is a first-order model-based RL algorithm that learns high-dimensional tasks in minutes and outperforms PPO by 40%. 🧵(1/4)
4 replies · 66 reposts · 371 likes
🌟Introducing "🤖SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model" https://t.co/Yaj2K91Yx8 SpatialRGPT is a powerful region-level VLM that can understand both 2D and 3D spatial arrangements. It can process any region proposal (e.g., boxes or masks) and provide…
11 replies · 111 reposts · 470 likes
Will 2024 finally be the year that I move on from gym==0.21?
3 replies · 0 reposts · 18 likes
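For context on the gym==0.21 quip: the reset/step signatures changed when the ecosystem moved to Gymnasium, which is the main reason many RL codebases stay pinned to the old version. A minimal before/after sketch, assuming the gymnasium package as the migration target (the environment name is just an example):

```python
# Legacy gym==0.21 API: reset returns obs only, step returns a single `done` flag.
# import gym
# env = gym.make("CartPole-v1")
# obs = env.reset()
# obs, reward, done, info = env.step(env.action_space.sample())

# Gymnasium API: reset returns (obs, info), step splits done into terminated/truncated.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
done = terminated or truncated
```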
Puppeteer: hierarchical world model for humanoid control.
(Quoting @ncklashansen's Puppeteer announcement, shown above.)
18 replies · 37 reposts · 480 likes
High-level "puppeteer" + low-level "tracking" models = Visual Whole-Body Control with Natural Behaviors 🔥Do check it out if you are interested in humanoid control!
(Quoting @ncklashansen's Puppeteer announcement, shown above.)
0 replies · 2 reposts · 17 likes
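The "high-level puppeteer + low-level tracking" split described above can be pictured as two policies stacked in one control loop: the high-level model proposes tracking targets, and the low-level model turns them into joint-level actions. The snippet below is only a schematic of that structure; the interfaces, dimensions, dummy environment, and the assumption that the high-level model replans at a lower rate are all illustrative, not the released Puppeteer code.

```python
import numpy as np

class DummyHumanoidEnv:
    """Placeholder environment (obs/action sizes are made up, not the real spec)."""
    def reset(self):
        return np.zeros(56)
    def step(self, action):
        return np.zeros(56), 0.0, False, {}

class HighLevelModel:
    """Hypothetical 'puppeteer': maps an observation to low-dimensional tracking targets."""
    def propose_target(self, obs):
        return np.zeros(8)

class TrackingPolicy:
    """Hypothetical low-level tracker: maps (obs, target) to joint-level actions."""
    def act(self, obs, target):
        return np.zeros(21)

def rollout(env, high, low, steps=1000, high_every=5):
    obs = env.reset()
    target = high.propose_target(obs)
    for t in range(steps):
        if t % high_every == 0:              # assumed: high-level model replans less often
            target = high.propose_target(obs)
        obs, reward, done, _ = env.step(low.act(obs, target))
        if done:
            obs = env.reset()

rollout(DummyHumanoidEnv(), HighLevelModel(), TrackingPolicy())
```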
TD-MPC2 is now applied to Visual Whole-Body Control for Humanoids! Our hierarchical TD-MPC2 generates more natural motion with skill imitation, but also look at how funny the raw TD-MPC2 results are.
(Quoting @ncklashansen's Puppeteer announcement, shown above.)
0 replies · 3 reposts · 35 likes
Exciting new work from Nicklas @ncklashansen extending model-based RL to hierarchical settings. We can expect even more powerful capabilities from the model-based approach.
(Quoting @ncklashansen's Puppeteer announcement, shown above.)
0 replies · 6 reposts · 22 likes
Both the method and environment code are available at https://t.co/VOE4JFjxxk, so if you're interested in RL or humanoids, be sure to check it out 🤖 It takes ~4 days to train a hierarchical world model with visual inputs on a single RTX 3090 GPU. (7/n)
github.com: Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers" - nicklashansen/puppeteer
2 replies · 0 reposts · 17 likes