Nicklas Hansen

@ncklashansen

Followers: 2K · Following: 1K · Media: 51 · Statuses: 292

PhD candidate @UCSanDiego. Prev: @nvidia, @MetaAI, @UCBerkeley, @DTU_Compute. I like robots 🤖, plants 🪴, and they/them pronouns 🏳️‍🌈

San Diego, CA · Joined August 2018
Nicklas Hansen (@ncklashansen) · 1 year
🥳Excited to share: Hierarchical World Models as Visual Whole-Body Humanoid Controllers Joint work with @jyothir_s_v @vlad_is_ai @ylecun @xiaolonw @haosu_twitr Our method, Puppeteer, learns high-dim humanoid policies that look natural, in an entirely data-driven way! 🧵👇(1/n)
12 replies · 64 retweets · 379 likes
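For readers new to the setup, here is a minimal sketch of what a two-level "puppeteer" control loop looks like: a high-level agent plans abstract commands from vision, and a low-level policy tracks them at a higher frequency. Every name here (HighLevelPlanner, TrackingPolicy, the observation keys, the env interface) is an illustrative stand-in, not the actual Puppeteer API:

```python
# Sketch of a two-level "puppeteer" control loop (hypothetical names, not the
# actual Puppeteer API). A high-level world-model agent plans over abstract
# commands; a low-level tracking policy turns them into joint actions.
import numpy as np

class HighLevelPlanner:
    """Stand-in for a world-model agent that plans reference commands."""
    def plan(self, visual_obs):
        return np.zeros(8)   # e.g. target root/end-effector poses

class TrackingPolicy:
    """Stand-in for a low-level policy trained to track reference commands."""
    def act(self, proprio, command):
        return np.zeros(19)  # joint-level humanoid actions

def control_loop(env, planner, tracker, horizon=1000, replan_every=5):
    # assumes dict observations and an old-style 4-tuple step, for brevity
    obs = env.reset()
    command = planner.plan(obs["vision"])
    for t in range(horizon):
        if t % replan_every == 0:                      # plan at low frequency
            command = planner.plan(obs["vision"])
        action = tracker.act(obs["proprio"], command)  # track at high frequency
        obs, reward, done, info = env.step(action)
        if done:
            break
```

The split is the point: the planner can reason at a coarse timescale while the tracker keeps the high-dimensional humanoid stable between plans.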
NVIDIA Robotics (@NVIDIARobotics) · 8 months
See Nicklas Hansen's amazing work on robot perception, planning, and action in dynamic environments (NVIDIA Graduate Research Fellow and PhD candidate at @UCSanDiego). ✨🤖 #NationalRoboticsWeek Learn more about Nicklas's work. ➡️ https://t.co/1dXc4nRRfT
9 replies · 24 retweets · 127 likes
Nicklas Hansen (@ncklashansen) · 1 year
TD-MPC2 just got ~5x faster! The update is available in a separate "speedups" branch at the moment and will be merged into main soon. Special thanks to @VincentMoens for his help with this!
3 replies · 7 retweets · 83 likes
Remi Cadene (@RemiCadene) · 1 year
The third @LeRobotHF research presentation, on TD-MPC 1 and 2 by @ncklashansen, is available on YouTube: https://t.co/SxPSOMSkB5 Don't miss Nicklas' latest projects on https://t.co/dVhCjKL9Bo Project page: https://t.co/RKmdin4Sz2 Diffusion Policy paper: https://t.co/Y05CCF7uhN
1 reply · 10 retweets · 78 likes
Nicklas Hansen (@ncklashansen) · 1 year
New fun benchmark for RL researchers to stress test their algorithms! We release code + human driver data for imitation, and benchmark TD-MPC2 in a multi-track setting 🏎️💨
Quoted: Xiaolong Wang (@xiaolonw) · 1 year
Besides reading cool papers, my Twitter account is mostly used for catching up on @F1 news. Very excited about @LandoNorris's great performance recently. Now we combine Formula Racing with AI research: the following video shows we train a Reinforcement Learning policy to drive a …
2 replies · 6 retweets · 64 likes
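A sketch of what benchmarking an agent across multiple tracks could look like, assuming a gymnasium-style interface; the "Racing-v0" id, the track keyword, and the agent interface are hypothetical stand-ins, not the released benchmark's API:

```python
# Multi-track evaluation loop in the spirit of the benchmark described above.
# Environment id and agent interface are hypothetical; consult the released
# code for the real API.
import gymnasium as gym

def evaluate(agent, track_names, episodes=5):
    results = {}
    for track in track_names:
        returns = []
        for _ in range(episodes):
            env = gym.make("Racing-v0", track=track)  # hypothetical env id
            obs, info = env.reset()
            done, total = False, 0.0
            while not done:
                action = agent.act(obs)
                obs, reward, terminated, truncated, info = env.step(action)
                total += reward
                done = terminated or truncated
            returns.append(total)
            env.close()
        results[track] = sum(returns) / len(returns)
    return results
```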
Xiaolong Wang (@xiaolonw) · 1 year
Cannot believe this finally happened! Over the last 1.5 years, we have been developing a new LLM architecture, with linear complexity and expressive hidden states, for long-context modeling. The following plots show our model trained on Books scales better (from 125M to 1.3B) …
20 replies · 266 retweets · 2K likes
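The "expressive hidden states" appear to refer to test-time-training-style layers, where the recurrent hidden state is itself a small learnable model updated on the fly at every token, keeping per-token cost constant. A toy numpy sketch of that general idea; the self-supervised objective and update below are illustrative simplifications, not the paper's formulation:

```python
# Toy sketch: the hidden state is the weight matrix of a linear model that is
# "trained" with one gradient step per token, so total cost is linear in
# sequence length. Illustrative only, not the paper's exact layer.
import numpy as np

def ttt_style_layer(tokens, d, lr=0.1):
    W = np.zeros((d, d))          # hidden state = weights of a linear model
    outputs = []
    for x in tokens:              # one O(d^2) update per token -> linear time
        # toy self-supervised objective: reconstruct x from itself via W;
        # gradient of 0.5 * ||W x - x||^2 with respect to W is (W x - x) x^T
        grad = np.outer(W @ x - x, x)
        W -= lr * grad            # update the hidden state on this token
        outputs.append(W @ x)     # read out with the updated hidden state
    return np.stack(outputs)

seq = np.random.randn(16, 8)      # 16 tokens, dimension 8
out = ttt_style_layer(seq, d=8)
```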
Runyu Ding (@dngxngxng3) · 1 year
Introducing Bunny-VisionPro: Our system delivers immersive robot control with both visual and haptic feedback. Using VisionPro and low-cost finger cots with vibration motors, operators can control robots intuitively and immersively, similar to VR gaming. https://t.co/KjhEEvpg3u
Quoted: Xiaolong Wang (@xiaolonw) · 2 years
Tesla Optimus can arrange batteries in their factories; ours can do skincare (on @QinYuzhe)! We open-source Bunny-VisionPro, a teleoperation system for bimanual hand manipulation. Users can control the robot hands in real time using VisionPro, flexible like a bunny. 🐇
4 replies · 26 retweets · 100 likes
Nicklas Hansen (@ncklashansen) · 1 year
New work led by @imgeorgiev! We show that TD-MPC2 world models can be used as differentiable simulators in a multi-task setting, and it even beats an *actual* differentiable simulator. Super fun collaboration and the results are really promising!
Quoted: Ignat Georgiev (@imgeorgiev) · 1 year
🔔New Paper - PWM: Policy Learning with Large World Models Joint work with @VarunGiridhar3 @ncklashansen @animesh_garg PWM is a multi-task RL method which solves 80 tasks across different embodiments in <10m per task using world models and first-order gradient optimization🧵
0 replies · 2 retweets · 51 likes
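The core trick, as described above, is to treat the learned world model as a differentiable simulator and update the policy with first-order gradients of predicted return. A minimal PyTorch-style sketch under assumed interfaces; policy, world_model.encode/dynamics, and reward_model are placeholders, not the PWM code:

```python
# Sketch of first-order policy learning through a learned world model:
# roll the policy out inside the model and backpropagate predicted return.
import torch

def first_order_policy_update(policy, world_model, reward_model,
                              obs, horizon=16, lr=3e-4):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    state = world_model.encode(obs)
    total_reward = 0.0
    for _ in range(horizon):
        action = policy(state)
        # differentiable through the learned dynamics, unlike a black-box sim
        state = world_model.dynamics(state, action)
        total_reward = total_reward + reward_model(state, action)
    loss = -total_reward          # maximize predicted return
    opt.zero_grad()
    loss.backward()               # first-order gradients through the rollout
    opt.step()
```

One appeal over a hand-written differentiable simulator is that the learned model is smooth by construction, which can make these gradients better behaved.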
Xiaolong Wang (@xiaolonw) · 1 year
Introducing Open-TeleVision: https://t.co/tm4exWTXsL with fully autonomous policy video 👇. We can conduct a long-horizon task, inserting 12 cans nonstop without any interruptions. We offer: 🤖 Highly precise and smooth bimanual manipulation. 📺 Active egocentric vision (with …
3 replies · 32 retweets · 176 likes
Xuxin Cheng (@xuxin_cheng) · 1 year
Introducing Open-𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧🤖: We need an intuitive and remote teleoperation interface to collect more robot data. 𝐓𝐞𝐥𝐞𝐕𝐢𝐬𝐢𝐨𝐧 lets you immersively operate a robot even if you are 3000 miles away, like in the movie 𝘈𝘷𝘢𝘵𝘢𝘳. Open-sourced!
47 replies · 229 retweets · 1K likes
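Conceptually, a teleoperation loop like the one described here streams the operator's head and hand poses to the robot and stereo video back to the headset. A rough sketch with wholly hypothetical device and robot interfaces; none of these calls are the Open-TeleVision API:

```python
# Conceptual VR teleoperation loop. All device/robot interfaces below are
# hypothetical placeholders; the real Open-TeleVision implementation differs.
import time

def retarget(hand_poses):
    """Placeholder: map human hand keypoints to robot joint targets."""
    return hand_poses

def teleop_loop(headset, robot, hz=60):
    period = 1.0 / hz
    while True:
        head_pose = headset.get_head_pose()         # operator's viewpoint
        hand_poses = headset.get_hand_poses()       # wrist + finger keypoints
        robot.set_camera_orientation(head_pose)     # active egocentric vision
        robot.command_arms(retarget(hand_poses))    # map human hands to robot
        headset.display(robot.get_stereo_frames())  # stream video back to VR
        time.sleep(period)
```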
Remi Cadene (@RemiCadene) · 1 year
Extremely lucky to host @ncklashansen next Wednesday at our paper reading group. It's open to anyone ;) He will present his powerful method for Reinforcement Learning on real and sim robots. It's based on a World Model. So advances in this space, like Prism-1 by @wayve_ai, can …
Quoted: Alexander Soare (@asoare159) · 1 year
For our next #LeRobot Paper Discussion, @ncklashansen will present his first-authored TD-MPC, and beyond. The event starts at 4:30 PM GMT on 26 June and will run for 1 hr. Join us then at https://t.co/jD77hjAf4u 🤗
0 replies · 1 retweet · 18 likes
Nicklas Hansen (@ncklashansen) · 1 year
Stop by to learn more about our latest work on world models!
Quoted: Alexander Soare's #LeRobot Paper Discussion announcement (same tweet as above).
0 replies · 1 retweet · 11 likes
Nicklas Hansen (@ncklashansen) · 1 year
I'm giving a talk about our work on world models this Thursday; the Zoom link is on the website for anyone interested!
0 replies · 4 retweets · 27 likes
Ignat Georgiev (@imgeorgiev) · 1 year
We have a new ICML paper! Adaptive Horizon Actor Critic (AHAC). Joint work with @krishpopdesu @xujie7979 @eric_heiden @animesh_garg AHAC is a first-order model-based RL algorithm that learns high-dimensional tasks in minutes and outperforms PPO by 40%. 🧵(1/4)
4 replies · 66 retweets · 371 likes
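For intuition about "first-order" with an adaptive horizon: the policy is updated by backpropagating through a truncated differentiable-simulation rollout, with the truncation point chosen adaptively. A heavily hedged sketch; the truncation rule below is a made-up stand-in, not AHAC's actual criterion, and diff_sim is a placeholder differentiable simulator:

```python
# Sketch of a truncated differentiable rollout: cut the rollout early when
# the dynamics turn stiff (e.g. around contacts), where first-order gradients
# become unreliable. The stopping rule here is illustrative only.
import torch

def truncated_rollout_loss(policy, diff_sim, state, max_horizon=32,
                           state_norm_limit=1e3):
    total_reward = 0.0
    for _ in range(max_horizon):
        action = policy(state)
        state, reward = diff_sim.step(state, action)  # differentiable step
        total_reward = total_reward + reward
        # toy truncation rule: stop when the state norm blows up, a crude
        # proxy for stiff or contact-heavy dynamics
        if state.detach().norm() > state_norm_limit:
            break
    return -total_reward  # minimizing this maximizes return over the horizon
```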
An-Chieh Cheng (@anjjei) · 1 year
🌟Introducing "🤖SpatialRGPT: Grounded Spatial Reasoning in Vision Language Model" https://t.co/Yaj2K91Yx8 SpatialRGPT is a powerful region-level VLM that can understand both 2D and 3D spatial arrangements. It can process any region proposal (e.g., boxes or masks) and provide …
11 replies · 111 retweets · 470 likes
Nicklas Hansen (@ncklashansen) · 1 year
Will 2024 finally be the year that I move on from gym==0.21?
3 replies · 0 retweets · 18 likes
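For anyone stuck on the same pin: the breaking change between gym==0.21 and today's Gymnasium is the reset/step signature. reset() now returns (obs, info), and step() splits the single done flag into terminated and truncated:

```python
# Old gym: obs = env.reset(); obs, reward, done, info = env.step(action)
# Gymnasium: reset() returns (obs, info); step() returns a 5-tuple.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)            # was: obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    # was: obs, reward, done, info = env.step(action)
    done = terminated or truncated
env.close()
```

Gymnasium also provides compatibility wrappers for old-style environments, which can ease the transition for codebases still pinned to the 4-tuple API.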
Yann LeCun (@ylecun) · 1 year
Puppeteer: hierarchical world model for humanoid control.
Quoted: Nicklas Hansen's Puppeteer announcement (same tweet as at the top of the timeline).
18 replies · 37 retweets · 480 likes
Tongzhou Mu 🤖🦾🦿 (@tongzhou_mu) · 1 year
High-level "puppeteer" + low-level "tracking" models = Visual Whole-Body Control with Natural Behaviors. 🔥Do check it out if you are interested in humanoid control!
Quoted: Nicklas Hansen's Puppeteer announcement (same tweet as above).
0 replies · 2 retweets · 17 likes
Xiaolong Wang (@xiaolonw) · 1 year
TD-MPC2 is now applied to visual whole-body control for humanoids! Our hierarchical TD-MPC2 generates more natural motion with skill imitation, but also look at how funny the raw TD-MPC2 results are.
Quoted: Nicklas Hansen's Puppeteer announcement (same tweet as above).
0 replies · 3 retweets · 35 likes
Yuzhe Qin (@QinYuzhe) · 1 year
Exciting new work from Nicklas @ncklashansen extending model-based RL to hierarchical settings. We can expect even more powerful capabilities from model-based approaches.
Quoted: Nicklas Hansen's Puppeteer announcement (same tweet as above).
0 replies · 6 retweets · 22 likes
Nicklas Hansen (@ncklashansen) · 1 year
Both method and environment code are available at https://t.co/VOE4JFjxxk, so if you're interested in RL or humanoids, be sure to check it out 🤖 It takes ~4 days to train a hierarchical world model with visual inputs on a single RTX 3090 GPU. (7/n)
Card: github.com · Code for "Hierarchical World Models as Visual Whole-Body Humanoid Controllers" - nicklashansen/puppeteer
2 replies · 0 retweets · 17 likes