Zhengyi “Zen” Luo

@zhengyiluo

Followers
1,500
Following
677
Media
99
Statuses
388

PhD student in Robotics/CS @CMU_Robotics | Vision, Robotics, AR & VR | Visiting @RealityLabs , ex-intern @NvidiaAI , @apple , co-creator of @CirkitDesign

Pittsburgh, PA
Joined April 2014
Pinned Tweet
@zhengyiluo
Zhengyi “Zen” Luo
2 months
#ICLR2024 Spotlight🌟🌟🌟 PULSE: Physics-based Universal humanoid motion Latent SpacE/Representation Code: Site: Paper: As a motion representation, PULSE is low-dimensional (32) and high-coverage (99.8% of
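The pinned tweet above frames PULSE as a 32-dimensional latent space that downstream tasks reuse. A minimal sketch of that interface only — a frozen decoder turning sampled latents into joint actions. Every name and shape here except the latent size is a made-up placeholder, not PULSE's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 32  # PULSE's latent dimension, from the tweet

class FrozenDecoder:
    """Hypothetical stand-in for PULSE's pretrained decoder: maps a
    sampled latent z to low-level joint actions. The real decoder is
    state-conditioned and learned; this only shows the interface idea."""
    def __init__(self, action_dim=69):  # 69 = 23 SMPL joints x 3 (an assumption)
        self.W = rng.standard_normal((action_dim, LATENT_DIM)) * 0.1

    def act(self, z):
        return np.tanh(self.W @ z)  # bounded joint targets

decoder = FrozenDecoder()
# Downstream task policies output 32-d latents instead of raw joint actions;
# per the thread, random latents already decode to coherent motion.
z = rng.standard_normal(LATENT_DIM)
action = decoder.act(z)
```

The design point the tweets emphasize is that the task policy's action space shrinks from ~69 joint targets to 32 latents, and exploration happens inside a space of human-like motions.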
@zhengyiluo
Zhengyi “Zen” Luo
7 months
Now that PHC's code is out... Introducing PULSE: Physics-based Universal motion Latent SpacE: 📜: 🌐: All downstream tasks here use the same pretrained latent space. (1/6)
4
37
150
3
50
175
@zhengyiluo
Zhengyi “Zen” Luo
5 months
You can now ask your simulated humanoid to perform actions, in REAL-TIME 👇🏻 Powered by the amazing EMDM ( @frankzydou , @Alex_wangjingbo , etal) and PHC. EMDM: PHC: Simulation: Isaac Gym
5
56
259
@zhengyiluo
Zhengyi “Zen” Luo
3 months
🤔 Ever wondered if simulation-based animation/avatar learnings can be applied to real humanoid in real-time? 🤖 Introducing H2O (Human2HumanOid): - 🧠 An RL-based human-to-humanoid real-time whole-body teleoperation framework - 💃 Scalable retargeting and training using large
4
55
253
@zhengyiluo
Zhengyi “Zen” Luo
10 months
PHC has been accepted by ICCV 2023! We aim to develop a physics-based humanoid controller capable of imitating ALL of the motion from the AMASS dataset (almost there🧐), recovering from failure states, and using NO external forces, all while supporting real-time use cases!
@_akhaliq
AK
1 year
Perpetual Humanoid Control for Real-time Simulated Avatars abs: paper page: project page:
0
36
178
5
22
141
@zhengyiluo
Zhengyi “Zen” Luo
6 months
Simulated humanoid now learns how to handle a basketball🏀🏀🏀! New work, PhysHOI, led by @NliGjvJbycSeD6t , learns dynamic human objects (basketballs, grabbing, etc). Site🌐: Paper📄: Code/Data🧑🏻‍💻: (coming
0
29
137
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Releasing the Universal Humanoid Controller (UHC) that has been the backbone of many of our physics human pose estimation efforts! Helped by Residual Force Control, UHC can imitate up to 97% of the AMASS dataset using one policy!
2
24
134
@zhengyiluo
Zhengyi “Zen” Luo
4 months
Porting the Perpetual Humanoid Controller (PHC) to MuJoCo for motion imitation, 70% done. I replicated PHC's state space from Isaac Gym in MuJoCo; it has now reached a ~98% success rate with a single network (no PNN yet). Code (work in progress):
3
13
118
@zhengyiluo
Zhengyi “Zen” Luo
5 months
One thing that amazes me about PULSE is that random samples from the latent space can lead to natural motion that is better than anything I have trained for (e.g. a walking task in PACER), even when I use PULSE's latent space or AMP. So what is the problem? The task reward?
2
15
112
@zhengyiluo
Zhengyi “Zen” Luo
3 months
In preparation for PULSE's code release: Releasing PHC+, a motion imitation model that has learned ALL of the training data (11,313 AMASS sequences). Available in PHC's codebase 👇🏻
3
23
104
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Webcam demo for PHC is now live. Check it out! Testing on a 3080; a different GPU might yield different results 😃 👇🏻 is a screen recording of a real-time test.
1
20
102
@zhengyiluo
Zhengyi “Zen” Luo
7 months
Code for PHC has been released!!! The codebase includes: - the SMPL humanoid environment for Isaac Gym - motion imitation models trained on AMASS - more to come: demos based on language & video input
3
21
93
@zhengyiluo
Zhengyi “Zen” Luo
8 months
One policy to learn them all 👀
3
1
91
@zhengyiluo
Zhengyi “Zen” Luo
2 months
I am new to real-world robotics; hope this is not a common occurrence😂
8
5
87
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Language to humanoid control demo, powered by MDM and PHC is out 👀👀👀 Check it out at PHC's repo: (EMDM support once the official code comes out). Simulation runs in real-time, and this demo is as fast as MDM can generate the motion Also added a way
2
11
89
@zhengyiluo
Zhengyi “Zen” Luo
4 months
Introducing SMPL_Sim, a codebase meant to be a **minimal** example of setting up a SMPL humanoid in MuJoCo and Isaac Gym, and it can be pip-installed: It now supports three simple tasks (reach, speed, and getup); work in progress.
4
17
87
@zhengyiluo
Zhengyi “Zen” Luo
7 months
Code for Trace & Pace is now out🎉🎉🎉: Trace: - diffusion-based trajectory planner/forecaster Pacer🚶: - physics-based trajectory follower
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Trace & Pace is live and will be presented at @CVPR 2023! Project page: See full thread 👇🏻
1
6
30
1
11
79
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Code and models for PhysHOI now available👇🏼🎉🎉
@NliGjvJbycSeD6t
Yinhuai
5 months
Tired of designing task rewards? New work, PhysHOI, enables simulated humanoids to learn diverse basketball skills 🏀 purely from human demonstrations. Code available now! Site: Paper: Code/Data:
3
42
197
0
10
79
@zhengyiluo
Zhengyi “Zen” Luo
4 months
Coming to PHC in 2024 -- hands!
5
4
74
@zhengyiluo
Zhengyi “Zen” Luo
1 month
Reminds me of this🤣 Can we simulate the new atlas accurately? Can’t wait
@BostonDynamics
Boston Dynamics
1 month
We promise this is not a person in a bodysuit.
3K
11K
56K
1
10
75
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Merry Christmas!!!🎄 To boost your holiday spirit, here is me (trying to) dance while controlling a simulated humanoid with a Quest 2 as the input device (SLAM cameras and headset tracking).
0
13
74
@zhengyiluo
Zhengyi “Zen” Luo
8 months
So I was trying to cite ReLU and this is the first thing that pops up: , with 3000+ citations 😂 Folks, this is NOT the ReLU paper! A number of popular, well-known papers have made this mistake, it seems... (There is even this website "how to cite relu":
7
8
69
@zhengyiluo
Zhengyi “Zen” Luo
2 months
Early footage of motion imitation for the H1 humanoid developed in the PHC codebase. Spoiler: these motions do not transfer to the real robot (as of today; not sure if ever).
3
3
63
@zhengyiluo
Zhengyi “Zen” Luo
8 months
There is trippy and there is human motion created by a generative physics-based humanoid controller 👀 🕺🏻💃🏻
2
6
56
@zhengyiluo
Zhengyi “Zen” Luo
5 months
While I am at it, also releasing the trained models for VR controller tracking using PHC. This task is essentially a generalized version of motion imitation, where there are only 3 six-DOF points to track (red dots) instead of 24. Models are released at
2
10
55
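The tweet above frames VR-controller tracking as motion imitation with 3 six-DOF targets instead of 24. A toy illustration of what such a sparse tracking observation might look like; the body indices and layout are guesses, not PHC's actual observation format:

```python
import numpy as np

N_BODIES = 24          # full-body imitation tracks all 24 bodies
TRACKED = [0, 20, 21]  # e.g. headset + two controllers (hypothetical indices)

def sparse_tracking_obs(body_pos, body_rot6d):
    """Keep only the 3 tracked points; each contributes a 6-DOF target
    (3-D position + a 6-D continuous rotation here, 9 numbers per point)."""
    pos = body_pos[TRACKED]    # (3, 3)
    rot = body_rot6d[TRACKED]  # (3, 6)
    return np.concatenate([pos, rot], axis=1).ravel()

obs = sparse_tracking_obs(np.zeros((N_BODIES, 3)), np.zeros((N_BODIES, 6)))
```

The same policy interface then covers both extremes: full-body imitation (all 24 points tracked) and VR tracking (only 3), which is why the tweet calls the latter a generalized version of the former.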
@zhengyiluo
Zhengyi “Zen” Luo
11 months
Come and check out Trace & Pace at Poster 134 this afternoon at #CVPR ! This work aims to create physically realistic pedestrian trajectories and animation: @davrempe will be there to answer all your questions! (I can't be there due to visa reasons🫠)
0
10
48
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Day 355 of trying to teach a robot how to walk properly. It has finally lost it.
8
3
43
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Happy New Year 🥳🥳🥳 Here I am, back to one of the most important steps in building models: making sure the data is clean😂 👇🏻 a kinematic MoCap sequence playback. Please let me know if you have better ways to find them; right now I have some crude velocity filtering and visually
4
5
42
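The "crude velocity filtering" mentioned above can be as simple as flagging frames whose finite-difference joint speed exceeds a physically plausible bound. A rough sketch of that idea; the threshold, frame rate, and joint count are all made-up values, not the author's actual pipeline:

```python
import numpy as np

def flag_spikes(joint_pos, fps=30.0, max_speed=10.0):
    """joint_pos: (T, J, 3) mocap joint positions in meters.
    Returns a boolean (T,) mask marking frames whose peak joint speed
    (finite difference) exceeds max_speed m/s -- likely corrupted."""
    vel = np.diff(joint_pos, axis=0) * fps            # (T-1, J, 3) velocities
    speed = np.linalg.norm(vel, axis=-1).max(axis=1)  # (T-1,) peak speed per step
    bad = np.zeros(len(joint_pos), dtype=bool)
    bad[1:] = speed > max_speed                       # frame t flagged via the step into t
    return bad

# A sequence with one teleporting frame gets flagged:
seq = np.zeros((5, 24, 3))
seq[3, 0, 0] = 5.0  # a 5 m jump within one frame at 30 fps -> 150 m/s
mask = flag_spikes(seq)
```

This catches teleports and tracking glitches but not slow drift or foot skating, which is presumably why the tweet pairs it with visual inspection.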
@zhengyiluo
Zhengyi “Zen” Luo
5 months
@DrJimFan Gaming leads to GPUs, which then led to the AI boom in 2012, and now provides virtual worlds for AIs to learn in. Damn, gaming really is one hell of an AI accelerator
4
4
42
@zhengyiluo
Zhengyi “Zen” Luo
9 months
Building the video demo part of PHC: controlling simulated humanoid in a real-time, streaming fashion
3
4
40
@zhengyiluo
Zhengyi “Zen” Luo
11 months
Doing some code release over the weekend for Embodied Pose: Added an in-the-wild demo as well. Works decently well, surprisingly. Capturing the global motion relatively well and missing some details (considering it's only trained on synthetic 2D key
0
3
37
@zhengyiluo
Zhengyi “Zen” Luo
9 months
Have I mentioned that PHC can support multi-person interactions? Thanks to Isaac's parallel simulation, PHC can control multiple interacting humanoids out of the box. This nice fencing sequence is from @Me_Rawal 's EgoHumans dataset:
1
6
37
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Recording the CVPR video knowing I might not be able to go since haven’t heard back from my Canada visa application (with booked hotels and flights🥲) @CVPR 🙏🙏🙏
5
0
34
@zhengyiluo
Zhengyi “Zen” Luo
3 years
Excited to share that our paper "Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation" has been accepted to NeurIPS 2021!!! Thanks to the team @RHachiuma @KhrylxYe @kkitani #NeurIPS2021 Code and data will be released at:
1
10
32
@zhengyiluo
Zhengyi “Zen” Luo
15 days
Really hope I am at ICLR rn 🫠 (sigh in visa hell) PULSE will be posted at Halle B #160 , Wed 8 May 4:30 p.m. CEST — 6:30 p.m. as Spotlight Poster🌟 You can also check out the poster here 👇 PULSE has been immensely useful for many tasks we are working on🤫, stay tuned for more
0
9
32
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Trace & Pace is live and will be presented at @CVPR 2023! Project page: See full thread 👇🏻
@davrempe
Davis Rempe
1 year
Excited to announce that our work “Trace & Pace” will be presented @CVPR2023 . We combine guided trajectory diffusion with a physics-based humanoid controller to enable pedestrian animation that is controllable by a user. Project page: 1/5
6
87
321
1
6
30
@zhengyiluo
Zhengyi “Zen” Luo
2 years
Presenting physics-based 3D human pose and human-object interaction estimation at #NeurIPS2021 (D3 vision-1) now (11:30 AM - 1:00 PM ET)! Paper and code at:
0
10
28
@zhengyiluo
Zhengyi “Zen” Luo
1 year
I will be presenting our new work: “Embodied Scene-aware Human Pose Estimation” at #NeurIPS2022 Thursday Poster 900. In this work, we use third person video🎥, proprioception🕺, and scene information🪑 to drive an embodied agent for pose estimation. 1/5
1
8
29
@zhengyiluo
Zhengyi “Zen” Luo
1 month
Crazy results🔥 On the other hand, all the mocap cameras and markers keep reminding me that it’s not yet possible to get this to work onboard with egocentric vision & sensors 😢 Long way to go💪
@GoogleDeepMind
Google DeepMind
1 month
Soccer players have to master a range of dynamic skills, from turning and kicking to chasing a ball. How could robots do the same? ⚽ We trained our AI agents to demonstrate a range of agile behaviors using reinforcement learning. Here’s how. 🧵
119
538
3K
2
1
28
@zhengyiluo
Zhengyi “Zen” Luo
27 days
Robotics saving AR/VR😤
@xuxin_cheng
Xuxin Cheng
27 days
 🤖Introducing 📺𝗢𝗽𝗲𝗻-𝗧𝗲𝗹𝗲𝗩𝗶𝘀𝗶𝗼𝗻: a web-based teleoperation software!  🌐Open source, cross-platform (VisionPro & Quest) with real-time stereo vision feedback.  🕹️Easy-to-use hand, wrist, head pose streaming. Code:
12
82
352
1
5
27
@zhengyiluo
Zhengyi “Zen” Luo
8 months
PHC will be presented at #ICCV2023 📍Location: Room "Foyer Sud" - 101 ⏰ Thursday 5th 10:30 AM-12:30 PM 🆔: 1900 Humanoid control for avatars, motion imitation on AMASS, and fall-state recovery during imitation. @jinkuncao will be there to present🎉🎉🎉 (no visa for me🥲)
1
6
25
@zhengyiluo
Zhengyi “Zen” Luo
3 months
PACER+, coming in hot 🔥🔥🔥
@Alex_wangjingbo
Jingbo Wang
3 months
The last work during the pursuit of my PhD degree has been accepted by CVPR 2024. So happy I don’t need to resubmit it to other conferences : )
4
2
55
1
2
24
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Routine funny humanoid motion post👇🏻 Task: get up from the ground and reach the red-dot pelvis location. Environment: MuJoCo 3.0.1 Humanoid: SMPL Humanoid Algorithm: vanilla PPO with task reward, nothing else.
3
4
22
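The tweet above says the policy sees nothing but "vanilla PPO with task reward". One plausible shape for such a reach reward is an exponentially decaying distance term; this is a generic guess at the form, not the actual reward code:

```python
import numpy as np

def reach_reward(pelvis_pos, target_pos, k=2.0):
    """Dense task reward: 1.0 at the red-dot target, decaying with
    distance. With no style or imitation term, vanilla PPO is free to
    discover whatever (often hilarious) getup strategy maximizes it."""
    d = np.linalg.norm(pelvis_pos - target_pos)
    return float(np.exp(-k * d))

r_far = reach_reward(np.zeros(3), np.array([3.0, 0.0, 0.0]))
r_near = reach_reward(np.zeros(3), np.array([0.1, 0.0, 0.0]))
```

Because nothing in such a reward says "look human", the optimizer happily produces the unnatural-but-effective motions the clip shows — exactly the gap that imitation or latent-space priors like PULSE are meant to close.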
@zhengyiluo
Zhengyi “Zen” Luo
3 months
That’s some robust motion!!! 🔥🤖🦾congrats!!! @xuxin_cheng @JiYandong @xiaolonw
@xiaolonw
Xiaolong Wang
3 months
Let’s think about humanoid robots outside carrying the box. How about having the humanoid come out the door, interact with humans, and even dance? Introducing Expressive Whole-Body Control for Humanoid Robots: See how our robot performs rich, diverse,
94
214
1K
1
3
19
@zhengyiluo
Zhengyi “Zen” Luo
4 months
@HarryXu12 The SMPL/SMPLx humanoid would fit perfectly. With the codebase I wish to simplify the process to work with them + hands! Using the smplx humanoid would give access to lots of existing mocap with fingers! Works in both Isaac Gym and MuJoCo
1
3
19
@zhengyiluo
Zhengyi “Zen” Luo
10 months
Demo building for the language -> simulation part of PHC
0
2
18
@zhengyiluo
Zhengyi “Zen” Luo
10 months
Found this video from 2021: trying to train a policy to enact the sequences from the GRAB dataset with a humanoid. Fair to say things unraveled pretty quickly 👀
2
1
18
@zhengyiluo
Zhengyi “Zen” Luo
3 months
While working on PULSE, I found out that you CAN train a single MLP to reach a very, very high imitation success rate on AMASS with the right training procedure. Basically, no MCP/MoE is needed as long as you train it for long enough...
3
1
17
@zhengyiluo
Zhengyi “Zen” Luo
1 year
@soumithchintala @NumFOCUS Can’t imagine AI today without numpy/jupyter/…!
0
0
18
@zhengyiluo
Zhengyi “Zen” Luo
3 years
Introducing “Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation”! From just a front-facing video, we control a simulated character to recover physically plausible global pose and human-object interaction: (1/3)
1
6
16
@zhengyiluo
Zhengyi “Zen” Luo
4 years
Our work, 3D Human Motion Estimation via Motion Compression and Refinement, has been accepted to ACCV 2020 (Oral)! We focus on extracting stable and natural-looking human motion: Check out our demo: (1/2)
2
5
16
@zhengyiluo
Zhengyi “Zen” Luo
1 year
MeTRAbs and its follow-up are *the* best pose/keypoint estimators I have used; completely blew my mind, and it runs in real time. I recommend it constantly to people these days. (Just hope it can one day be in PyTorch🥲)
@Istvan_Sarandi
István Sárándi
1 year
En route to #WACV2023 ! I'll present a paper on extreme multi-dataset learning of 3D human pose estimation when labels have different skeleton formats. Paper: Project page:
4
20
149
2
2
14
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Wow that’s a lot of iPads🤯 #NeurIPS22
0
0
14
@zhengyiluo
Zhengyi “Zen” Luo
10 months
Task: reach certain speed; result: 👇🏼
1
1
12
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Happy 2023! Trying to get started on shrinking down my enormous paper reading list and improving some tooling: The new Stage Manager on iPadOS is fantastic for reading & note-taking, allowing apps to reside in the upper and lower 2/3 of the screen and quickly switch between them!
1
0
12
@zhengyiluo
Zhengyi “Zen” Luo
10 days
a humanoid priced close to a fully spec’d-out Mac Pro😂
@UnitreeRobotics
Unitree
10 days
Unitree Introducing | Unitree G1 Humanoid Agent | AI Avatar Price from $16K 🤩 Unlock unlimited sports potential(Extra large joint movement angle, 23~34 joints) Force control of dexterous hands, manipulation of all things Imitation & reinforcement learning driven #Unitree #AI
310
740
3K
0
0
13
@zhengyiluo
Zhengyi “Zen” Luo
7 months
Gotta try this; be right back. Hope JAX doesn’t have too steep of a learning curve
@GoogleDeepMind
Google DeepMind
7 months
Introducing MuJoCo 3.0: a major new release of our fast, powerful and open source tool for robotics research. 🤖 📈 GPU & TPU acceleration through #JAX 🖼️ Better simulation of more diverse objects - like clothes, screws, gears and donuts 💡 Find out more:
18
306
1K
1
0
12
@zhengyiluo
Zhengyi “Zen” Luo
3 months
Gosh. What's the secret sauce... I gots to know
@OpenAI
OpenAI
3 months
Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. Prompt: “Beautiful, snowy
10K
33K
140K
3
0
10
@zhengyiluo
Zhengyi “Zen” Luo
2 years
Another day another chance to recommend Paperpile @paperpile as the best paper reading and management tool🤩 its web importer on chrome and iOS are especially wicked. iPad app is also fantastic.
1
2
11
@zhengyiluo
Zhengyi “Zen” Luo
5 months
I can’t even do some of those motions that well in simulation yet 😭😭😭 (e.g. the egg part)
@Tesla_Optimus
Tesla Optimus
5 months
There’s a new bot in town 🤖 Check this out (until the very end)!
3K
7K
31K
1
0
10
@zhengyiluo
Zhengyi “Zen” Luo
1 year
I need this poster tech—no more tubes! Emailing the authors right now…
1
0
9
@zhengyiluo
Zhengyi “Zen” Luo
5 months
Here is some more
2
2
9
@zhengyiluo
Zhengyi “Zen” Luo
5 months
I have wanted to achieve this effect for a long time; now that diffusion models have reached ~0.1s generation speed 🙀 and the humanoid controller is robust enough to deal with imperfect transitions and noisy input, it has finally become possible.
4
2
8
@zhengyiluo
Zhengyi “Zen” Luo
2 months
Expert Data 😤
@ego4_d
Ego4D
2 months
The full release of Ego-Exo4D is out! 1.3k hours of first and third-person videos + the world's largest source of egocentric body/hand pose estimates and video segmentation masks. Gaze, trajectories, and point clouds in 99% of data. Access below 👇
4
28
118
0
2
7
@zhengyiluo
Zhengyi “Zen” Luo
1 year
...UHC supports the SMPL/SMPL-H/SMPL-X human bodies, and simulates in real time in MuJoCo. Here are some of the pose estimation efforts using UHC:
0
1
8
@zhengyiluo
Zhengyi “Zen” Luo
4 months
Exciting stuff from @HaoyuXiong1 ! Learning stuff online will be huge!
@Haoyu_Xiong_
Haoyu Xiong
4 months
Introducing Open-World Mobile Manipulation 🦾🌍 – A full-stack approach for operating articulated objects in open-ended unstructured environments: Unlocking doors with lever handles/ round knobs/ spring-loaded hinges 🔓🚪 Opening cabinets, drawers, and refrigerators 🗄️ 👇
30
103
782
0
2
6
@zhengyiluo
Zhengyi “Zen” Luo
13 days
So…… where can I buy a comfortable strap for Vision Pro? Been waiting for this for a long time (was an intern at Apple in 2018) and would really love to use this for work. ….. but my experience is basically: At first: wow this is so cool so useful 5 mins later: get this
8
0
7
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Today I look at my one-year-old code and wonder: how did I get this to work? This looks pretty impossible 🫠
0
0
7
@zhengyiluo
Zhengyi “Zen” Luo
1 year
@madiator Are you sure it’s not a bug?
0
0
7
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Thanks to @xbpeng4 @KhrylxYe @orlitany and @FidlerSanja for a wonderful summer at @NVIDIAAI , and special thanks to @davrempe for carrying me as co-lead through the process!🙏🏻 Learned so much🙏🏻 Here are some funny failure cases: 😁
0
1
6
@zhengyiluo
Zhengyi “Zen” Luo
2 years
Wow, I guess I am verified now?😂 (had Twitter Blue before, so it's just an increase in price)
0
0
7
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Trying the ChatGPT Siri shortcut from Already a pretty good experience even if it can only answer in text (no iOS integration), especially when driving!! When I asked about Joel and Ellie in #LastOfUsHBO , the text to speech could
1
0
6
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Scrambling to try to get my code released before going to #NeurIPS2022
0
0
5
@zhengyiluo
Zhengyi “Zen” Luo
2 years
Professor Mintz was my academic advisor at @Penn and each semester I schedule a session to receive some "tough love" -- unfiltered evaluation of my academic decisions. His advice and wisdom still ring true in my ears--"Learn more Math Zen! More Math!" RIP Professor Mintz.
@LiamDugan_
Liam Dugan
2 years
Prof. Mintz gave the very first lecture I ever attended at @Penn . He tried teaching quantum computing to incoming freshmen — during orientation! Needless to say many pieces of chalk were broken that day. Couldn’t have asked for a better intro to Penn. RIP to a great professor
0
0
8
0
0
5
@zhengyiluo
Zhengyi “Zen” Luo
4 months
My introduction to AR/VR was the Google Cardboard and Hololens in 2015/16. It's finally starting. I gotta find a way to try this...
@Casey
Casey Neistat
4 months
Vision Pro isn't just great, it's the single greatest piece of tech ive ever used
2K
5K
46K
1
1
4
@zhengyiluo
Zhengyi “Zen” Luo
7 months
Bonus question: could this lead to a foundation model for humanoid control? PULSE can randomly generate motion from noise and be trained to perform different tasks using a sampler. So.....?
1
0
5
@zhengyiluo
Zhengyi “Zen” Luo
7 months
@Michael_J_Black Thanks Michael!!! Your encouragement means a lot 🙏🙏🙏Thanks to SMPL and AMASS!
0
0
2
@zhengyiluo
Zhengyi “Zen” Luo
1 year
@xbpeng4 Can do! Couldn't really find a spin kick like the one in DeepMimic; how about a "spin and kick" plus some cartwheeling? You can see the residual force working extra hard on these haha
1
0
5
@zhengyiluo
Zhengyi “Zen” Luo
2 years
Three way mouse & keyboard sharing between Mac, iPad Pro, and Linux! (Powered by Synergy Pro + Universal Control)
0
0
5
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Guys it's happening... I am using Bing & Edge🤯
0
0
5
@zhengyiluo
Zhengyi “Zen” Luo
2 months
To check out what is transferable to the real robot, check out our work H2O: Thread:
@zhengyiluo
Zhengyi “Zen” Luo
3 months
🤔 Ever wondered if simulation-based animation/avatar learnings can be applied to real humanoid in real-time? 🤖 Introducing H2O (Human2HumanOid): - 🧠 An RL-based human-to-humanoid real-time whole-body teleoperation framework - 💃 Scalable retargeting and training using large
4
55
253
0
1
4
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Using TRAjectory Diffusion Model for Controllable PEdestrians (TRACE) by @davrempe as the trajectory planner, we enable a large crowd simulation framework.
1
0
4
@zhengyiluo
Zhengyi “Zen” Luo
5 months
@peabody124 @DrLaschowski @rlfromlux Thanks for inviting! It was a great time sharing my work. Bio-mechanics humanoids from @MyoSuite is the real deal. Exciting times.
0
0
4
@zhengyiluo
Zhengyi “Zen” Luo
2 years
@soumithchintala A very simple one I made. It streams the VIO + video result through websocket and MJPEG to a host machine server (made in Python + aiohttp). I was able to use it to collect (sort of) head-mounted AR data for an egocentric pose estimation project. Like this:
0
1
3
@zhengyiluo
Zhengyi “Zen” Luo
7 months
@DrJimFan I somehow would still prefer the glasses form factor; it would seem more natural to interact with and not be an additional device to carry around. Glasses with a similar UX, even the same laser display on the hand, would be pretty cool.
0
0
4
@zhengyiluo
Zhengyi “Zen” Luo
3 months
PHC+ bumps up the model size, refines the hard negative mining process (the most important part), and trains for a LONG time. The released models are a little imperfect (e.g. the rotation + keypoint model doesn't fully show the natural walk-back behavior) due to time constraints
1
0
2
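The "hard negative mining" credited above as the most important part can be sketched as failure-weighted sequence sampling: replay the clips the imitator currently fails far more often than the ones it has mastered. This is a generic sketch of the technique, not PHC+'s actual procedure; all names and numbers are made up:

```python
import random

def sample_batch(success_rate, seq_ids, batch_size=4, rng=None):
    """Hard-negative-mining sketch: weight each motion clip by its
    current failure rate, so training concentrates on clips the policy
    still fails instead of sampling the 11k AMASS sequences uniformly."""
    rng = rng or random.Random(0)
    weights = [1.0 - success_rate[s] + 1e-3 for s in seq_ids]  # epsilon keeps mastered clips alive
    return rng.choices(seq_ids, weights=weights, k=batch_size)

# Clips the policy fails on ("backflip") dominate the training batches:
success = {"walk": 0.99, "backflip": 0.10, "cartwheel": 0.30}
batch = sample_batch(success, list(success))
```

Periodically re-evaluating the success rates and resampling is what lets one network eventually cover the entire training set, per the "train it for long enough" observation elsewhere in the feed.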
@zhengyiluo
Zhengyi “Zen” Luo
7 months
3. The motor skills from the latent space should be able to extrapolate to unseen scenarios. Here we show that a policy trained using PULSE can handle complex terrain traversal with human-like behavior, using only a trajectory-following reward (no additional adversarial reward). (4/6)
1
1
4
@zhengyiluo
Zhengyi “Zen” Luo
7 months
The key features of PULSE: 1. Once the latent space is learned, randomly sampled latents create stable and human-like behavior (instead of random jitters) -- better downstream exploration. Here we visualize training for "reach" and "move forward with speed" tasks. (2/6)
1
0
4
@zhengyiluo
Zhengyi “Zen” Luo
1 year
@Istvan_Sarandi 😍😍😍looking forward to it!!! Can’t imagine how much work it is to process all those datasets.
0
0
4
@zhengyiluo
Zhengyi “Zen” Luo
3 years
@2plus2make5 Works in Overleaf pretty well, and the basic function (spell check) is also free, like Grammarly.
1
0
4
@zhengyiluo
Zhengyi “Zen” Luo
3 years
🔥🔥🔥
@ericaxweng
Erica Weng
3 years
PhD application season yet again... I wrote a SoP how-to-guide / template for anyone who doesn't know where to start. I'm continuously adding to it new example quotes from the SoPs I review. hope it helps!
4
14
113
0
0
4
@zhengyiluo
Zhengyi “Zen” Luo
10 months
Code and trained models coming soon! Work done at @RealityLabs Pittsburgh. Much thanks to the team! @jinkuncao @awinkler_ @kkitani @xuweipeng000
0
0
4
@zhengyiluo
Zhengyi “Zen” Luo
3 years
Is there a transformer paper named “Optimus Prime” or “Prime” yet? Asking for a friend.
0
0
3
@zhengyiluo
Zhengyi “Zen” Luo
1 year
Come on!!! #WWDC23
0
0
3
@zhengyiluo
Zhengyi “Zen” Luo
3 months
🎉🎉🎉
@DrJimFan
Jim Fan
3 months
Career update: I am co-founding a new research group called "GEAR" at NVIDIA, with my long-time friend and collaborator Prof. @yukez . GEAR stands for Generalist Embodied Agent Research. We believe in a future where every machine that moves will be autonomous, and robots and
241
471
4K
0
0
3
@zhengyiluo
Zhengyi “Zen” Luo
2 months
@zhaomingxie These can’t haha. The ones in H2O are more plausible. We use an imitator similar to this one to filter out the implausible ones.
0
0
2