Junfeng Long
@junfeng_long
Followers: 160 · Following: 39 · Media: 2 · Statuses: 35
Ph.D. student @UCBerkeley. Working on humanoid robots and reinforcement learning.
Berkeley, CA
Joined July 2019
AMP still has its place in the motion tracking era!
⚽️ We built a humanoid goalkeeper! 🥅
⏰ One-stage RL training, fully autonomous & real-time
📷 Alternative perception: MoCap ↔️ onboard camera
🔁 Generalizes to ball grabbing, squat & jump escapes
Website: https://t.co/yBFT5xmiMQ
Paper: https://t.co/prx9qaH3ej
0 · 2 · 33
💡 How can humanoids learn adaptable skills from a single human motion?
🤖 Introducing AdaMimic: Towards Adaptable Humanoid Control via Adaptive Motion Tracking
Paper: https://t.co/fJBxNPZKeF
Website: https://t.co/nVPh3yZKrf
7 · 30 · 152
Humanoid motion tracking performance is largely determined by retargeting quality! Introducing 𝗢𝗺𝗻𝗶𝗥𝗲𝘁𝗮𝗿𝗴𝗲𝘁 🎯, which generates high-quality, interaction-preserving data from human motions for learning complex humanoid skills with 𝗺𝗶𝗻𝗶𝗺𝗮𝗹 RL:
- 5 rewards,
- 4 DR
30 · 155 · 661
😍
0 · 0 · 0
How do we learn motor skills directly in the real world? Think about learning to ride a bike: parents might be there to give you hands-on guidance. 🚲 Can we apply this same idea to robots? Introducing Robot-Trains-Robot (RTR): a new framework for real-world humanoid learning.
16 · 36 · 186
Researchers from RAI Institute present Diffuse-CLoC, a new control policy that fuses kinematic motion diffusion models with physics-based control to produce motions that are both physically realistic and precisely controllable. This breakthrough moves us closer to developing
2 · 41 · 172
Excited to open-source GMR: General Motion Retargeting.
Real-time human-to-humanoid retargeting on your laptop.
Supports diverse motion formats & robots.
Unlock whole-body humanoid teleoperation (e.g., TWIST).
video with 🔊
22 · 114 · 699
Humanoid robots should not be black boxes 🔒 or budget-busters 💸! Meet Berkeley Humanoid Lite!
▹ 100% open source & under $5k
▹ Prints on entry-level 3D printers—break it? fix it!
▹ Modular cycloidal-gear actuators—hack & customize to your own needs
▹ Off-the-shelf
16 · 93 · 433
Atlas is demonstrating reinforcement learning policies developed using a motion capture suit. This demonstration was developed in partnership with Boston Dynamics and @rai_inst.
854 · 5K · 20K
Very nice approach for teleoperation! Looking forward to trying it!
🫰 Thrilled to introduce HOMIE: Humanoid Loco-Manipulation with Isomorphic Exoskeleton Cockpit.
Website: https://t.co/1LvQZmEVLf
Code: https://t.co/cdN65klW5k
YouTube: https://t.co/jhrdoE67pR
😀 HOMIE consists of a novel RL-based training framework and a self-designed hardware
0 · 0 · 3
Great work by Huayi! A new step in perceptive humanoid locomotion!
💡 Can a humanoid robot learn to traverse sparse footholds like stepping stones and balancing beams with agility?
🤖 Introducing BeamDojo: Learning Agile Humanoid Locomotion on Sparse Footholds
Paper: https://t.co/zUAloQEVzU
Website: https://t.co/MQAvnpOdCW
0 · 0 · 5
💡 Can a humanoid robot learn to stand up across diverse real-world scenarios from scratch?
🤖 Introducing HoST: Learning Humanoid Standing-up Control across Diverse Postures
Website: https://t.co/BExxVLpT5C
13 · 73 · 318
🚀 Can we make a humanoid move like Cristiano Ronaldo, LeBron James, and Kobe Bryant? YES!
🤖 Introducing ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills
Website: https://t.co/XQga7tIfdw
Code: https://t.co/NpEeJtVxpp
46 · 202 · 1K
Excited to introduce the Perceptive Internal Model (PIM) for Humanoid Robots! A perceptive follow-up to the HIMLoco work on humanoid robots. Big thanks to my coauthors: Junli Ren, Moji Shi, Zirui Wang, Tao Huang, Ping Luo, and @pangjiangmiao! The first policy simultaneously for:
1 · 5 · 24
Excited to introduce the Perceptive Internal Model (PIM) for Humanoid Robots! The first policy that simultaneously:
- Goes up and down stairs, jumps gaps, and climbs 50 cm platforms.
- Works in indoor and outdoor scenarios.
- Runs on Unitree H1 and Fourier GR-1 robots.
Paper: https://t.co/x1gq0XTBEc
3 · 21 · 87
Glad that Learning H-Infinity Locomotion Control received the best poster award at the #CoRL2024 workshop "LocoLearn: From Bioinspired Gait Generation to Active Perception"!
0 · 1 · 6