Junwei Liang 梁俊卫
@JunweiLiangCMU
Followers: 267 · Following: 151 · Media: 27 · Statuses: 63
Assistant Professor @HKUST (GZ) // Ph.D. @CarnegieMellon // NeurIPS Area Chair
Guangzhou, China
Joined September 2015
🚁 3EED: Ground Everything Everywhere in 3D. Excited to share that our new dataset has been accepted to the #NeurIPS2025 DB Track! 3EED establishes the first multi-platform, multi-modal 3D grounding benchmark for …
@tydsh Thanks for sharing! 😂 Now I know that 20k+ citations do not mean job security...
🏆 RoboSense 2025 Workshop & Award Ceremony. We are excited to invite you to the RoboSense Challenge 2025 Workshop & Award Ceremony, held in conjunction with #IROS2025 in Hangzhou 🇨🇳. ※ Date & Time: October 21, 2025, 2:00 - …
✨ Happy to share some exciting news from CoRL 2025! Our work, GLOVER, has been awarded Best Paper at the Workshop on Generalizable Priors for Robot Manipulation. 🥇 We are also looking forward to presenting our follow-up, GLOVER++, at the main conference. Huge …
Our Humanoid SpeechVLA has much lower latency than Tesla's recent demo. See the comparison. Our system runs on the cheapest hardware and compute. #optimus #HumanoidRobots #EmbodiedAI @elonmusk @Benioff
Our CoRL 2025 paper, GLOVER++, tackles a key question: How can we leverage human videos to boost robot manipulation performance and generalization? 🤖 We're excited to release HOVA-500K, a dataset of 500K annotated human-object interaction images! It's perfect for training …
Excited to share one of our embodied AI papers accepted at #CoRL2025! 🎉 OmniPerception introduces a high-fidelity LiDAR simulation plugin and a LiDAR-based agile obstacle avoidance locomotion RL model. It's sim-to-real ready, with impressive results on quadruped & humanoid …
🎉 Excited to announce that The RoboSense Challenge is officially launching this June! 🌐 Website: https://t.co/9gQurA0U3Z 💰 Total Prize Pool: 10,000 USD 🧠 The challenge features five exciting tracks: 1️⃣ Driving with …
Introducing agile collision avoidance using reinforcement learning directly on LiDAR input: we have developed a tool that enables efficient and realistic LiDAR simulation in Isaac Sim, allowing us to train a locomotion policy with omnidirectional collision avoidance …
🔥 Thrilled to share that two of our submissions to #CVPR2025 were accepted. Our work covers embodied AI, including robotic manipulation and 3D visual grounding. 1. HR-Align (robotic manipulation) 🤖 A method that improves robot manipulation by utilizing human data! To scale up …
Excited to share our recent work at ICRA 2025 on social navigation! We've developed a dataset to evaluate how well robots anticipate human movement, helping them avoid stepping into someone's future path or intruding on their personal space. https://t.co/Y4k8Xk0zfl
Glad to share our recent work at #CoRL2024: Contrastive Imitation Learning for Language-guided Multi-Task Robotic Manipulation. 🔔 Code, demo videos, and models: https://t.co/2WAHRx1YbR We propose an effective contrastive imitation learning framework! #Robotics #manipulation
Glad to share our recent work at #ECCV2024 on zero-shot visual navigation: Prioritized Semantic Learning for Zero-shot Instance Navigation. 🔔 Code, the new InstanceNav dataset, and models are out: https://t.co/3npixy3c4x #embodiedAI #navigation
Glad to share our recent work at #ECCV2024: Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models. Project page: https://t.co/XOqwQtjJTr Our method requires no labeled 3D data, yet it can perform visual grounding from arbitrary text prompts.
Survey: Would you pay $50k for a robot that can play tennis with you? Let's sweeten the deal: What about a robot that can also do laundry? :D This is an ongoing whole-body control summer project w/ Zifan, Teli & Jinhui. Stay tuned! #robotics #tennis #hkust #embodiedai
Call for Papers: The 5th Precognition Workshop @CVPR2023. If you are interested in vision-based forecasting research, consider submitting to or participating in our workshop! Website: https://t.co/AOlepOyHCJ CFP in Chinese: https://t.co/a5ZxKzlxYo
#CVPR2023 #computervision #forecasting
In comparison, I, as a newly appointed assistant professor at a university, can only get a cluster of about 40 GPUs for my group...
@daveg FAIR alone (which constitutes a good chunk of the 'R' part) has about 600 scientists and engineers. The computing facilities include two GPU clusters, the larger of which has 16,000 GPUs. Do the math. There are also huge investments in AI development, far more than in research.