PINE-Lab-NTU
@NtuLab38456
Followers: 21 · Following: 0 · Media: 7 · Statuses: 8
The first comprehensive survey of RL-VLA! A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation! https://t.co/OgoVXe2xaH Explains how RL enhances VLA systems by introducing reward-driven exploration and self-corrective behavior,
github.com: A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation - Denghaoyuan123/Awesome-RL-VLA
Join the RoCo Challenge @ AAAI 2026! The RoCo Challenge 2026 invites researchers and innovators worldwide to explore the future of Human-Robot Collaboration (HRC). Hosted during AAAI 2026, this competition, jointly organized by Nanyang Technological University (NTU),
@ICCV2025 paper! Meet AnyBimanual: a plug-and-play way to transfer any pretrained unimanual policy into a general bimanual policy with just a few demos. Skill Manager: dynamically schedules reusable unimanual skill primitives for each arm. Visual Aligner: soft-masking
@ICCV2025 paper! Meet GWM: Towards Scalable Gaussian World Models for Robotic Manipulation. 3D Gaussian splats → scalable, geometry-aware world modeling. Action-conditioned future imagination with diffusion transformers. Serves as both a strong encoder for imitation
New @IROS2025 paper! "Embodied Instruction Following in Unknown Environments" Tired of LLMs hallucinating actions in unseen spaces? This framework dynamically plans & executes tasks while exploring unknown scenes. Read more: https://t.co/uGHU2nIPcF
#EmbodiedAI
New @IROS2025 paper! Meet AnyView: the 3D object detector that adapts to your frames, 1 or 50, it just works. No more overfitting to fixed views. Built for real-world, real-time mobile robots. Efficient, scalable, deployable. Paper Link: https://t.co/11i4f8WiOL