
Tran Nguyen Le (@trannguyenle95)
62 Followers · 400 Following · 1 Media · 121 Statuses
Tenure-track Assistant Prof. in Robotics & Mechatronics @DTUTweet | Postdoc & PhD @AaltoUniversity, MSc @TampereUni. Interested in ML/AI + Robotics.
Copenhagen, Denmark · Joined July 2011
I'll be in London to join the Workshop on Representing and Manipulating Deformable Objects at #ICRA2023 and present our work, SPONGE. If you are interested in seeing our attempt to have a robot clean dishes, let's have a chat :)! More info:
arxiv.org
Planning robotic manipulation tasks, especially those that involve interaction between deformable and rigid objects, is challenging due to the complexity in predicting such interactions. We...
RT @normandipalo: Excited to share Gemini Robotics On-Device, a VLA that runs locally on a single GPU ⚡️ It combines dexterity, robustnes…
RT @shreyasgite: Learning from Human Demonstrations: Show the Robot How to Act! The pipeline is very similar to older experiments using Gem…
RT @danaaubakir: Today, we are introducing SmolVLA: a 450M open-source vision-language action model. Best-in-class performance and inferenc…
RT @svlevine: Fun project at PI: knowledge insulation for VLAs. We figured out how to train VLAs with cont. actions much more effectively b…
RT @RemiCadene: Meet the game-changer: LeKiwi 🥝 Crafted by @sigrobotics and @huggingface. At 1/10 the cost of the best alternative out the…
RT @junjungoal: How can a bimanual robot grasp objects that are initially ungraspable due to environmental constraints? COMBO-Grasp tackle…
RT @YiruHelenWang: 🤖 SAM2Act is a multi-task robotics transformer which brings multi-resolution and memory-based modules from visual founda…
RT @leto__jean: Contact-rich, dexterous manipulation benefits from tactile sensing in two ways: 🥇 By providing force-informed actions in h…
RT @XDWang101: SAM & SAM-2 are great but depend on costly annotations. Can we 'segment anything' without supervision? 🤔 Yes! Check out UnSA…
RT @mihdalal: Can a single neural network policy generalize over poses, objects, obstacles, backgrounds, scene arrangements, in-hand object…
RT @moo_jin_kim: OpenVLA update: We have integrated OpenVLA into the LIBERO simulation benchmark! TLDR: Fine-tuned OpenVLA (76.5%) outperf…
RT @BDuisterhof: Dense tracking of deformable objects can unlock applications in robotics, gen-AI and AR. We present DeformGS (previously M…
RT @wenlong_huang: What structural task representation enables multi-stage, in-the-wild, bimanual, reactive manipulation? Introducing ReKe…
RT @MartinRiedmill1: RL promises to break through the 'glass ceiling' of imitation learning performance, but it's tricky to scale up. In ou…
RT @bwww08: Introducing 𝐇𝐀𝐂𝐌𝐚𝐧++, accepted to RSS 2024! We've developed a 𝐜𝐨𝐧𝐭𝐚𝐜𝐭-𝐜𝐞𝐧𝐭𝐫𝐢𝐜 action space for RL for manipulation. Our appro…
RT @YunzhuLiYZ: Check out our #RSS2024 paper (also the Best Paper Award at the #ICRA2024 deformable object manipulation workshop) on dynami…
RT @letian_fu: Can vision and language models be extended to include touch? Yes! We will present a new touch-vision-language dataset collec…