vannguyen (@vannguyen_ng)
PhD student at @ImagineENPC | Previously intern at @RealityLabs and @SiemensMobility, visiting PhD student at @EPFL
Massachusetts, USA · Joined June 2021
Followers: 73 · Following: 145 · Media: 4 · Statuses: 34
📢 Benchmark for 6D Object Pose Estimation 📢 BOP challenge 24 has been opened! https://t.co/Od5U0LGzS9 Results to be presented at the R6D workshop at #ECCV2024 Details in comments below👇
10th (!!) R6D workshop @ ICCV 2025: https://t.co/bFx9WxizlG
🤖 Object pose estimation for industrial robotics
📢 Strong speakers: @svlevine, @haoshu_fang, @MaxlDur, @GusKalra, @vaheta
📊 BOP Challenge 2025 with new BOP-Industrial datasets
With @ma_sundermeyer
Universal Beta Splatting
Contributions:
• Universal Beta Splatting: A unified N-dimensional representation with per-dimension shape control, enabling simultaneous modeling of spatial, angular, and temporal properties through anisotropic Beta kernels with spatial-orthogonal
Universal Beta Splatting Rong Liu, @ZhongpaiGao, Benjamin Planche, @MeidaC, @vannguyen_ng, Meng Zheng, @AnwesaChoudhuri, Terrence Chen, @yuewang314, Andrew Feng, Ziyan Wu tl;dr: radiance field rendering->N-dimensional anisotropic Beta kernels https://t.co/HsBAkhjf7v
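The two posts above name the core ingredient (anisotropic Beta kernels with per-dimension shape control) but not the exact formulation. Purely as an illustrative sketch, and not the paper's definition, the snippet below evaluates a generic separable kernel with compact support and one shape exponent per dimension; the function name, parameterization, and formula are assumptions made here for illustration.

```python
import numpy as np

def beta_kernel(x, mu, scale, shape):
    """Illustrative anisotropic Beta-style kernel (assumed form, not the paper's formula).

    x:     (..., N) query points (e.g. 3 spatial + 1 temporal dimension)
    mu:    (N,)     kernel center
    scale: (N,)     per-dimension extent (anisotropy)
    shape: (N,)     per-dimension shape exponent controlling the falloff
    """
    u = (x - mu) / scale                       # normalized offset per dimension
    inside = np.clip(1.0 - u ** 2, 0.0, None)  # compact support: zero where |u_d| > 1
    return np.prod(inside ** shape, axis=-1)   # separable product over the N dimensions

# Toy usage: a 4D (space + time) kernel that falls off faster along z and time.
pts = np.random.rand(5, 4)
weights = beta_kernel(pts,
                      mu=np.array([0.5, 0.5, 0.5, 0.5]),
                      scale=np.array([0.3, 0.3, 0.1, 0.05]),
                      shape=np.array([1.0, 1.0, 4.0, 8.0]))
print(weights)
```

Varying the exponent changes how sharply each dimension falls off, which is one way to read the "per-dimension shape control" mentioned above.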
⚠️Reconstructing sharp 3D meshes from a few unposed images is a hard and ambiguous problem. ☑️With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views including both foreground and background, within mins!🧵 🌐Webpage: https://t.co/di9e52XqFb
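MAtCha's actual pipeline is described on the webpage above; the snippet below is only a generic illustration of the first ingredient it mentions, turning a pretrained monocular depth prediction into 3D geometry by back-projecting a depth map with pinhole intrinsics. The function name and intrinsics are hypothetical, and this is not the authors' method.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift an (H, W) depth map to an (H*W, 3) point cloud using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                           # Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical usage with a depth map predicted by any monocular depth network.
depth = np.full((480, 640), 2.0, dtype=np.float32)  # stand-in for a predicted depth map
cloud = backproject_depth(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```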
Not long until the 9th(!) Workshop on Recovering 6D Object Pose (R6D) at #ECCV2024, Sunday AM. Great speakers, and @vannguyen_ng, @tomhodan and @ma_sundermeyer will tell us about the #BOP Challenge 24 - the challenge is still running, but you get to see early bird results!
📈 BOP update: Deadline for the 2024 challenge is extended to November 29! BOP'24 focuses on model-free and model-based 2D/6D object detection, and introduces three new datasets from Meta and NVIDIA – HOT3D, HOPEv2, and HANDAL. https://t.co/5nWv125raN
#CVPR2024 poster presentation starting now at poster 358 🌟
📅 Friday, June 21st 🕔 17:15 - 18:45 📍 Poster Session 6 & Exhibit Hall (Arch 4A-E) - Poster ID 358
Presenting "Learnable Earth Parser: Discovering 3D Prototypes in Aerial Scans"
Authors: Romain Loiseau, Elliot Vincent, Mathieu Aubry, Loic Landrieu (4/4)
Introducing HOT3D. HOT3D is a new dataset from our team at Meta Reality Labs Research to explore vision-based methods for hand-object interaction. https://t.co/aDg0rNSDSq 1/
We are releasing 🔥HOT3D🔥, a new egocentric dataset for 3D hand and object tracking. 833 minutes (3.7M images) of multi-view image streams showing 19 subjects interacting with 33 objects, annotated with high-quality 3D poses of hands and objects. Paper:
#CVPR2024 Thu 20 (AM) GigaPose: Fast and Robust Novel Object Pose Estimation via One Correspondence @vannguyen_ng @thibaultgroueix Mathieu Salzmann @VincentLepetit2 pdf: https://t.co/lP22K1uUkj webpage: https://t.co/FoutSRgjE1 code: https://t.co/iK19JiIOYc
#CVPR2024 Thu 20 (PM) NOPE: Novel Object Pose Estimation from a Single Image @vannguyen_ng @thibaultgroueix Georgy Ponimatkin, Yinlin Hu, Renaud Marlet, Mathieu Salzmann @VincentLepetit2 pdf: https://t.co/rMCU4zAccg web: https://t.co/RElHEaXXDq code: https://t.co/FQDemkDoM9
#CVPR Fri 21 (AM) OpenStreetView-5M: The Many Roads to Global Visual Geolocation @g_astruc @nico_dufour @YSiglidis @Elt_Vincent @RomainLoiseau15 @xkungfu @vannguyen_ng @captnloic + others pdf: https://t.co/0gwVnlbKSQ web: https://t.co/47K5S6M7xV
The new BOP challenge 2024 just opened! 🔊 This year we are also competing on end-to-end *model-free* 6D object pose estimation! 🌟 After 5min/1GPU with an onboarding video of the target object, estimate the pose of the object in cluttered scenes.
BOP is back with #ECCV2024. The presentation of the #BOP2023 results is just around the corner (at the CV4MR workshop, #CVPR2024), but we have no time to lose. Time for #BOP2024. Thought pose estimation of unseen objects w/ a 3D model was hard? Try it based on a dynamic video!
Dynamic onboarding: The object is manipulated by hands and the camera is either static (on a tripod) or dynamic (on a head-mounted device). Object masks for all video frames and the 6D object pose for the first frame are available.
We define two types of reference videos: Static onboarding: The object is static and the camera is moving around capturing all possible object views. Two videos are available (one upright and one upside-down). Object masks and 6D object poses are available for all video frames.
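To make the two onboarding modes in this thread concrete, here is a small, hypothetical layout of the data they describe: object masks in every frame, 6D poses in every frame for static onboarding, and a pose only in the first frame for dynamic onboarding. The class and field names are assumptions made here; the official BOP toolkit defines its own formats.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Frame:
    image_path: str
    mask_path: str                               # object mask is available in every frame
    pose_world_from_obj: Optional[list] = None   # 4x4 pose, or None when not annotated

@dataclass
class OnboardingSequence:
    mode: str                                    # "static" or "dynamic"
    frames: List[Frame] = field(default_factory=list)

    def check(self) -> None:
        if self.mode == "static":
            # Static onboarding: object masks and 6D poses for all video frames.
            assert all(f.pose_world_from_obj is not None for f in self.frames)
        else:
            # Dynamic onboarding: masks for all frames, 6D pose for the first frame only.
            assert self.frames and self.frames[0].pose_world_from_obj is not None

# Hypothetical usage for a dynamic onboarding video.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
seq = OnboardingSequence(mode="dynamic", frames=[
    Frame("frame_000000.jpg", "mask_000000.png", pose_world_from_obj=identity),
    Frame("frame_000001.jpg", "mask_000001.png"),  # later frames: mask only
])
seq.check()
```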
This year, we aim to bridge this gap by introducing new model-free tasks where CAD models of test objects are not available and methods need to rapidly learn new objects just from reference videos in max 5 min on 1 GPU.
While the model-based tasks are relevant for warehouse or factory settings, where CAD models of the target objects are often available, their applicability is limited in open-world scenarios.