Ruslan Partsey
@rpartsey
Followers: 24 · Following: 54 · Media: 11 · Statuses: 27
Joined December 2017
Presenting our EmbodiedAI Workshop paper in collaboration with @UCU_Faculty_of_APPS and @rpartsey "What Do We Learn from Using Text Captions as a Form of 3D Scene Representation?" poster 76 ARCH 4E at #CVPR2024 from 1pm to 2pm today.
Excited to be among core Habitat contributors for this release. Habitat 3.0 is out!
Today we’re announcing Habitat 3.0, Habitat Synthetic Scenes Dataset and HomeRobot — three major advancements in the development of social embodied AI agents that can cooperate with and assist humans in daily tasks. More details on these announcements ➡️ https://t.co/WGSjkkyQx3
The future of robot butlers starts with mobile manipulation. We’re announcing the NeurIPS 2023 Open-Vocabulary Mobile Manipulation Challenge! - Full robot stack ✅ - Parallel sim and real evaluation ✅ - No robot required ✅👀 https://t.co/mggAbRhrLP
I’m delighted to be listed among the current Habitat contributors on the AI Habitat website https://t.co/icD8vLSnBS.
aihabitat.org
A platform for embodied AI research.
This is not the first release I contributed to, but it is special for me: I co-migrated Habitat’s configuration system to Hydra (one of the biggest enhancements).
Habitat v0.2.3, packed with new features and enhancements, is released!
Habitat stable version v0.2.3 released! — New Algorithm: Variable Experience Rollout https://t.co/WAG6NEKda1 — New Task: Instance ImageGoal Navigation https://t.co/tQFQVBVtJ0 — New config system: Hydra — New robots: @hellorobotinc Stretch, @BostonDynamics Spot and more ...
Paper Club with the author of the paper, what could be better? Let's meet to discuss the new paper and answer the question: Is Mapping Necessary for Realistic PointGoal Navigation? Lecturer and research author — @rpartsey. ⚡️Register at https://t.co/OHGpdZmWRz
Featuring #CVPR2022 paper by @rpartsey @erikwijmans Naoki Yokoyama @dobosevych @DhruvBatraDB @o_maksymets
https://t.co/liV5RdVbIP
CVPR Daily of today directly from New Orleans! https://t.co/3ys7VCDLb7 Let’s start one more exceptional day with the amazing CVPR 2022 program! Let's keep connecting again with colleagues and friends! Enjoy reading about upcoming presentations #CVPR @cvpr #CVPR2022 @RSIPvision
In zero-shot sim2real experiments, the agent does well at avoiding obstacles, but the most challenging part seems to be stopping within the success threshold. Across 9 episodes, it achieves 11% Success, 65% SoftSPL, and makes it 92% of the way to the goal (SoftSuccess).
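Success, SPL, and SoftSPL here are the standard PointNav metrics; a minimal sketch of how they are computed (SPL follows Anderson et al., 2018; SoftSPL follows the Habitat Challenge definition; function and variable names are illustrative, not the Habitat API):

```python
def spl(success: bool, shortest_dist: float, path_length: float) -> float:
    """Success weighted by Path Length: 1.0 only for a successful episode
    along the exact geodesic shortest path from start to goal."""
    return float(success) * shortest_dist / max(path_length, shortest_dist)

def soft_spl(start_dist: float, final_dist: float, path_length: float) -> float:
    """SoftSPL: the binary success term is replaced by progress toward
    the goal, so near-misses still earn partial credit."""
    progress = max(0.0, 1.0 - final_dist / start_dist)  # the "SoftSuccess" term
    return progress * start_dist / max(path_length, start_dist)

# An agent starting 5 m (geodesic) from the goal, walking a 6 m path,
# and stopping 0.4 m short: progress = 1 - 0.4/5 = 0.92.
```

This is why SoftSPL and SoftSuccess can stay high even when the binary Success rate is low: the agent nearly reaches the goal but fails to stop inside the success radius.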
Trained on Gibson train split, our agent follows a near-perfect path on both Gibson and MP3D val scenes. It scores 96% Success, 77% SPL, 76% SoftSPL on Gibson and 79% Success, 60% SPL, 69% SoftSPL on Matterport3D.
Our visual odometry additions advance the state of the art on the Habitat Realistic Challenge PointNav Track from 71% to 94% Success (+32% relative) compared to the strongest published work and from 91% to 94% Success (+3% relative) compared to the strongest concurrent work.
Specifically, we apply Flip and Swap augmentations at navigation time to achieve an ensemble effect, and we incorporate information about the action taken as a fixed action-specific vector concatenated to the input of all fully connected layers.
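The Flip half of that test-time ensemble can be sketched as follows (a minimal illustration, assuming a visual-odometry model that maps an RGB-D frame pair to an egomotion estimate (dx, dy, dtheta); `vo_model` and the sign conventions are my stand-ins, not the paper's actual interface):

```python
import numpy as np

def flip_ensemble_egomotion(vo_model, rgbd_pair):
    """Average a VO prediction with the sign-corrected prediction on
    horizontally flipped input.

    Mirroring both frames mirrors the scene, so lateral translation (dy)
    and rotation (dtheta) change sign while forward motion (dx) does not.
    `rgbd_pair` is assumed to have shape (2, H, W, C).
    """
    dx, dy, dth = vo_model(rgbd_pair)
    flipped = rgbd_pair[..., ::-1, :]      # flip along the width axis
    fdx, fdy, fdth = vo_model(flipped)
    return (dx + fdx) / 2.0, (dy - fdy) / 2.0, (dth - fdth) / 2.0
```

The Swap augmentation (exchanging the two frames, which inverts the estimated transform) can be ensembled analogously.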
First, we identify that the lack of localization is the core bottleneck by demonstrating that an agent with perfect localization can overcome the above-mentioned challenges (row 2). Then, we focus on robust odometry estimation solely from perceptual input (row 3).
Our pipeline consists of two components: an RNN-based navigation policy that decides which action to take to reach the goal, and a CNN-based visual odometry module that estimates the relative pose change, which is used to re-express the goal coordinates with respect to the current pose.
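The goal-update step can be sketched in a few lines for a planar (SE(2)) pose (a minimal illustration under my own conventions, not the paper's code: x is forward, y is left, and the VO module supplies (dx, dy, dtheta)):

```python
import numpy as np

def update_goal(goal_xy, dx, dy, dtheta):
    """Re-express the goal in the agent's new frame after one step.

    (dx, dy, dtheta) is the estimated egomotion expressed in the
    previous frame: translate the goal into the new origin, then
    rotate it into the new heading.
    """
    shifted = np.asarray(goal_xy, dtype=float) - np.array([dx, dy])
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ shifted

# Goal 2 m straight ahead; agent steps 0.25 m forward without turning,
# so the goal is now 1.75 m ahead in the new frame.
new_goal = update_goal((2.0, 0.0), 0.25, 0.0, 0.0)
```

Chaining this update every step is what lets the policy navigate without any ground-truth localization: errors in the VO estimate accumulate directly in the goal coordinates.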
We study the problem of PointGoal navigation (“Go to (x, y)”) under realistic conditions: 1) imperfect sensors (noisy RGB-D), 2) inaccurate motors (noisy actuation), 3) no ground-truth localization (no GPS+Compass sensor).
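To make the task concrete: without a GPS+Compass sensor, the agent cannot simply read off the goal in its own frame; it must maintain that estimate itself. A minimal sketch of what the missing sensor would provide, i.e. the goal expressed as distance and bearing relative to the agent (function and variable names are illustrative, not from the Habitat API):

```python
import numpy as np

def goal_in_agent_frame(agent_xy, agent_heading, goal_xy):
    """Express a world-frame goal as (rho, phi) relative to the agent.

    rho - Euclidean distance to the goal
    phi - bearing to the goal relative to the agent's heading, in (-pi, pi]
    """
    delta = np.asarray(goal_xy, dtype=float) - np.asarray(agent_xy, dtype=float)
    rho = np.linalg.norm(delta)
    phi = np.arctan2(delta[1], delta[0]) - agent_heading
    phi = np.arctan2(np.sin(phi), np.cos(phi))  # wrap to (-pi, pi]
    return rho, phi

# Agent at the origin facing +x, goal 3 m ahead and 3 m to the left:
# the goal is sqrt(18) m away at a 45-degree bearing.
rho, phi = goal_in_agent_frame((0.0, 0.0), 0.0, (3.0, 3.0))
```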