
erikwijmans (@erikwijmans)
518 Followers · 201 Following · 14 Media · 114 Statuses
PhD from @GeorgiaTech | Former Intern at Intel-ISL, @MetaAI
Joined June 2019
How do 'map-less' agents navigate? They learn to build implicit maps of their environment in their hidden state! We study 'blind' AI navigation agents and find the following 🧵
1 reply · 5 retweets · 46 likes
.@RealAAAI just announced AAAI Doctoral Dissertation Awards for 2022-24 (yes, 3 years batched). Recognizes the most impactful PhD theses in AI.
2022: Alane Suhr: Reasoning in NLP; Erik Wijmans (@erikwijmans): Emergence of intelligent navigation with RL
2023: Gabriele
0 replies · 5 retweets · 53 likes
#ICLR2023 starts today and we're putting the spotlight on @erikwijmans @irrfaan and @DhruvBatraDB and their award-winning paper on how blind AI agents use memory to navigate. Congrats to Erik on this breakthrough work! @gtcomputing @iclr_conf
https://t.co/Oie3W5IuEU
cc.gatech.edu
0 replies · 4 retweets · 12 likes
Want to learn more about the paper behind this tweet? I’ll be at #ICLR2023 tomorrow (5/1) to present our Outstanding Paper Award winning work in Oral 1 Track 5 (in AD12 at 10am) and at Poster Session 1 (poster # 106 at 11:30am).
A thought-experiment to inspire scientists is to ask: If you could write only 20 papers in your lifetime, would your current work be one of them? This is one of my 20. https://t.co/qiHsZezm3P
https://t.co/JjdFfljjrS 🧵👇
0 replies · 3 retweets · 16 likes
@erikwijmans @ManolisSavva @stefmlee @irrfaan @DhruvBatraDB I'm especially pleased by this quote from the award committee regarding our paper: “I hope that the demonstrated rigor in building up an argument towards answering questions about learned representations will inform future studies across the ICLR community.”
1 reply · 4 retweets · 20 likes
Very excited that this work earned an Outstanding Paper Award at ICLR! Congratulations to @erikwijmans and my other incredible co-authors, @ManolisSavva, @stefmlee, @irrfaan, and @DhruvBatraDB! https://t.co/DfGyvPQGrU
1 reply · 5 retweets · 61 likes
A thought-experiment to inspire scientists is to ask: If you could write only 20 papers in your lifetime, would your current work be one of them? This is one of my 20. https://t.co/qiHsZezm3P
https://t.co/JjdFfljjrS 🧵👇
wijmans.xyz
Introduction Decades of research into intelligent animal navigation posits that organisms build and maintain internal spatial representations (or maps) of their environment, that enables the...
14 replies · 122 retweets · 830 likes
Paper: https://t.co/RiXu0rzCNu Website: https://t.co/WO1e6jN20S To appear at ICLR 2023 as a spotlight. With @DhruvBatraDB @irrfaan @arimorcos @stefmlee @ManolisSavva
arxiv.org
Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment. We ask if machines -- specifically, artificial intelligence...
0 replies · 5 retweets · 12 likes
4. Finally, the emergent maps are a function of the navigation goal: agents 'forget' excursions and detours.
1 reply · 1 retweet · 4 likes
and we can decode highly accurate occupancy grids of the environment from their memory.
1 reply · 1 retweet · 5 likes
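The occupancy-grid decoding above can be pictured as a per-cell readout from the agent's memory. The sketch below is purely illustrative (the paper trains a decoder network; the weights, grid size, and 2-dim "memory" here are invented): one logit per grid cell, computed against the hidden-state vector, then sigmoid and threshold.

```python
import math

def decode_occupancy(hidden, weights, biases, grid_h, grid_w, thresh=0.5):
    """Per-cell readout: one logit per occupancy-grid cell, computed as a
    dot product with the hidden state, then sigmoid + threshold.
    (Illustrative stand-in for the paper's trained decoder.)"""
    grid = []
    for r in range(grid_h):
        row = []
        for c in range(grid_w):
            cell = r * grid_w + c
            z = sum(w * h for w, h in zip(weights[cell], hidden)) + biases[cell]
            z = max(-30.0, min(30.0, z))  # clamp for numerical safety
            p = 1.0 / (1.0 + math.exp(-z))
            row.append(1 if p > thresh else 0)
        grid.append(row)
    return grid

# Toy 2-dim "memory" and a 2x2 grid whose cells read different dimensions.
hidden = [3.0, -3.0]
weights = [[1, 0], [0, 1], [-1, 0], [0, -1]]
biases = [0.0, 0.0, 0.0, 0.0]
print(decode_occupancy(hidden, weights, biases, 2, 2))  # → [[1, 0], [0, 1]]
```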
3. The emergence of maps in their memory. A probe initialized with the agent's final hidden state is able to navigate more efficiently
1 reply · 1 retweet · 4 likes
2. The emergence of collision detection neurons. We are able to decode collision vs. not with very high accuracy and there is structure in representation space. (The black dot shows the current hidden state's location.)
1 reply · 1 retweet · 4 likes
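Decoding collision vs. no-collision from hidden states amounts to fitting a classifier probe. Here is a minimal from-scratch logistic-regression probe on synthetic stand-in "hidden states" (the data distribution, dimensionality, and hyperparameters are all invented for illustration and are not the paper's setup):

```python
import math
import random

def train_linear_probe(states, labels, dim, epochs=50, lr=0.1):
    """Logistic-regression probe trained by plain gradient descent:
    decode a binary event (here, collision vs. no collision) from
    hidden-state vectors."""
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(states, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-30.0, min(30.0, z))  # clamp for numerical safety
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # d(log-loss)/dz
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic stand-in for agent hidden states: "collision" states are
# shifted along the first few coordinates (invented numbers), mimicking
# structure in representation space.
random.seed(0)
DIM = 8
def fake_state(collision):
    return [random.gauss(2.0 if collision and i < 3 else 0.0, 0.5)
            for i in range(DIM)]

labels = [i % 2 for i in range(400)]
states = [fake_state(y) for y in labels]
w, b = train_linear_probe(states, labels, DIM)
acc = sum(predict(w, b, x) == y for x, y in zip(states, labels)) / len(states)
print(f"collision-probe accuracy: {acc:.2f}")
```

On well-separated synthetic data like this, the probe reaches near-perfect accuracy; on real hidden states, high probe accuracy is the evidence that the information is linearly present.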
1. They are highly effective navigators, achieving 95% Success, although not very efficient ones (65 SPL).
1 reply · 1 retweet · 4 likes
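For reference, Success and SPL (Success weighted by Path Length, from Anderson et al.'s embodied-navigation evaluation proposal) are the standard point-goal navigation metrics; SPL discounts each success by how much longer the agent's path was than the shortest path. A minimal implementation:

```python
def spl(episodes):
    """Success weighted by Path Length:
    mean over episodes of success_i * shortest_i / max(taken_i, shortest_i).
    Each episode is (success in {0, 1}, shortest_path_len, path_len_taken)."""
    return sum(s * short / max(taken, short)
               for s, short, taken in episodes) / len(episodes)

# A perfect run scores 1.0; success via a 2x-longer path scores 0.5;
# failures score 0 regardless of path length.
print(spl([(1, 10.0, 10.0), (1, 10.0, 20.0), (0, 10.0, 15.0)]))  # → 0.5
```

So "95% Success, 65 SPL" means the agents almost always reach the goal, but their paths average noticeably longer than the shortest ones.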
📣 New paper: Emergence of Maps in the Memories of Blind Navigation Agents Humans have the ability to navigate poorly lit spaces by relying on touch and memory. Our research shows that blind AI agents can learn to do the same. Read the paper ➡️ https://t.co/XY5kNU5FwR
5 replies · 100 retweets · 500 likes
Want to learn how to combine the best of both sync and async on-policy RL? I'll be presenting our paper at #NeurIPS2022 tomorrow at 4pm (Hall J #917) describing how. Thread below if you just want the answer now :)
Learning intelligent behavior requires scale. Scaling on-policy RL today forces us to choose:
- sync RL: high sample-efficiency but low throughput
- async RL: high throughput but low sample-efficiency
Can we combine the benefits of both? Yes!
0 replies · 2 retweets · 13 likes
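The sync/async trade-off above can be simulated with a toy scheduler: sync RL waits for the slowest environment every rollout, while a variable-experience scheme keeps stepping whichever environment is ready next until a fixed experience budget is filled. The code below is a toy illustration of that scheduling idea with made-up step costs, not the paper's VER implementation:

```python
import heapq

def collect_variable_rollouts(env_speeds, total_steps):
    """Toy scheduler: repeatedly step whichever simulated environment is
    ready next until a fixed experience budget is filled. Fast
    environments contribute more steps; no worker sits idle waiting for
    stragglers. env_speeds[i] = time units per step for env i."""
    # Priority queue of (time this env is next ready, env id).
    ready = [(speed, i) for i, speed in enumerate(env_speeds)]
    heapq.heapify(ready)
    steps_per_env = [0] * len(env_speeds)
    for _ in range(total_steps):
        t, i = heapq.heappop(ready)
        steps_per_env[i] += 1
        heapq.heappush(ready, (t + env_speeds[i], i))
    return steps_per_env

# One env that steps 4x faster than the other fills ~4x the budget.
print(collect_variable_rollouts([1, 4], 100))  # → [80, 20]
```

The learner still sees a fixed, fresh batch each update (sync-like sample efficiency), while throughput is no longer bounded by the slowest environment (async-like speed).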
Pre-training robots in simulation (@ai_habitat) is a safe, scalable approach, but can require 1000s of GPU-hours. In their upcoming #NeurIPS2022 paper @DhruvBatraDB and team present a new system for distributed reinforcement learning.
4 replies · 24 retweets · 96 likes
(1/3) Today we’re releasing the Habitat-Matterport 3D Semantics dataset, the largest public dataset of real-world 3D spaces with dense semantic annotations. HM3D-Sem is free and available to use with FAIR's Habitat simulator: https://t.co/DbfnjI4X9U
4 replies · 121 retweets · 496 likes
VER: Scaling On-Policy RL Leads to the Emergence of Navigation in Embodied Rearrangement https://t.co/vLs6iAijBW by @erikwijmans et al. #ReinforcementLearning #ComputerScience
deepai.org
10/11/22 - We present Variable Experience Rollout (VER), a technique for efficiently scaling batched on-policy reinforcement learning in hete...
0 replies · 4 retweets · 6 likes
Paper: https://t.co/vjgwUZX9wN Code: https://t.co/aW55LclP8I Website: https://t.co/w8ldcqyCz8 With @DhruvBatraDB @irrfaan
1 reply · 2 retweets · 6 likes