Isabel Liu
@YijieIsabelLiu
Followers
61
Following
14
Media
4
Statuses
11
Undergrad @ Princeton CS working on robot planning and learning for manipulation 🤖
Joined May 2020
Happy to share some of the first work from my new lab! This project has shaped my thinking about how we can effectively combine planning and RL. Key idea: start with a planner that is slow and "robotic", then use RL to discover shortcuts that are fast and dynamic. (1/2)
Robots can plan, but rarely improvise. How do we move beyond pick-and-place to multi-object, improvisational manipulation without giving up completeness guarantees? We introduce Shortcut Learning for Abstract Planning (SLAP), a new method that uses reinforcement learning (RL) to
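The thread describes the recipe only at a high level, so here is a small, self-contained Python sketch of how I read it: plan with the abstract planner first, then try to learn RL "shortcut" skills that jump over several abstract steps at once, and fall back to the original skills wherever no shortcut was learned. The state names and the stand-in training routine below are hypothetical illustrations, not the actual SLAP code or API.

import random

# Toy abstract plan from a task-level planner: abstract state i is the state
# reached after executing the first i skills.
abstract_plan = ["pick(A)", "place(A)", "pick(B)", "place(B)", "pick(C)", "place(C)"]

def train_shortcut(i, j):
    """Stand-in for an RL run that tries to learn one dynamic skill jumping
    from abstract state i directly to abstract state j (skipping the skills
    in between). Returns a policy label on success, None on failure."""
    # Placeholder success test; a real run would train and evaluate a policy.
    return f"shortcut[{i}->{j}]" if random.random() < 0.4 else None

# 1. Propose and train shortcuts between non-adjacent abstract states.
shortcuts = {}
for i in range(len(abstract_plan)):
    for j in range(i + 2, len(abstract_plan) + 1):
        policy = train_shortcut(i, j)
        if policy is not None:
            shortcuts[(i, j)] = policy

# 2. Execute: follow the planner's skill sequence, but take the longest
#    learned shortcut available from the current abstract state.
i, executed = 0, []
while i < len(abstract_plan):
    jumps = [j for (s, j) in shortcuts if s == i]
    if jumps:
        executed.append(shortcuts[(i, max(jumps))])
        i = max(jumps)
    else:
        executed.append(abstract_plan[i])  # original "robotic" skill
        i += 1

print(executed)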
Excited to share this collaborative work led by the exceptional @YijieIsabelLiu. This work has strongly reinforced my recent belief in RL+Planning for intelligent decision making!! Key idea: Use RL to learn improvisational low-level “shortcuts” in planning.
🤖Excited to share SLAP, @YijieIsabelLiu's new algorithm using RL to provide better skills for planning! Check out the website for code, videos, and pre-trained models:
github.com/isabelliu0/SLAP
For more details on our work, please see
Paper: https://t.co/rqlWfQCN7l
Open-source code: github.com/isabelliu0/SLAP
🙌 This work was done with my incredible advisors @tomssilver and @ben_eysenbach and amazing collaborator @Bw_Li1024 !
We demonstrate our results in PyBullet simulated environments. In the Obstacle Tower environment: (3/5)
Our approach is: 1️⃣ Automatic. SLAP does not require any intermediate inputs or additional assumptions. The way we find, learn, and deploy shortcuts is completely automatic. 2️⃣ Hierarchical. SLAP leverages the hierarchical structure of planners to decompose the unstructured,
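Reading the "hierarchical" point together with the "without giving up completeness guarantees" claim in the quoted tweet, one way to picture it is that learned shortcuts become extra, cheaper edges in the abstract planning graph while every original operator edge is kept, so anything reachable before is still reachable. The uniform-cost search and all state/operator names below are a hypothetical illustration, not SLAP's actual planner.

import heapq

def shortest_plan(edges, start, goal):
    """Uniform-cost search over abstract states; edges: {state: [(next, cost, label)]}."""
    frontier = [(0.0, start, [])]
    seen = set()
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return cost, plan
        if state in seen:
            continue
        seen.add(state)
        for nxt, c, label in edges.get(state, []):
            heapq.heappush(frontier, (cost + c, nxt, plan + [label]))
    return None

# Original abstract operators: a slow but complete chain s0 -> s1 -> ... -> s4.
edges = {f"s{i}": [(f"s{i+1}", 1.0, f"op{i}")] for i in range(4)}

# Learned shortcut: a fast dynamic skill jumping s0 -> s3 directly.
edges["s0"].append(("s3", 0.5, "shortcut[s0->s3]"))

print(shortest_plan(edges, "s0", "s4"))

Because the shortcut is only ever an additional option, failing to learn one (or removing it) simply recovers the original slow-but-complete plan.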
Robots can plan, but rarely improvise. How do we move beyond pick-and-place to multi-object, improvisational manipulation without giving up completeness guarantees? We introduce Shortcut Learning for Abstract Planning (SLAP), a new method that uses reinforcement learning (RL) to
I'm really excited about this! FlashSTU has great potential imo
Which architecture to use for your sequence prediction? Try out FlashSTU based on spectral transformers!! w. my fantastic *undergrad* advisees Isabel @YijieIsabelLiu, Windsor @WindsorNguyen, Yagiz Devre, Evan Dogariu, and colleague @Majumdar_Ani: https://t.co/AdI5Cf4xic
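For readers who have not seen the spectral transformer line of work: as I understand it, the core of an STU-style layer is a causal convolution of the input with a few fixed filters taken from the top eigenvectors of a particular Hankel matrix, followed by learned projections. The numpy sketch below uses the Hankel matrix from the spectral filtering literature and is only an assumption-laden illustration; FlashSTU itself uses FFT-based convolutions, learned projections, and attention layers, and its exact filter construction may differ.

import numpy as np

def spectral_filters(L, K):
    # Hankel matrix used in spectral filtering (1-based indices i, j):
    # Z[i, j] = 2 / ((i + j)^3 - (i + j)).
    idx = np.arange(1, L + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)
    _, eigvecs = np.linalg.eigh(Z)  # eigenvalues in ascending order
    return eigvecs[:, -K:]          # top-K eigenvectors as fixed filters

def stu_features(u, filters):
    """Causally convolve each input channel with each fixed spectral filter.
    u: (L, d) input sequence, filters: (L, K); returns (L, K*d) features,
    which a learned linear layer would then map to the layer output."""
    L, d = u.shape
    K = filters.shape[1]
    feats = np.zeros((L, K, d))
    for t in range(L):
        # feats[t] = sum_{s <= t} filter[t - s] * u[s]  (naive causal convolution)
        feats[t] = filters[:t + 1][::-1].T @ u[:t + 1]
    return feats.reshape(L, K * d)

L, d, K = 64, 3, 8
u = np.random.randn(L, d)
print(stu_features(u, spectral_filters(L, K)).shape)  # (64, 24)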