Tom Silver Profile
Tom Silver

@tomssilver

Followers
1,405
Following
242
Media
25
Statuses
211
Pinned Tweet
@tomssilver
Tom Silver
2 months
I'm thrilled to join Princeton's faculty as an assistant professor in the ECE department starting Fall 2025 🐯 Stay tuned for the launch of my lab. We will develop generally helpful robots that learn and plan 🤖
Tweet media one
48
9
468
@tomssilver
Tom Silver
6 years
Lessons from My First Two Years of AI Research
8
75
211
@tomssilver
Tom Silver
2 months
I defended my PhD @MITEECS this week! Thanks to everyone who came out. And thanks especially to @nishanthkumar23 who not only managed the Zoom, but also got me this amazing gift…
Tweet media one
21
3
143
@tomssilver
Tom Silver
7 years
New Science paper from @vicariousai () and blog post
0
41
124
@tomssilver
Tom Silver
2 years
New preprint: "Learning Neuro-Symbolic Skills for Bilevel Planning" w/ Ashay Athalye, Josh Tenenbaum, Tomas Lozano-Perez, and Leslie Kaelbling. Check it out if you're interested in hierarchical RL, TAMP, and LfD! Paper: Video:
1
4
47
@tomssilver
Tom Silver
4 years
Our paper got an oral at CoRL (rhyme!). We proposed CAMPs, a method for learning to generate abstractions of large planning problems. Stop by at @corl_conf this week! Paper: Video: Code:
1
5
43
@tomssilver
Tom Silver
10 months
Very excited to be organizing the Workshop on Learning Effective Abstractions for Planning (LEAP) at #CoRL2023 with @shah__naman @georgiachal @_ericrosen @davidpaulius and Beomjoon Kim! We have fantastic speakers lined up. Website: Deadline: September 18
Tweet media one
0
13
37
@tomssilver
Tom Silver
2 years
Looking forward to the Foundation Models for Decision Making Workshop at #NeurIPS2022! We'll be presenting preliminary work on using large language models for PDDL planning: These papers also look very interesting & related:
1
0
30
@tomssilver
Tom Silver
2 years
Congratulations to the twitter-less Rohan Chitnis for defending his thesis today!
Tweet media one
0
3
30
@tomssilver
Tom Silver
2 years
Excited to co-organize this CoRL 2022 workshop with Rohan Chitnis, @GregoryJStein, @Yezhou_Yang, and @jana_kosecka! Submissions are now open.
Tweet media one
0
7
27
@tomssilver
Tom Silver
1 year
Looking forward to #AAAI23! On Tuesday, I'll present work on neuro-symbolic learning for robotic planning at the Bridge Session on AI & Robotics. I'll show some clips from this 1972 video of Shakey the robot and ask: how much progress have we really made?
2
0
26
@tomssilver
Tom Silver
5 years
New work: "Few-Shot Bayesian Imitation Learning with Logic over Programs" w/ Kelsey Allen, Alex Lew, Leslie Kaelbling, Josh Tenenbaum Website: Short versions to appear at ICLR SPiRL workshop and RLDM (say hello!)
1
3
25
@tomssilver
Tom Silver
1 year
If you're interested in LLMs for planning, check out our new preprint: "Generalized Planning in PDDL Domains with Pretrained Large Language Models" w/ Soham Dan, Kavitha Srinivas, Josh Tenenbaum, Leslie Kaelbling, Michael Katz (@MITIBMLab)
0
5
25
@tomssilver
Tom Silver
3 months
I'm really excited about this "planning to practice" direction. We're able to leave Spot alone for hours and it's much improved when we get back. This is a step towards generalist robots that learn to specialize *during deployment*. Check out the paper!
@nishanthkumar23
Nishanth Kumar
3 months
Can we get robots to improve at long-horizon tasks without supervision? Our latest work tackles this problem by planning to practice! Here's a teaser showing initial task -> autonomous practice -> eval (+ interference by a gremlin👿)
1
11
92
0
0
22
@tomssilver
Tom Silver
2 years
Excited to learn that this work will be a spotlight talk at #RLDM2022!
@tomssilver
Tom Silver
2 years
This new preprint () is the culmination of a lot of my work with Rohan Chitnis over the last two years.
4
11
67
0
2
21
@tomssilver
Tom Silver
5 years
I'm trying to collect recent benchmarks for generalization in RL. Below are the ones that I've found so far. Please reply if you know of others, or if I'm missing a state of the art.
2
4
21
@tomssilver
Tom Silver
3 years
Hmmmm
Tweet media one
1
0
20
@tomssilver
Tom Silver
3 years
Looking forward to AAAI this week! We’ll be presenting two papers on learning for planning in relational domains...
1
4
19
@tomssilver
Tom Silver
3 years
This year, Rohan Chitnis and I have been trying to figure out how we can learn models through environment interaction, like in model-based RL, and then use the models to plan w/ powerful robotic planners, like those found in task and motion planning (TAMP).
1
1
19
@tomssilver
Tom Silver
2 months
In the meantime, I'm also very excited to do a postdoc at Cornell in the @EmpriseLab with @TapoBhat starting this summer!
0
1
18
@tomssilver
Tom Silver
1 year
I'm very excited to co-organize this workshop! The submission deadline is May 19. Consider submitting your work on learning for TAMP!
@leto__jean
Jeannette Bohg
1 year
We are organizing the RSS’23 Workshop on Learning for Task and Motion Planning. Contributions of short papers or Blue Sky papers are due May 19th, 2023.
2
8
41
0
1
16
@tomssilver
Tom Silver
5 years
cognitive psychology: is X built in? AI in theory: should X be built in? AI in practice: what happens if we build in X?
0
1
15
@tomssilver
Tom Silver
5 years
When is it okay to use a 'max' in reporting results for RL? Two examples: 1. Train on multiple random seeds and report only the best seed 2. At training iteration t, report the max over training iterations 1, 2, ..., t. (1/10)
2
2
14
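The two 'max' patterns described in the thread above can be sketched in a few lines. This is a purely illustrative toy with hypothetical logged returns, showing why both patterns are optimistically biased:

```python
import random

random.seed(0)

# Simulate 5 training runs (seeds), each logging a noisy return at 100 iterations.
runs = [[random.gauss(i / 10, 1.0) for i in range(100)] for _ in range(5)]

# Pattern 1: report only the best seed's final return (optimistic across seeds).
best_seed_final = max(run[-1] for run in runs)

# An unbiased alternative: average the final return across seeds.
mean_final = sum(run[-1] for run in runs) / len(runs)

# Pattern 2: at iteration t, report the max over iterations 1..t
# (optimistic across time: the reported curve can never go down).
def running_max(curve):
    out, best = [], float("-inf")
    for value in curve:
        best = max(best, value)
        out.append(best)
    return out

monotone_curve = running_max(runs[0])
```

By construction `best_seed_final` is at least `mean_final`, and the running-max curve is monotone even when the underlying training curve is noisy, which is the source of the reporting concern.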
@tomssilver
Tom Silver
6 years
It would be cool if AI/ML conferences solicited some survey/review papers. I wonder if that's ever been discussed.
0
0
13
@tomssilver
Tom Silver
5 years
(1/2) New work on "Residual Policy Learning" with Kelsey Allen, Josh Tenenbaum & Leslie Kaelbling: Simple idea: start with a meh policy, learn a residual function to improve it. Does better than deep RL or initial policy alone.
1
1
13
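The "start with a meh policy, learn a residual" idea above can be sketched in a few lines. This is a minimal illustration of the residual decomposition, not the paper's implementation; the proportional controller is a hypothetical stand-in for the initial policy:

```python
# Residual policy learning sketch: final action = initial action + learned residual.

def initial_policy(state):
    # A "meh" hand-designed controller, e.g., a proportional controller toward 0.
    return -0.5 * state

def make_residual_policy(base_policy, residual_fn):
    # The combined policy adds a learned correction to the base controller.
    def policy(state):
        return base_policy(state) + residual_fn(state)
    return policy

# Before any learning, the residual is zero, so the combined policy exactly
# matches the initial policy -- a safe starting point for RL fine-tuning.
zero_residual = lambda state: 0.0
policy = make_residual_policy(initial_policy, zero_residual)
```

In practice the residual function is a neural network trained with RL; initializing it near zero is what lets training start from the initial policy's performance rather than from scratch.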
@tomssilver
Tom Silver
1 year
The submission portal for our RSS workshop on learning for TAMP is now open! The deadline is May 19. We're going to have some really exciting speakers. The format is hybrid, so even if you can't make it to Korea, consider submitting!
1
3
13
@tomssilver
Tom Silver
4 years
Curious about task and motion planning? Check out our new survey paper: (w/ Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Leslie Pack Kaelbling, Tomás Lozano-Pérez)
0
0
13
@tomssilver
Tom Silver
9 months
The submission deadline for our LEAP Workshop at @corl_conf 2023 has been extended to Sep 30!
Tweet media one
@tomssilver
Tom Silver
10 months
Very excited to be organizing the Workshop on Learning Effective Abstractions for Planning (LEAP) at #CoRL2023 with @shah__naman @georgiachal @_ericrosen @davidpaulius and Beomjoon Kim! We have fantastic speakers lined up. Website: Deadline: September 18
Tweet media one
0
13
37
0
3
12
@tomssilver
Tom Silver
2 months
…a football signed by @JohnCUrschel 😍
Tweet media one
0
0
12
@tomssilver
Tom Silver
11 months
New blog post: TL;DR: Problem Setting sections should be standard in AI papers!
0
1
12
@tomssilver
Tom Silver
8 months
I'm really excited for our upcoming #CoRL2023 workshop on learning effective abstractions for planning (LEAP)! The deadline to submit a paper is coming up quickly (Sep 30). We're accepting submissions in either the CoRL or ICRA format. Hope to see you there!
@shah__naman
Naman Shah
8 months
5 days left!! Consider submitting your work on learning abstractions for decision-making in robotics in our (@tomssilver, @davidpaulius, @_ericrosen, @GeorgiaChal, Beomjoon Kim) workshop #LEAP at #CoRL23.
Tweet media one
0
7
17
0
3
11
@tomssilver
Tom Silver
2 years
Our previous work made the crucial assumption that relational state abstractions, in the form of logical predicates, were provided. The main contribution in our new preprint is to learn these predicates from data, without any supervision on their form or number.
1
0
10
@tomssilver
Tom Silver
4 years
What paper do you come back to most often for another read? For me, it is "From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning" by George Konidaris, Leslie Kaelbling, and Tomas Lozano-Perez. ()
1
0
10
@tomssilver
Tom Silver
3 years
Check out our new blog posts!
@MIT_LISLab
Learning and Intelligent Systems (LIS) @ MIT
3 years
New blog posts (two for the price of one!) Rohan Chitnis and @tomssilver discuss their recent work on learning to generate abstractions for faster planning.
1
1
26
0
0
10
@tomssilver
Tom Silver
7 months
Excited to share this new blog post led by @nishanthkumar23 @williebeit and Kathryn Le! It's a great primer on bilevel planning, the bedrock of our learning and planning work over the last few years.
@nishanthkumar23
Nishanth Kumar
7 months
Ever heard about "Bilevel Planning" or "Task and Motion Planning", but been unsure what those words mean? Ever wanted a gentle intro to these methods so you can just understand what's going on? Our new blog post might help!
3
18
81
1
0
10
@tomssilver
Tom Silver
3 years
running brew upgrade for the first time in a few months
Tweet media one
0
0
9
@tomssilver
Tom Silver
2 years
We're getting some great submissions to this CoRL workshop! Add yours by November 1 :)
@tomssilver
Tom Silver
2 years
Excited to co-organize this CoRL 2022 workshop with Rohan Chitnis, @GregoryJStein , @Yezhou_Yang , and @jana_kosecka ! Submissions are now open.
Tweet media one
0
7
27
0
0
9
@tomssilver
Tom Silver
2 years
“Hi Tom - This is a bit of a weird one. It turns out that in Ubuntu 22.04 there's now a system user, related to the Trusted Platform Module, that is also called 'tss', and that's breaking your ability to log in.” Time to legally change my name…
2
0
9
@tomssilver
Tom Silver
2 years
Next (), we considered neuro-symbolic techniques for learning two additional components: (1) samplers, which stochastically refine the abstract actions into low-level controllers; and (2) a low-level transition model.
1
0
8
@tomssilver
Tom Silver
2 years
We finally came up with an objective for learning predicates that is efficient enough to evaluate, but still closely tied to our real planning objective. The main idea is to use demonstration data to cheaply approximate the most expensive parts of bilevel planning.
1
0
8
@tomssilver
Tom Silver
2 years
This is joint work with @nishanthkumar23, @williebeit, Tomas Lozano-Perez, Leslie Kaelbling, and Josh Tenenbaum. Also, we’re looking forward to presenting an extended abstract at #RLDM2022!
0
0
8
@tomssilver
Tom Silver
3 years
I would like to nominate the grading interface on Gradescope for best invention of the 21st century
0
0
8
@tomssilver
Tom Silver
2 years
Predicate learning proved to be the most challenging part of the pipeline. Among many issues that we encountered, one interesting theme was: state abstractions that are good for making predictions (bisimulation) are not necessarily good for planning.
1
0
8
@tomssilver
Tom Silver
4 years
My lab now has a twitter account!
@MIT_LISLab
Learning and Intelligent Systems (LIS) @ MIT
4 years
Hello, world! We are the Learning and Intelligent Systems group @MIT_CSAIL , headed by Leslie Pack Kaelbling & Tomás Lozano-Pérez. We work on AI, ML, and robotics, and we’ll be mostly tweeting about new work by our group.
1
10
73
0
0
7
@tomssilver
Tom Silver
5 years
when firefox knows you a little too well
Tweet media one
0
0
7
@tomssilver
Tom Silver
5 years
Really enjoying @brandondamos's thesis: an incredible amount of cool work!
0
0
6
@tomssilver
Tom Silver
2 years
Our objective has been to learn all of the models needed for bilevel planning, as in task and motion planning (TAMP). Unlike in model-based reinforcement learning, a low-level transition model is not enough; we also need state and action abstractions for high-level planning.
1
0
7
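The bilevel planning setup described above can be sketched as a toy "search-then-sample" loop: a high-level search proposes a skeleton of abstract actions, and a low-level sampler tries to refine each abstract action into concrete continuous parameters. This is a hypothetical toy domain for illustration, not the authors' system:

```python
import itertools
import random

random.seed(0)

OPERATORS = ["pick", "place"]       # abstract actions
GOAL_SKELETON = ("pick", "place")   # the operator sequence that achieves the goal

def high_level_search(max_len=2):
    # Enumerate abstract operator sequences, shortest first.
    for length in range(1, max_len + 1):
        for skeleton in itertools.product(OPERATORS, repeat=length):
            if skeleton == GOAL_SKELETON:   # toy goal test
                yield skeleton

def refine(skeleton, num_samples=50):
    # Try to ground each abstract action with a sampled continuous parameter.
    plan = []
    for action in skeleton:
        for _ in range(num_samples):
            grasp = random.random()
            if grasp > 0.2:                 # stand-in for a feasibility check
                plan.append((action, grasp))
                break
        else:
            return None                     # refinement failed; backtrack
    return plan

plan = None
for skeleton in high_level_search():
    plan = refine(skeleton)
    if plan is not None:
        break
```

The point of the two levels is that the high-level search handles long-horizon structure symbolically, while the sampler handles continuous feasibility; learning "all the models" means learning the operators, the samplers, and the low-level transition model rather than assuming them.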
@tomssilver
Tom Silver
1 year
Then on Saturday, I'll present our work on "Predicate Invention for Bilevel Planning" at the main conference (oral at 9:30am in Room 147B; poster at 6:15pm). Let me know via email if you're at #AAAI23 and want to chat!
0
0
6
@tomssilver
Tom Silver
6 months
Excellent blog post by @nishanthkumar23 on one of the central questions in our field right now!
@nishanthkumar23
Nishanth Kumar
6 months
There was a lot of good and interesting debate on "is scaling all we need to solve robotics?" at #CoRL23. I spent some time writing up a blog post about all the points I heard on both sides:
22
47
257
0
0
6
@tomssilver
Tom Silver
5 years
In your AI research, which of the following do you wonder most often?
How did evolution find X?
14
How can a child learn X?
20
How did I learn X?
10
Other
4
0
0
6
@tomssilver
Tom Silver
7 months
Also check out @nishanthkumar23 and @williebeit's #CoRL2023 paper, the latest advance in our effort to learn all the models that you need to do bilevel planning, this time in BEHAVIOR! Website: Code: Paper:
0
0
4
@tomssilver
Tom Silver
3 years
I'm sure this is not original, but "citation types" would be useful for following related work trails. E.g., [1] Directly extending approach [2] Same problem setting ([2b] used as baseline) [3] Supports an assertion but otherwise tangential [4] Unrelated self-citation :)
1
0
6
@tomssilver
Tom Silver
5 years
Looking forward to presenting this work on few-shot imitation learning with programmatic policies at AAAI 2020! Check out the updated arXiv paper here:
0
0
5
@tomssilver
Tom Silver
5 years
looks like another nice benchmark for learning and physical reasoning with tools! see also the “Tools” challenge:
@AIatMeta
AI at Meta
5 years
Facebook AI researchers have released PHYRE, a new open benchmark for assessing an #AI system’s capacity for reasoning about the physical laws that govern real-world environments.
6
180
473
0
0
5
@tomssilver
Tom Silver
3 years
"Taskography" looks like a really cool benchmark; hoping to use it in my work!
0
1
5
@tomssilver
Tom Silver
1 year
Joining the #ChatGPT party. I am shocked that it can do this:
Tweet media one
2
0
5
@tomssilver
Tom Silver
2 years
A nice passage about science and social responsibility (from 1972!)
Tweet media one
1
1
4
@tomssilver
Tom Silver
5 years
(2/2) See also (concurrent/independent) work out of Berkeley, Siemens, TUHH: Very nice results on a real robot. @jackclarkSF calls this Franken-RL, which I love.
0
0
4
@tomssilver
Tom Silver
2 years
Looking forward to presenting our work on predicate invention for bilevel planning at #AAAI23! Paper: Code:
0
0
4
@tomssilver
Tom Silver
4 years
Looking forward to talking about PDDLGym () at the ICAPS PRL workshop today and tomorrow, with Rohan Chitnis. Registration is free -- stop by!
0
0
4
@tomssilver
Tom Silver
7 months
Starting momentarily on the 2nd floor of the Starling (Hub 3)!
@shah__naman
Naman Shah
7 months
Consider stopping by our workshop on learning abstractions for long-horizon sequential decision-making at #CoRL2023. Room: Hub 3. For more details:
Tweet media one
0
1
6
0
0
4
@tomssilver
Tom Silver
3 years
I think this is some progress, but there is still a lot to figure out, especially: - How can we learn the symbolic predicates? (cf. work by George Konidaris; Masataro Asai) - Can we combine with work on learning behavior priors? (see discussion in NSRT paper) More to come!
0
0
4
@tomssilver
Tom Silver
5 years
Very cool and inspiring work by @KelseyRAllen, @realkevinsmith, and Josh Tenenbaum
@KelseyRAllen
Kelsey Allen
5 years
Our “Tools” challenge is finally out! For our #CogSci2019 paper (with @realkevinsmith, arxiv), we present a fun game to investigate rapid physical trial-and-error learning in humans and machines.
2
25
88
0
0
3
@tomssilver
Tom Silver
3 years
Wow, the interactive visualizations in this article are really cool!
@joeaortiz
Joseph Ortiz
3 years
Very excited to share our interactive article: A visual introduction to Gaussian Belief Propagation! It's part proposition paper, part tutorial with interactive figures throughout to give intuition. Article: Work with: @talfanevans, @AjdDavison 1/n
7
129
575
0
0
3
@tomssilver
Tom Silver
5 years
@hardmaru These slides also made a huge impact on me when I was just starting out in the field. Thanks for resurfacing them, @hardmaru!
0
0
3
@tomssilver
Tom Silver
2 years
Looking forward to #RLDM2022! The last RLDM still ranks at the top of my list of favorite conferences. Email me if you're there & want to chat!
0
0
3
@tomssilver
Tom Silver
4 years
Just found another:
0
0
2
@tomssilver
Tom Silver
7 years
@karpathy enjoyed your YConf slides! Was wondering - where is this (artificial life?) environment from?
Tweet media one
0
0
2
@tomssilver
Tom Silver
3 years
First we looked at the case where a low-level physics simulator is known and some discrete predicates are defined. We showed how to learn symbolic operators that can be used with a "search-then-sample" TAMP planner. Paper: Video:
1
0
2
@tomssilver
Tom Silver
12 years
@BarackObama How many more shootings will there have to be before you push for real gun control legislation? #ESB
0
1
2
@tomssilver
Tom Silver
4 years
Cool work!
@svlevine
Sergey Levine
4 years
RL agents explore randomly. Humans explore by trying potential good behaviors, because we have a prior on what might be useful. Can robots get such behavioral priors? That's the idea in Parrot. arxiv web vid
1
22
128
0
0
1
@tomssilver
Tom Silver
3 years
Next we looked at the harder case where we don't have a known simulator. This led us to Neuro-Symbolic Relational Transition (NSRT) models, which can be learned from transition data and used for bilevel TAMP. Paper: Video:
1
0
2
@tomssilver
Tom Silver
5 years
@GoAstroMo i don't know how that pubmed paper was published without addressing the monster attack hypothesis
1
0
2
@tomssilver
Tom Silver
8 years
@comcast service outage in mountain view
1
3
2
@tomssilver
Tom Silver
3 years
1
0
2
@tomssilver
Tom Silver
12 years
@luttequotidien Are you okay? These rockets aren't near you are they?
0
0
1
@tomssilver
Tom Silver
8 years
Post about my proposed new test for #ArtificialIntelligence
0
1
1
@tomssilver
Tom Silver
12 years
@maureeng5 @luttequotidien sameeee it makes me weep and moan and gnash my teeth
0
0
1
@tomssilver
Tom Silver
5 years
GVG-AI + Procedural Generation Paper: Justesen et al. NeurIPS-WS 2018 () Code: SoTA: A2C with progressive procedural content generation
1
1
1
@tomssilver
Tom Silver
3 years
@mark_riedl My understanding is that the subtask graphs are like the recipes, and they learn those from data. Tagging the authors in case they want to clarify :) @sungryulls @jaejaywoo
0
0
1
@tomssilver
Tom Silver
4 years
The key idea is that a self-imposed constraint, like "I'm going to stay inside the kitchen", makes some parts of a planning problem irrelevant, e.g., the weather outside. So if we can learn a good constraint to impose, we can automatically derive a problem abstraction.
1
0
1
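The constraint-to-abstraction idea above can be sketched concretely. In this hypothetical toy domain (not the CAMPs implementation), each state variable is tagged with the locations where it matters, and imposing a constraint on where the agent may go lets us drop every variable that no longer matters anywhere reachable:

```python
# Each state variable maps to the set of locations where it is relevant.
relevance = {
    "robot_pose": {"kitchen", "outside"},
    "stove_on": {"kitchen"},
    "weather": {"outside"},
}

def abstract_variables(relevance, allowed_locations):
    # Keep only variables relevant somewhere the constraint still allows;
    # everything else can be dropped from the abstracted planning problem.
    return {var for var, locs in relevance.items() if locs & allowed_locations}

# Constraint: "I'm going to stay inside the kitchen."
kept = abstract_variables(relevance, {"kitchen"})
# "weather" becomes irrelevant and is pruned from the abstraction.
```

Learning then amounts to predicting which self-imposed constraint yields an abstraction that is both small and still sufficient to solve the task.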
@tomssilver
Tom Silver
12 years
@luttequotidien i dunno if they have the colbert report in italy but your boi was on last night http://t.co/KUfoJpTm
0
0
0
@tomssilver
Tom Silver
5 years
@mihail_eric see note 2 here: :)
1
0
1
@tomssilver
Tom Silver
12 years
@edwkoch picking up my tux with my mom :-) #shesmydate #jk @bonjoursteffyg
0
0
1
@tomssilver
Tom Silver
12 years
@maureeng5 at least I got the celebratory poop part right
0
0
1
@tomssilver
Tom Silver
12 years
@ben_gar I like the Reeses blast or the Oreo blast. @bonjoursteffyg likes cranberry limeades and banana shakes with caramel
1
0
1
@tomssilver
Tom Silver
7 years
@hugo_larochelle Once interactive datasets are more available, how many current UL approaches will be overkill in light of active feature learning?
0
0
1