Levi Lelis
@levilelis
Followers: 721 · Following: 3K · Media: 23 · Statuses: 575
Artificial Intelligence Researcher - Associate Professor - University of Alberta - Canada CIFAR AI Chair (he/him, ele/dele).
Edmonton, Canada
Joined July 2009
I recently spoke at IPAM's Naturalistic Approaches to Artificial Intelligence Workshop, and shared some of the programmatic perspectives we're exploring in reinforcement learning research. https://t.co/uZCrJupBA2
The Department of Computing Science at the University of Alberta has an opening for another tenure-track faculty position in robotics. Please spread the word. I can attest to how awesome @UAlbertaCS and @AmiiThinks are! (Official job posting coming soon.)
Join our Reinforcement Learning Group next week on Monday, September 29th for a session with Esraa Elelimy on "Deep Reinforcement Learning with Gradient Eligibility Traces." Thanks to @rahul_narava for organizing this event ✨ Learn more: https://t.co/RR9nRvYI1r
Happy to share that Searching Latent Program Spaces has been accepted as a Spotlight at #NeurIPS2025 ✨ It's been a pleasure to work with @ClementBonnet16 on this! See you all in San Diego 🌴 👋, https://t.co/lnIQvRbzyK
I am hiring a postdoc at @UAlberta, affiliated with @AmiiThinks! We study language processing in the brain using LLMs and neuroimaging. Looking for someone with experience in, ideally, both neuroimaging and LLMs, or a willingness to learn. Email me Qs https://t.co/kYcuUfTfZT
My acceptance speech at the Turing award ceremony: Good evening ladies and gentlemen. The main idea of reinforcement learning is that a machine might discover what to do on its own, without being told, from its own experience, by trial and error. As far as I know, the first
Rina’s work has inspired me since my early days as a PhD student. I’m so happy to see her receive this very well-deserved award. Congratulations, Rina!
#IJCAI2025 What inspires her research? Rina Dechter, 2025 IJCAI Research Excellence Award recipient, takes us on a journey in her invited talk: Graphical Models Meet Heuristic Search: A Personal Journey into Automated Reasoning 📆 22 August, 2 PM 🌐 https://t.co/Edx4Ig2AcU
@DimitrisPapail Test Time Compute was "invented" the same way America was "discovered".. https://t.co/a2KGDXrJKH
Inference Time Computation is NOT new--we wanted to get rid of it, but are letting it back in out of necessity.. #SundayHarangue (#NeurIPS2024 workshop edition) Noam Brown @polynoamial has been giving talks on o1 suggesting that including inference time computation was a
Kicking off #RLC2025 with our Workshop on Programmatic Reinforcement Learning! This workshop explores how programmatic representations can improve interpretability, generalization, efficiency, and safety in RL.
Armando's lecture notes are my favorite resources for program synthesis. Definitely worth reading!
The 2023 "Introduction to Program Synthesis" lecture series from Armando Solar-Lezama at @MIT_CSAIL is an amazing resource. Topics:
- Inductive Synthesis
- SMT & SyGuS
- PS + RL
- Neurosymbolic Learning
"...at the intersection of programming languages, formal methods and AI."
Are programmatic policies really better at generalizing OOD than neural policies, or are the benchmarks biased? This position paper revisits 4 prior studies and finds that neural policies can match programmatic ones - if you adjust training (sparse observations, reward shaping, etc.)
Sparsity can also be used to partially explain some of the successes of programmatic representations, such as FlashFill. DSLs and the way we search over the space of programs naturally give us sparse representations, which favor sample efficiency and OOD generalization.
After studying the mathematics and computation of Sparsity for nearly 20 years, I have just realized that it is much more important than I ever realized before. It truly serves as *the* model problem to understand deep networks and even intelligence to a large extent, from a
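One way to see the sparsity point concretely: a DSL restricts the hypothesis space to short compositions of a few primitives, so a handful of input-output examples pins down a program. A toy sketch of bottom-up enumerative synthesis, FlashFill-style (the DSL and primitive names here are invented for illustration; real FlashFill uses a far richer DSL and version-space algebra rather than plain enumeration):

```python
from itertools import product

# Tiny invented DSL: each primitive is a named string -> string function.
PRIMITIVES = {
    "lower": str.lower,
    "upper": str.upper,
    "strip": str.strip,
    "first_word": lambda s: s.split()[0] if s.split() else s,
}

def run_program(names, s):
    """Apply a program (a sequence of primitive names) to a string."""
    for name in names:
        s = PRIMITIVES[name](s)
    return s

def synthesize(examples, max_depth=3):
    """Bottom-up enumeration: return the shortest composition of
    primitives consistent with all input-output examples."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            if all(run_program(names, x) == y for x, y in examples):
                return list(names)
    return None

examples = [("  Hello World ", "hello world"),
            ("  FOO Bar ", "foo bar")]
prog = synthesize(examples)  # a 2-primitive program fits both examples
```

With four primitives and programs of depth at most three, the whole space has under a hundred candidates - the sparsity of the representation, not the amount of data, is what makes two examples enough.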
Attending #ICML2025 and interested in programmatic representations in ML? This workshop is for you. :-)
Our #ICML2025 Programmatic Representations for Agent Learning workshop will take place tomorrow, July 18th, at the West Meeting Room 301-305, exploring how programmatic representations can make agent learning more interpretable, generalizable, efficient, and safe! Come join us!
Catch Jake at #ICML2025! He’s presenting “Subgoal-Guided Heuristic Search with Learned Subgoals” in Poster Session 2 West — tomorrow, 4:30–7:00 pm. https://t.co/wgbOqRrgvh
arxiv.org
Policy tree search is a family of tree search algorithms that use a policy to guide the search. These algorithms provide guarantees on the number of expansions required to solve a given problem...
🛬 Vancouver for #icml, I’ll be presenting our work on Subgoal Guided Heuristic Search with Learned Subgoals on Tuesday from 4:30-7:00pm. Come stop by and say hello 👋
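The idea behind policy tree search, sketched minimally: order node expansions by depth(n) / π(n), so paths the policy considers likely are expanded first, which is what lets one bound the number of expansions in terms of the probability the policy assigns to a solution. This is a simplified Levin Tree Search-style sketch with an invented toy domain, not the algorithm from the paper:

```python
import heapq
import itertools

def policy_tree_search(start, is_goal, successors):
    """Best-first search ordered by depth(n) / pi(n): nodes whose
    paths the policy deems probable are expanded first."""
    tie = itertools.count()                       # deterministic tie-breaking
    frontier = [(0.0, next(tie), start, 1.0, 0)]  # (priority, tie, state, pi, depth)
    expanded = set()
    expansions = 0
    while frontier:
        _, _, state, prob, depth = heapq.heappop(frontier)
        if is_goal(state):
            return state, expansions
        if state in expanded:
            continue
        expanded.add(state)
        expansions += 1
        for child, p in successors(state):
            cp = prob * p                         # probability of the path to child
            heapq.heappush(frontier,
                           ((depth + 1) / cp, next(tie), child, cp, depth + 1))
    return None, expansions

# Toy domain (invented): build the string "abba" one character at a
# time; the hypothetical policy gives 0.7 to the correct next letter.
target = "abba"

def successors(s):
    if len(s) >= len(target):
        return []
    return [(s + c, 0.7 if c == target[len(s)] else 0.3) for c in "ab"]

solution, n_exp = policy_tree_search("", lambda s: s == target, successors)
```

The better the policy (the closer π(solution) is to 1), the fewer expansions the search needs - that trade-off is exactly what the guarantees in this line of work formalize.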
Conferences should welcome refutations and critiques in their main track. Unfortunately, they often don't. We had a critique accepted once to a leading conference, but all three reviewers recommended rejection—thank you, AC! A special track for this type of work is a good start.
New position paper! Machine Learning Conferences Should Establish a “Refutations and Critiques” Track Joint w/ @sanmikoyejo @JoshuaK92829 @yegordb @bremen79 @koustuvsinha @in4dmatics @JesseDodge @suchenzang @BrandoHablando @MGerstgrasser @is_h_a @ObbadElyas 1/6
Some great work from @AmirhoseinRj and @levilelis on neural policies vs programmatic policies for OOD generalization. I'm looking forward to discussing such topics further at the Workshop on Programmatic Representations for Agent Learning @icmlconf, which Levi is co-organising.
Previous work has shown that programmatic policies—computer programs written in a domain-specific language—generalize to out-of-distribution problems more easily than neural policies. Is this really the case? 🧵
Sometimes, neural networks (with small tweaks) are enough. Other times, solving the task requires a programmatic representation to capture algorithmic structure. Preprint:
arxiv.org
Algorithms for learning programmatic representations for sequential decision-making problems are often evaluated on out-of-distribution (OOD) problems, with the common conclusion that programmatic...
1. Is the representation expressive enough to find solutions that generalize?
2. Can our search procedure find a policy that generalizes?
So, when should we use neural vs. programmatic policies for OOD generalization? Rather than treating programmatic policies as the default, we should ask:
As an illustrative example, we changed the grid-world task so that a solution policy must use a queue or stack to solve a navigation task. FunSearch found a Python program that provably generalizes. As one would expect, neural nets couldn’t solve the problem.