Matthew Macfarlane @ NeurIPS 2025
@MattVMacfarlane
Followers: 924 · Following: 5K · Media: 10 · Statuses: 224
Working on Open-Ended Learning, Meta Learning, World Models @AmlabUvA prev @Microsoft. Views Are Not My Own. 🏴
Amsterdam
Joined September 2024
Check out our work on the Latent Program Network for inductive program synthesis! It is a new architecture for latent program search that enables efficient test-time adaptation without the need for parameter fine-tuning.
Introducing the Latent Program Network (LPN), a new architecture for inductive program synthesis that builds in test-time adaptation by learning a latent space that can be used for search 🔎 Inspired by @arcprize 🧩, we designed LPN to tackle out-of-distribution reasoning tasks!
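The test-time adaptation in the announcement above can be sketched in a few lines. This is a toy illustration, not the LPN implementation: the linear "decoder" `W`, the data, and all names are assumptions standing in for a learned neural decoder. The point is the mechanism: the decoder stays frozen, and gradient descent runs only on a low-dimensional latent `z` fitted to the support input-output pairs.

```python
import numpy as np

# Toy sketch of LPN-style test-time adaptation (illustrative only).
# A frozen linear "decoder" W maps (input, latent z) -> output; we
# gradient-search z on the support pairs instead of fine-tuning W.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))          # frozen decoder weights (toy)
z_true = np.array([1.0, -2.0])       # latent "program" to recover

X = rng.normal(size=(6, 4))          # support inputs
Y = X @ W @ z_true                   # support outputs under z_true

z = np.zeros(2)                      # start from the prior mean
lr = 0.01
losses = []
for _ in range(500):
    feats = X @ W                    # (6, 2) features seen by z
    residual = feats @ z - Y
    losses.append(float(residual @ residual) / len(Y))
    z -= lr * 2 / len(Y) * feats.T @ residual   # gradient step on z only

x_query = rng.normal(size=4)
prediction = x_query @ W @ z         # decode the query with the adapted latent
```

Note that only the two entries of `z` are updated; every decoder weight stays fixed, which is what makes the adaptation cheap.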
I'm heading to NeurIPS next week. I'm looking forward to chatting about representations and inductive approaches, including causal world models, neuro-symbolic approaches, logic programming, and program synthesis. If that sounds like you, drop me a message. I'd love to meet!
I'll be attending #neurips2025 this year from Dec 1st-8th 🇺🇸 Excited to catch up with friends, collaborators and make new connections. If you're working on Open-Ended Learning, Reasoning via Test-Time Compute, or World Models, DM me and let's grab coffee and chat! ☕
I haven't read the paper in detail, but Puzzle 1 is all I need to be convinced that the Discovery team at DeepMind has managed to capture true beauty! What a move ;) Hint: try to play the dumbest possible move you can think of.
I am excited to share work we did in the Discovery team at @GoogleDeepMind using RL and generative models to discover creative chess puzzles 🔊♟️♟️ #neurips2025 🎨 While strong chess players intuitively recognize the beauty of a position, articulating the precise elements that …
Happy to share that Searching Latent Program Spaces has been accepted as a Spotlight at #NeurIPS2025 ✨ It's been a pleasure to work with @ClementBonnet16 on this! See you all in San Diego 🌴 👋, https://t.co/lnIQvRbzyK
There is no present-day Bell Labs not because BL had 10+ Nobel prizes, completely new paradigms such as information theory, or a way-larger scope than all of the big AI companies combined; but because they gave complete academic freedom with 0 corporate oversight to their …
We do have a Bell Labs, it's called Google! Sadly, the only thing they invent is AI. We need a Bell Labs for physical tech, especially electric tech.
I’ll be presenting two workshop papers: “Searching Latent Program Spaces” (oral at Programmatic Representations for Agent Learning) & “Instilling Parallel Reasoning into Language Models” (AI4Math).
Excited to attend #ICML2025 from Tue 15th to Sat 19th! Looking forward to connecting and discussing topics such as latent / parallel reasoning, adaptive compute, Reinforcement Learning, and open-endedness. DM or find me there, let's chat!
Some great work from @AmirhoseinRj and @levilelis on neural policies vs programmatic policies for OOD generalization. I'm looking forward to discussing such topics further at the Workshop on Programmatic Representations for Agent Learning @icmlconf, which Levi is co-organising.
Previous work has shown that programmatic policies—computer programs written in a domain-specific language—generalize to out-of-distribution problems more easily than neural policies. Is this really the case? 🧵
Thrilled to see our NeurIPS 2024 paper, Sequential Monte Carlo Policy Optimisation ( https://t.co/UCaQLgGoJH), featured in Kevin's Reinforcement Learning: A Comprehensive Overview, which additionally recognises SMC as a competitive, scalable online planner. A fantastic modern …
arxiv.org
Leveraging planning during learning and decision-making is central to the long-term development of intelligent agents. Recent works have successfully combined tree-based search methods and...
I'm happy to announce that v2 of my RL tutorial is now online. I added a new chapter on multi-agent RL, improved the sections on 'RL as inference' and 'RL+LLMs' (although the latter is still WIP), fixed some typos, etc. https://t.co/dWe5uNgcgp
Check out @ClementBonnet16 discussing Searching Latent Program Spaces tomorrow on @MLStreetTalk!
We spoke with @ClementBonnet16 at NeurIPS about his extremely innovative approach to the @arcprize using a form of test-time inference where you search the latent space of a VAE before making an optimal prediction. @fchollet was so impressed, he hired Clem shortly after! 😃
3/ Searching Latent Program Spaces @ClementBonnet16 ( https://t.co/YBN6R7ADNQ) similarly uses a meta-induction network, but our insight was that gradient descent can be performed in the compact task embedding space, as opposed to the parameter space.
arxiv.org
General intelligence requires systems that acquire new skills efficiently and generalize beyond their training distributions. Although program synthesis approaches have strong generalization...
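The contrast drawn above — searching the parameter space rather than a compact task embedding — can be sketched with the same kind of toy model. Everything here is illustrative (a hypothetical linear decoder stands in for a real network); the point is simply that test-time fine-tuning updates every decoder weight on the support pairs.

```python
import numpy as np

# Toy sketch of the parameter-space alternative: test-time fine-tuning
# gradient-updates all decoder weights W on the support examples.
# (Illustrative linear decoder; real systems fine-tune a neural net.)

rng = np.random.default_rng(1)
d_in, d_out = 8, 3
W = rng.normal(size=(d_in, d_out)) * 0.1   # "pretrained" weights (toy)

X = rng.normal(size=(5, d_in))             # support inputs
Y = rng.normal(size=(5, d_out))            # support outputs

def mse(weights):
    return float(np.mean((X @ weights - Y) ** 2))

before = mse(W)
lr = 0.01
for _ in range(500):
    W -= lr * 2 / len(X) * X.T @ (X @ W - Y)   # gradient step on all of W
after = mse(W)
```

Here the search space is `d_in * d_out = 24` weights (and far more for a real network), whereas the latent-space approach searches only a handful of embedding dimensions.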
2/ Neural Program Meta-Induction explores transductively mapping input-output examples of a program, plus a single input, to predict the output (named Meta). In ideas essentially identical to those later applied to ARC-AGI, they then performed gradient-based fine-tuning of the model parameters (named …
1/ An interesting outcome of the @arcprize 2024 was the high performance of test-time fine-tuning. It is common in the field of AI to emphasize novelty over historical connections, so I'd like to highlight an early application of this exact idea, applied to program synthesis from …
Epic Xmas haul ⛄ looking forward to getting stuck in #activeinference @NickJChater @MITCoCoSci @ClementBonnet16
Are you interested in policy + people + AI? Interested in program synthesis? How about reinforcement learning?🧐 December 17th, tune in with our Research Connections group to chat about these topics with @levilelis @anna_kawakami and @DennisSoemers!
If you are interested in scalable, search-based 🔎 policy improvement operators, come chat with me at our poster on SPO tomorrow!
Excited to share our latest work on Sequential Monte Carlo Policy Optimisation (SPO)🔥— a scalable, search-based RL algorithm leveraging SMC as a policy improvement operator for both continuous and discrete environments! 📍 Catch us tomorrow at #NeurIPS2024 (poster #94776) from …
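The idea of SMC as a policy improvement operator can be sketched in miniature. This is a hedged toy, not SPO itself: a known per-action reward array stands in for the environment rollouts and value estimates the real algorithm uses, and the temperature and particle counts are arbitrary. The mechanism shown is the SMC core: sample particles from the current policy, reweight them by exponentiated reward, and resample, so the empirical distribution shifts mass toward better actions.

```python
import numpy as np

# Toy sketch of SMC-style reweighting as a policy improvement operator.
# (Illustrative only: `reward` stands in for learned value estimates.)

rng = np.random.default_rng(1)
n_actions, n_particles, temp = 4, 512, 1.0
prior = np.full(n_actions, 1 / n_actions)        # current (uniform) policy
reward = np.array([0.1, 0.5, 0.2, 0.9])          # per-action values (toy)

# 1) sample particles from the current policy
particles = rng.choice(n_actions, size=n_particles, p=prior)
# 2) weight each particle by exp(reward / temperature)
weights = np.exp(reward[particles] / temp)
weights /= weights.sum()
# 3) resample: the empirical distribution of survivors is the improved target
resampled = rng.choice(particles, size=n_particles, p=weights)
improved = np.bincount(resampled, minlength=n_actions) / n_particles
```

After one reweight-resample step, `improved` concentrates mass on the highest-reward action relative to the uniform prior, which is exactly the "improvement" a policy can then be distilled towards.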
Great to have Searching Latent Program Spaces ( https://t.co/YBN6R7BbDo) recognised with a 🥉3rd place paper award in the @arcprize! It was a pleasure working on this with @ClementBonnet16. Looking forward to continuing to develop these ideas even further.
Today we're announcing the winners of ARC Prize 2024. We're also publishing an extensive technical report on what we learned from the competition (link in the next tweet). The state-of-the-art went from 33% to 55.5%, the largest single-year increase we've seen since 2020. The …
I'm also attending the System-2 Reasoning workshop (Sun, 15 Dec, 8:55 a.m.). @fchollet will discuss recent approaches from the last iteration of the ARC challenge @arcprize. Happy to discuss my paper Searching Latent Program Spaces (with @ClementBonnet16), submitted to the competition.