Matthew Macfarlane @ NeurIPS 2025

@MattVMacfarlane

Followers: 924 · Following: 5K · Media: 10 · Statuses: 224

Working on Open-Ended Learning, Meta Learning, World Models @AmlabUvA prev @Microsoft. Views Are Not My Own. 🏴󠁧󠁢󠁳󠁣󠁴󠁿

Amsterdam
Joined September 2024
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
1 year
Check out our work on the Latent Program Network for inductive program synthesis! It is a new architecture for latent program search that enables efficient test-time adaptation without the need for parameter fine-tuning.
@ClementBonnet16
Clem Bonnet
1 year
Introducing Latent Program Network (LPN), a new architecture for inductive program synthesis that builds in test-time adaptation by learning a latent space that can be used for search 🔎 Inspired by @arcprize 🧩, we designed LPN to tackle out-of-distribution reasoning tasks!
0 · 2 · 31
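To make the mechanism concrete, here is a minimal sketch of search in a latent program space, assuming a trained, frozen decoder. Every name and shape here (decode, W_x, W_z, the random-search strategy) is an illustrative stand-in, not the actual LPN code:

```python
# Illustrative sketch of latent-program search: adapt to a new task by
# searching over z while the decoder stays frozen (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)
D_IN = D_OUT = 8   # toy input/output dimensions
D_Z = 4            # compact latent "program" dimension

# Stand-ins for a trained, frozen decoder that applies the program
# encoded by z to an input x.
W_x = rng.normal(size=(D_IN, D_OUT))
W_z = rng.normal(size=(D_Z, D_OUT))

def decode(z, x):
    return x @ W_x + z @ W_z

def search_latent(demo_xs, demo_ys, n_samples=512):
    # Test-time adaptation without fine-tuning: sample candidate latents
    # and keep the one that best reconstructs the demonstration pairs.
    zs = rng.normal(size=(n_samples, D_Z))
    preds = demo_xs @ W_x + (zs @ W_z)[:, None, :]   # (samples, demos, D_OUT)
    losses = ((preds - demo_ys) ** 2).mean(axis=(1, 2))
    return zs[losses.argmin()]

# Adapt to a new task from its demonstrations, then predict on a test input.
demo_xs = rng.normal(size=(3, D_IN))
demo_ys = decode(rng.normal(size=D_Z), demo_xs)
z_star = search_latent(demo_xs, demo_ys)
y_pred = decode(z_star, rng.normal(size=D_IN))
```

In the real system the decoder is a neural network and the search is more sophisticated, but the key property is the same: adaptation never touches the weights.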
@ClementBonnet16
Clem Bonnet
11 days
I'm heading to NeurIPS next week. I'm looking forward to chatting about representations and inductive approaches, including causal world models, neuro-symbolic approaches, logic programming, and program synthesis. If that sounds like you, drop me a message. I'd love to meet!
2 · 1 · 15
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
13 days
I'll be attending #neurips2025 this year from Dec 1st-8th 🇺🇸 Excited to catch up with friends and collaborators, and to make new connections. If you're working on Open-Ended Learning, Reasoning via Test-Time Compute, or World Models, DM me and let's grab coffee and chat! ☕
2 · 3 · 68
@basvanopheusden
basvanopheusden
1 month
I haven't read the paper in detail, but Puzzle 1 is all I need to be convinced that the Discovery team at DeepMind has managed to capture true beauty! What a move ;) Hint: try to play the dumbest possible move you can think of.
@TZahavy
Tom Zahavy
1 month
I am excited to share a work we did in the Discovery team at @GoogleDeepMind using RL and generative models to discover creative chess puzzles 🔊♟️♟️ #neurips2025 🎨 While strong chess players intuitively recognize the beauty of a position, articulating the precise elements that…
1 · 2 · 27
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
3 months
Happy to share that Searching Latent Program Spaces has been accepted as a Spotlight at #NeurIPS2025 ✨ It's been a pleasure to work with @ClementBonnet16 on this! See you all in San Diego 🌴 👋, https://t.co/lnIQvRbzyK
8 · 27 · 188
@MountainOfMoon
Arya Mazumdar
3 months
There is no present-day Bell Labs not because BL had 10+ Nobel prizes, completely new paradigms such as information theory, or a way larger scope than all of the big AI companies combined, but because they gave complete academic freedom with zero corporate oversight to their…
@Noahpinion
Noah Smith 🐇🇺🇸🇺🇦🇹🇼
3 months
We do have a Bell Labs, it's called Google! Sadly, the only thing they invent is AI. We need a Bell Labs for physical tech, especially electric tech.
3 · 7 · 74
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
5 months
I’ll be presenting two workshop papers: “Searching Latent Program Spaces” (oral at Programmatic Representations for Agent Learning) & “Instilling Parallel Reasoning into Language Models” (AI4Math).
1 · 1 · 10
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
5 months
Excited to attend #ICML2025 from Tue 15th to Sat 19th! Looking forward to connecting and discussing topics such as latent/parallel reasoning, adaptive compute, reinforcement learning, and open-endedness. DM or find me there, let's chat!
1 · 1 · 19
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
5 months
Some great work from @AmirhoseinRj and @levilelis on neural policies vs programmatic policies for OOD generalization. I'm looking forward to discussing such topics further at the Workshop on Programmatic Representations for Agent Learning @icmlconf, which Levi is co-organising.
@levilelis
Levi Lelis
5 months
Previous work has shown that programmatic policies—computer programs written in a domain-specific language—generalize to out-of-distribution problems more easily than neural policies. Is this really the case? 🧵
0 · 1 · 10
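As a toy illustration of the distinction (my own example, not from the paper): a programmatic policy is an explicit, inspectable program, so its decision rule is defined even far outside the training range:

```python
# Toy example (mine, not from the paper): a programmatic policy is an
# explicit program in a small DSL-like form, so its decision rule is
# inspectable and holds unchanged on out-of-distribution observations.
def programmatic_policy(obs):
    angle, angular_velocity = obs          # cart-pole-style observation
    if angle + 0.5 * angular_velocity > 0.0:
        return 1                           # push right
    return 0                               # push left

# A neural policy would encode the same boundary implicitly in weights
# fitted to the training distribution; far outside it, the program's
# behaviour is still defined by the rule above.
print(programmatic_policy((0.1, -0.4)))    # -> 0
```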
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
9 months
Thrilled to see our NeurIPS 2024 paper, Sequential Monte Carlo Policy Optimisation (https://t.co/UCaQLgGoJH), featured in Kevin's Reinforcement Learning: A Comprehensive Overview, which additionally recognises SMC as a competitive, scalable online planner. A fantastic modern…
arxiv.org
Leveraging planning during learning and decision-making is central to the long-term development of intelligent agents. Recent works have successfully combined tree-based search methods and...
@sirbayes
Kevin Patrick Murphy
9 months
I'm happy to announce that v2 of my RL tutorial is now online. I added a new chapter on multi-agent RL, and improved the sections on 'RL as inference' and 'RL+LLMs' (although latter is still WIP), fixed some typos, etc. https://t.co/dWe5uNgcgp
1 · 9 · 68
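For context, here is a rough sketch of SMC-style planning in general, under toy assumptions (a 1-D environment, a Gaussian proposal); this is my paraphrase of the generic technique, not the SPO implementation:

```python
# Rough sketch of SMC-style planning: roll particles forward under a
# proposal policy, reweight by reward, resample, and read off an
# improved first action (toy assumptions throughout).
import numpy as np

rng = np.random.default_rng(0)

def step(states, actions):
    next_states = states + actions          # toy 1-D dynamics
    rewards = -np.abs(next_states - 5.0)    # goal: reach x = 5
    return next_states, rewards

def smc_plan(state, n_particles=256, horizon=10, temperature=1.0):
    states = np.full(n_particles, state, dtype=float)
    first_actions = np.zeros(n_particles)
    log_w = np.zeros(n_particles)
    for t in range(horizon):
        actions = rng.normal(size=n_particles)   # proposal = prior policy
        states, rewards = step(states, actions)
        if t == 0:
            first_actions = actions
        log_w += rewards / temperature
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        if 1.0 / (w ** 2).sum() < n_particles / 2:   # ESS collapsed:
            idx = rng.choice(n_particles, size=n_particles, p=w)
            states, first_actions = states[idx], first_actions[idx]
            log_w[:] = 0.0                           # resample and reset
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    return float(np.sum(w * first_actions))  # posterior-weighted first action

print(smc_plan(0.0))   # leans toward positive actions (toward x = 5)
```

Resampling concentrates the particle budget on action sequences that are accumulating reward, which is the intuition behind using SMC as a policy improvement operator.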
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
10 months
Check out @ClementBonnet16 discussing Searching Latent Program Spaces tomorrow on @MLStreetTalk!
@MLStreetTalk
Machine Learning Street Talk
10 months
We spoke with @ClementBonnet16 at NeurIPS about his extremely innovative approach to the @arcprize using a form of test-time inference where you search the latent space of a VAE before making an optimal prediction. @fchollet was so impressed, he hired Clem shortly after! 😃 …
0 · 2 · 24
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
11 months
3/ Searching Latent Program Spaces @ClementBonnet16 (https://t.co/YBN6R7ADNQ) similarly uses a meta-induction network, but our insight was that gradient descent can be performed in the compact task embedding space, as opposed to the parameter space.
arxiv.org
General intelligence requires systems that acquire new skills efficiently and generalize beyond their training distributions. Although program synthesis approaches have strong generalization...
0 · 0 · 12
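A small sketch of the contrast drawn in 3/, under illustrative assumptions (a frozen linear decoder; not the paper's code): gradient steps touch only a compact task embedding z, never the decoder weights:

```python
# Illustrative sketch of gradient descent in a compact task-embedding
# space: only z is optimised; the decoder parameters stay frozen.
import numpy as np

rng = np.random.default_rng(1)
D, D_Z = 8, 4
W_x = rng.normal(size=(D, D))    # frozen decoder parameters
W_z = rng.normal(size=(D_Z, D))  # frozen: never updated below

def predict(z, xs):
    return xs @ W_x + z @ W_z

def adapt_z(xs, ys, steps=500, lr=0.02):
    z = np.zeros(D_Z)                     # only D_Z numbers are optimised
    for _ in range(steps):
        err = predict(z, xs) - ys                     # (n_demos, D)
        z -= lr * err.sum(axis=0) @ W_z.T / len(xs)   # analytic MSE grad wrt z
    return z

xs = rng.normal(size=(5, D))
z_true = rng.normal(size=D_Z)
z_hat = adapt_z(xs, predict(z_true, xs))
print(np.linalg.norm(z_hat - z_true))     # error shrinks toward zero
```

Here the search space is 4 numbers rather than the 96 frozen decoder weights; that gap is the argument for optimising in embedding space.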
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
11 months
2/ Neural Program Meta-Induction explores transductively mapping the input-output examples of a program, plus a single new input, to predict its output (named Meta). Essentially identical to ideas later applied to ARC-AGI, they then performed gradient-based fine-tuning of model parameters (named…
1 · 0 · 6
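And, for contrast with the latent-space sketch above, the parameter-space variant described in 2/, again as an illustrative toy rather than the paper's code: test-time fine-tuning updates every weight on the demonstration pairs:

```python
# Counterpart to the latent-space sketch above: test-time fine-tuning in
# *parameter* space, where every weight receives a gradient update
# (illustrative toy setup, not the paper's code).
import numpy as np

rng = np.random.default_rng(2)
D = 8
W = rng.normal(size=(D, D))      # all D*D weights get updated

def finetune(W, xs, ys, steps=200, lr=0.01):
    for _ in range(steps):
        err = xs @ W - ys                       # (n_demos, D)
        W = W - lr * xs.T @ err / len(xs)       # MSE gradient wrt the weights
    return W

xs = rng.normal(size=(5, D))
ys = rng.normal(size=(5, D))
W_adapted = finetune(W, xs, ys)   # a per-task copy of the full weight matrix
```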
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
11 months
1/ An interesting outcome of the @arcprize 2024 was the high performance of test-time fine-tuning. It is common in the field of AI to emphasize novelty over historical connections, so I'd like to highlight an early application of this exact idea, applied to program synthesis from…
2 · 7 · 68
@bryan_johnson
Bryan Johnson
11 months
Whatever you think, you’re underestimating AI.
433 · 383 · 5K
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
1 year
Epic Xmas haul ⛄ looking forward to getting stuck in #activeinference @NickJChater @MITCoCoSci @ClementBonnet16
0 · 0 · 8
@Cohere_Labs
Cohere Labs
1 year
Are you interested in policy + people + AI? Interested in program synthesis? How about reinforcement learning?🧐 December 17th, tune in with our Research Connections group to chat about these topics with @levilelis @anna_kawakami and @DennisSoemers!
2 · 5 · 25
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
1 year
If you are interested in scalable, search-based 🔎 policy improvement operators, come chat with me at our poster on SPO tomorrow!
@instadeepai
InstaDeep
1 year
Excited to share our latest work on Sequential Monte Carlo Policy Optimisation (SPO) 🔥 a scalable, search-based RL algorithm leveraging SMC as a policy improvement operator for both continuous and discrete environments! 📍 Catch us tomorrow at #NeurIPS2024 (poster #94776) from…
1 · 3 · 12
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
1 year
Great to have Searching Latent Program Spaces (https://t.co/YBN6R7BbDo) recognised with a 🥉 3rd-place paper award in the @arcprize! It was a pleasure working on this with @ClementBonnet16. Looking forward to continuing to develop these ideas even further.
arxiv.org
General intelligence requires systems that acquire new skills efficiently and generalize beyond their training distributions. Although program synthesis approaches have strong generalization...
@fchollet
François Chollet
1 year
Today we're announcing the winners of ARC Prize 2024. We're also publishing an extensive technical report on what we learned from the competition (link in the next tweet). The state-of-the-art went from 33% to 55.5%, the largest single-year increase we've seen since 2020. The…
1 · 5 · 30
@MattVMacfarlane
Matthew Macfarlane @ NeurIPS 2025
1 year
I'm also attending the System-2 Reasoning workshop (Sun, 15 Dec, 8:55 a.m.). @fchollet will discuss recent approaches from the latest iteration of the ARC challenge @arcprize. Happy to discuss my paper with @ClementBonnet16, Searching Latent Program Spaces, submitted to the competition.
0 · 2 · 7