
Will Dabney
@wwdabney
Followers: 1K
Following: 12K
Media: 5
Statuses: 148
Research scientist at DeepMind. On the critical path to AGI. Also, a persistent optimist.
Seattle, WA
Joined May 2015
Pulling this over as research advice (for myself): feedback beats planning. It’s a lesson I have to relearn now and then because of the allure of an intuitive hypothesis.
I have never seen it expressed exactly like that, but I wholeheartedly endorse it: feedback beats planning. My plea at Meta was “No grand plans, follow the gradient of user value”.
RT @clarelyle: 📣📣 My team at Google DeepMind is hiring a student researcher for summer/fall 2025 in Seattle! If you're a PhD student intere…
RT @neuro_kim: Very pleased to share our recent work, in which we use LLMs for automated discovery of interpretable models of animal behavi…
Winding down from a fantastic turnout for “Normalization and effective learning rates in reinforcement learning” led by @clarelyle. Zeyu Zheng and James Martens were at the poster explaining how we are solving plasticity loss at #NeurIPS2024! Paper:
RT @JesseFarebro: Introducing the Distributional Successor Measure (DSM): a model of the range of possible futures an agent faces. As a dis…
Couldn’t agree more with this, and it is an area where we have seriously lost our way recently.
This is one of the hidden secrets of scalable research labs: creating template experimental codebases that allow fast testing of wild research ideas. You need your experiment, data, compute, and model abstractions just right. 1/
This work was a collaboration with Zeyu Zheng, @nikishin_evg, Bernardo Avila Pires, and Razvan Pascanu. A great group thinking deeply about this intriguing topic.
What happens when neural networks lose plasticity? What properties cause plasticity loss? How can we mitigate plasticity loss? @clarelyle's work on "Understanding Plasticity in Neural Networks" thoroughly investigates these questions, providing both insight and advice.
Today at #ICML2023, Charline is presenting her work diving into the details of representation learning with and for reinforcement learning. Her analysis can also yield keen insights into several of the more empirical papers being presented this week. Poster #202, 11am HST.
Today at #ICML2023: How shall we (pre)train principled representations in RL at scale on a limited computational budget? We analyze and derive new auxiliary tasks to do so! 🌟 First in-person #ICML, first time in Hawaii, last PhD paper! 🌴🌺