
Laura Ruis (@LauraRuis)
Followers: 5K · Following: 4K · Media: 61 · Statuses: 1K
Bio: PhD with @_rockt and @egrefen. Incoming postdoc with @jacobandreas @MIT_CSAIL. Prev. FAIR, Google, NYU. Anon feedback: https://t.co/sbebAl53tU
Location: London
Joined October 2019
RT @alexUnder_sky: @LauraRuis yeah, Akbir might be the goat of AI safety. And I like his Shaper (multi-agent opponent modelling as a form….
Highly recommend reading this, or at least the intro and conclusion. Some gems about the future of safety research.
Quoted tweet: Here is my thesis, "Safe Automated Research". I worked on 3 approaches to make sure we can trust the output of automated researchers as we reach this new era of science. It was a very fun PhD.
RT @EkdeepL: 🚨New paper! We know models learn distinct in-context learning strategies, but *why*? Why generalize instead of memorize to low….
RT @MinqiJiang: Recently, there has been a lot of talk of LLM agents automating ML research itself. If Llama 5 can create Llama 6, then sur….
RT @ammar__khairi: 🚀 Want better LLM performance without extra training or special reward models? Happy to share my work with @Cohere_labs….
RT @_rockt: Fantastic work by @JonnyCoook and @silviasapora on "Programming by Backprop: LLMs Acquire Reusable Algorithmic Abstractions Dur….
RT @silviasapora: 🧵 Check out our latest preprint: "Programming by Backprop". What if LLMs could internalize algorithms just by reading cod….
Many more interesting findings in the preprint. Awesome work by first authors @JonnyCoook and @silviasapora, and collaborators @aahmadian_, @akbirkhan, @_rockt, and @j_foerst.
We find that models can learn an input-general understanding of algorithms from a *single* piece of code. This indicates that training LLMs with next-token prediction on source code can overcome (part of) the embers of autoregression (@RTomMcCoy).