
Daniel Filan
@dfrsrchtwts
Followers: 2K · Following: 842 · Media: 101 · Statuses: 1K
Research manager at MATS. Want to usher in an era of human-friendly superintelligence, don't know how. Podcast: https://t.co/gM752yhlTd
Joined June 2020
RT @jessi_cata: There has been much criticism of the AI 2027 model. As a check, I ran a Monte Carlo model based on METR data (2032 median). …
Replies: 0 · Retweets: 5 · Likes: 0
RT @AXRPodcast: My apologies: if you downloaded my most recent episode, my audio cut out around 0:57:40. The issue should be fixed if you r…
Replies: 0 · Retweets: 1 · Likes: 0
New episode with @SamuelAlbanie, where we discuss the recent Google DeepMind paper "An Approach to Technical AGI Safety and Security"! Link to watch below.
Replies: 1 · Retweets: 4 · Likes: 26
New episode with @petersalib! We chat about how giving AIs rights might make them less inclined to take over, and also why legal academia is so weird. Link to watch in reply.
Replies: 1 · Retweets: 2 · Likes: 18
RT @austinc3301: 🚀 We're launching mentor applications for SPAR's Fall 2025 round! @SPARexec is a part-time, remote research program where…
Replies: 0 · Retweets: 2 · Likes: 0
RT @Jsevillamol: A couple of weeks ago I posted a summary of Epoch's mission, clearing up some common misunderstandings of what we are tryin…
Replies: 0 · Retweets: 5 · Likes: 0
RT @OwainEvans_UK: Some recent talks/interviews: Podcast on introspection, self-awareness and emergent misalignment.
Replies: 0 · Retweets: 10 · Likes: 0
RT @binarybits: Something that comes through clearly in the DeepSeek R1 research paper, and I wish was more broadly understood, is that the…
Replies: 0 · Retweets: 18 · Likes: 0
New episode with @davlindner, covering his work on MONA! Check it out - video link in reply.
Replies: 1 · Retweets: 4 · Likes: 30
RT @MariusHobbhahn: LLMs Often Know When They Are Being Evaluated! We investigate frontier LLMs across 1000 datapoints from 61 distinct da…
Replies: 0 · Retweets: 81 · Likes: 0
New episode with @OwainEvans_UK! Covers work on emergent misalignment and more!
Replies: 2 · Retweets: 9 · Likes: 73
RT @safe_paper: Large Language Models Often Know When They Are Being Evaluated. Joe Needham, Giles Edkins (@gdedkins), Govind Pimpale (@Govi…
Replies: 0 · Retweets: 23 · Likes: 0