Max Nadeau

@MaxNadeau_

1K Followers · 8K Following · 19 Media · 409 Statuses

Funding research to make AIs more understandable, truthful, and dependable at @open_phil.

Berkeley, CA
Joined November 2017
@MaxNadeau_
Max Nadeau
9 months
đŸ§” Announcing @open_phil's Technical AI Safety RFP! We're seeking proposals across 21 research areas to help make AI systems more trustworthy, rule-following, and aligned, even as they become more capable.
@MaxNadeau_
Max Nadeau
7 days
Surprisingly high!
@EpochAIResearch
Epoch AI
7 days
By the end of the year, AI data centers could collectively see >$300 billion in investment, around 1% of US GDP. That’s bigger than the Apollo Program (0.8%) and Manhattan Project (0.4%) at their peaks.
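For scale, a quick back-of-the-envelope check of Epoch's ~1% figure (the GDP denominator is my assumption, not from the tweet):

```python
# Rough sanity check of ">$300B ≈ 1% of US GDP" (assumed ~$30T US GDP)
us_gdp = 30_000e9    # assumption: 2025 US GDP, roughly $30 trillion
ai_capex = 300e9     # ">$300 billion" from the tweet
print(f"{ai_capex / us_gdp:.1%}")  # -> 1.0%
```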
@MaxNadeau_
Max Nadeau
11 days
Here is the posting! https://t.co/K3TyrKoT8i
@open_phil
Open Philanthropy
12 days
(1/8) Open Philanthropy’s Technical AI Safety team is recruiting grantmakers to support research aimed at reducing catastrophic risks from advanced AI. We’re hiring at all levels of seniority.
@MaxNadeau_
Max Nadeau
11 days
And OP is a great place to work—we take impact very seriously and we make big things happen in the world. I really like my coworkers; they're sharp, easy to work with, and put the mission first.
@MaxNadeau_
Max Nadeau
11 days
And to be clear, the above tweet is only about OP's Technical AI Safety team. Our purview is funding technical research, mostly ML, related to AI safety/security/interp/alignment/etc. Other teams at OP fund different work than we do, like AI policy research.
@MaxNadeau_
Max Nadeau
11 days
In 2024, OP's Technical AI Safety team had 2 grantmakers and spent $40m. In 2025, we had 3 and spent $130m. If you join the team, it will enable us to spend even more next year, and we’ll be directly influenced by your takes. Come work with me!
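A quick calculation of what those figures imply per grantmaker, using only the numbers in the tweet:

```python
# Spend per grantmaker implied by the tweet's figures
print(40e6 / 2)   # 2024: $20m per grantmaker
print(130e6 / 3)  # 2025: ~$43m per grantmaker
```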
@MaxNadeau_
Max Nadeau
14 days
If the water usage wasn't bad enough, now we're learning that AI uses non-commutative operations—heaven forfend!
@boazbaraktcs
Boaz Barak
14 days
@emollick @EpochAIResearch The article also slanders non-commutative algebra.
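For anyone outside the joke: matrix multiplication, the workhorse of neural networks, is non-commutative, and that's a basic, fully deterministic property of linear algebra rather than anything alarming. A minimal illustration:

```python
import numpy as np

# Matrix multiplication does not commute in general: A @ B != B @ A
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(np.array_equal(A @ B, B @ A))  # False
```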
@MaxNadeau_
Max Nadeau
19 days
Reading DeepSeek CoTs now requires expertise in multiple languages/cultures
@amydeng_
Amy Deng
20 days
I went down a little rabbit hole to read DeepSeek V3's mind đŸ§” It's kinda fun because when I do math my internal CoT is also bilingual, albeit a lot less funky lol. The model's CoT makes a bit more sense if you know internet slang & some Chinese culture
@MaxNadeau_
Max Nadeau
2 months
Applications close *today* for the Astra fellowship (for some applicants; others can apply til Oct 10). You should apply if you want to work with me or any of the other mentors in this program. As a Fellow working with me, you'd be involved with OP's technical AI safety grantmaking.
@sleight_henry
🚀Henry is launching the Astra Research Program!
2 months
🏁ONE WEEK LEFT to apply for an early decision for Astra🏁 If you need visa support to participate, or if you’ve applied for @matsprogram, your application deadline for Astra is Sept 26th. âŹ‡ïžWe're also excited to announce new mentors across every stream! (1/4)
@Sauers_
Sauers
2 months
The Watchers DON'T want you to know this one simple trick to disclaim illusions
@SebastienBubeck
Sebastien Bubeck
2 months
It's becoming increasingly clear that gpt5 can solve MINOR open math problems, those that would require a day/few days of a good PhD student. Ofc it's not a 100% guarantee, eg below gpt5 solves 3/5 optimization conjectures. Imo full impact of this has yet to be internalized...
@LRudL_
Rudolf Laine
2 months
shelf of AI books, edited to have the titles their cover images make it look like they should have
@MaxNadeau_
Max Nadeau
2 months
UKAISI has both exclusive-to-government access AND world-class jailbreaking researchers. A unique place to work for people interested in these sorts of safeguards.
@alxndrdavies
Xander Davies
2 months
Excited to share details on two of our longest running and most effective safeguard collaborations, one with Anthropic and one with OpenAI. We've identified—and they've patched—a large number of vulnerabilities and together strengthened their safeguards. đŸ§” 1/6
@MaxNadeau_
Max Nadeau
2 months
I will be blogging!
@asteriskmgzn
Asterisk
2 months
Introducing: Asterisk's AI Fellows. Hailing from Hawaii to Dubai, and many places between, our AI Fellows will be writing on law, military, development economics, evals, China, biosecurity, and much more. We can’t wait to share their writing with you. https://t.co/rjLp2RAjME
@MaxNadeau_
Max Nadeau
2 months
Yep, totally agreed with Ryan's goldilocks position here: small differences in the chances of <2yr timelines are action-relevant, and big differences in the chances of <10yr timelines are action-relevant, but other timeline differences are not
@RyanPGreenblatt
Ryan Greenblatt
2 months
While I sometimes write about AGI timelines, I think moderate differences in timelines usually aren't very action relevant. Pretty short timelines (<10 years) seem likely enough to warrant strong action and it's hard to very confidently rule out things going crazy in <3 years.
@MaxNadeau_
Max Nadeau
2 months
Really good graph... progress on math is zipping along
@EpochAIResearch
Epoch AI
2 months
In less than a year LLMs have climbed most of the high school math contest ladder. Every tier of problem difficulty has either been saturated or is well on its way—except for the very highest tier.
@MaxNadeau_
Max Nadeau
3 months
This is a much more sensible way to conceptualize and evaluate CoT monitoring than the ways that dominate the discourse
@SydneyVonArx
Sydney
3 months
The terms "CoT" and "reasoning trace" make it sound like the CoT is a summary of an LLM's reasoning. But IMO it's more accurate to view CoT as a tool models use to think better. CoT monitoring is about tracking how models use this tool so we can glean insight into their reasoning.
@MaxNadeau_
Max Nadeau
3 months
An interpretability method, if you can keep it!
@METR_Evals
METR
3 months
Prior work has found that Chain of Thought (CoT) can be unfaithful. Should we then ignore what it says? In new research, we find that the CoT is informative about LLM cognition as long as the cognition is complex enough that it can’t be performed in a single forward pass.
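One way to picture the kind of test this implies (my sketch, not METR's actual protocol; `ask` and the task fields are hypothetical): compare accuracy with and without CoT, since a large gap suggests the computation genuinely happens in the CoT.

```python
# Hypothetical sketch: how much does the model depend on its CoT?
# `ask(task, cot=...)` is an assumed helper that queries a model.
def cot_dependence(tasks, ask):
    with_cot = sum(ask(t, cot=True) == t.answer for t in tasks)
    no_cot = sum(ask(t, cot=False) == t.answer for t in tasks)
    # Near-zero gap: task fits in one forward pass, CoT may be post-hoc.
    # Large gap: the CoT is doing real work, so it is informative.
    return (with_cot - no_cot) / len(tasks)
```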
@jasnonaz
Jason Ganz
3 months
My god they've actually done it
@jasnonaz
Jason Ganz
9 months
Dario Amodei: "My friends, we have but two years to rigorously prepare the global community for the tumultuous arrival of AGI" Sam Altman: "we r gonna build a $55 trillion data center" Demis Hassabis: "I've created the world's most accurate AI simulation of a volcano."
@alxndrdavies
Xander Davies
4 months
We at @AISecurityInst worked with @OpenAI to test & improve Agent’s safeguards prior to release. A few notes on our experienceđŸ§” 1/4
@MaxNadeau_
Max Nadeau
4 months
Or at least, biggest bottleneck in AI safety _research_