Peter Barnett
@peterbarnett_
765 Followers · 6K Following · 73 Media · 710 Statuses
Trying to ensure the future is bright. Researcher at @MIRIBerkeley Views my own.
Berkeley, CA
Joined January 2017
I feel a bit crazy when I see people getting giddy about removing barriers to building AGI (e.g. cracking continual learning, building big data centers). Like, you do realize that maybe kills us, don’t you?
I wrote a short review of Senators Hawley and Blumenthal’s Artificial Intelligence Risk Evaluation Act of 2025, a recently introduced bill that could dramatically advance the state of federal oversight of AI development. It’s a good bill! Link below
Finding your cash register a little short, and wondering who did it? Well, statistically speaking, most dollars stolen / embezzled / defrauded in the US in 2025 were stolen by Sam Altman.
We completed our recapitalization. The non-profit, the OpenAI Foundation, is now one of the best resourced philanthropies ever, with equity valued at ~$130B. It continues to control the OpenAI for-profit, which is now a public benefit corporation. https://t.co/TevJDA3QwB
Seems bad
The Midas Project commends Attorneys General Kathy Jennings and Rob Bonta for their diligent work over the past year. That said, significant concerns remain about whether this restructuring adequately protects the @OpenAI mission, and the public. https://t.co/CtnDGOkjPF
Can someone please tell me what to think/feel about the OpenAI non profit stuff
its bad on purpose to make u click
.@BernieSanders claims that our goal is to "make it easier to pay workers less" but I notice that we pay our employees vastly more than he pays his staffers.
Do you miss 2021, when smart people would post extremely long dialogues about AGI risk? Do you have a maybe unhealthy audio habit? Can you tolerate hours of AI generated audio? Well, you are in luck! I made an AI generated podcast of the 2021 MIRI Conversations! Links⬇️
The models trained in this paper (and frontier models in general!) do wacky stuff, and we have no clue why. Read the transcripts. It's actually bonkers. The field of AI alignment is not ready for what's coming. Humanity is not ready. We need to back off.
Reasonable review that doesn't miss the forest for the trees
"The AI doomers are not making an argument. They’re selling a worldview." I think they're making an argument that this worldview is correct and then making further arguments about the implications. (Overall reasonable article where the author did actually engage with the book)
if anyone tildes it, everyone denies (logical statements)
Woah huge! And you’re telling me there’s a whole book of stuff like this?? That’s awesome!!!😮 And then three books worth of online supplementary materials????😲 With a draft treaty!?!??? 🤯
AI has drives and behaviors that nobody asked for and nobody wanted—which may prove to be disastrous, Eliezer Yudkowsky and Nate Soares write.
Stylish shirt arrived just in time for the launch
📢 Announcing IF ANYONE BUILDS IT, EVERYONE DIES A new book from MIRI co-founder @ESYudkowsky and president @So8res, published by @littlebrown. 🗓️ Out September 16, 2025 Details and preorder👇
🎧 Want early access to the audiobook? Quote-repost this post with anything related to the book. At 5pm ET we’ll pick the top 15 quote-reposts (details below) and DM them an early copy of the audiobook. (We have some redemption codes that will be no use to us in <24 hours.)
In today's NYT, I profiled Eliezer Yudkowsky, AI's OG prophet of doom, and one of the most interesting (and divisive!) characters in modern Silicon Valley. From inspiring OpenAI and DeepMind, to oneshotting a generation of young rationalists with Harry Potter fanfic, to building
nytimes.com
Eliezer Yudkowsky has spent the past 20 years warning A.I. insiders of danger. Now, he’s making his case to the public.