MIRI
@MIRIBerkeley
Followers: 40K · Following: 1K · Media: 88 · Statuses: 1K
The Machine Intelligence Research Institute exists to maximize the probability that the creation of smarter-than-human intelligence has a positive impact.
Berkeley, CA
Joined July 2013
Spotted "If Anyone Builds It, Everyone Dies" recommended by The Guardian as one of its books of the year in its Saturday edition! We also recommend reading it! It's important that everyone is informed about the danger of superintelligent AI.
The lightcone needs you to lock in. Apply to the 2026 MIRI Technical Governance Team Research Fellowship.
We're hiring! In preparation for a year of ambitious experimentation, the MIRI comms team is ramping up its capacity. Read more here:
intelligence.org
See details and apply. In the wake of the success of Nate and Eliezer’s book, If Anyone Builds It, Everyone Dies, we have an opportunity to push through a lot of doors that have cracked open, and...
Nate Soares, co-author of the new book “If Anyone Builds It, Everyone Dies,” speaks with George Stephanopoulos about the potential dangers of artificial superintelligence.
I think the work MIRI has done this past year has been some of the most impactful in its history. Very grateful for our excellent team and the huge amount of hustle they put in, and our past and future donors who make it all possible.
For the first time in six years, MIRI is running a fundraiser. Our target is $6M. Please consider supporting our efforts to alert the world—and identify solutions—to the danger of artificial superintelligence. SFF will match the first $1.6M! ⬇️
I think MIRI has been having good effects on the global AI conversation. I think it's worth funding MIRI so that we can continue speaking plainly about the reckless race to superintelligence.
Social media tends to frame AI debate into two caricatures: (A) Skeptics who think LLMs are doomed and AI is a bunch of hype. (B) Fanatics who think we have all the ingredients and superintelligence is imminent. But if you read what leading researchers actually say (beyond the
One point I made that didn’t come across:
- Scaling the current thing will keep leading to improvements. In particular, it won’t stall.
- But something important will continue to be missing.
MIRI CEO Malo Bourgon explains why AI isn't like other technologies, and why it looks likely that superintelligence will be developed much earlier than previously thought:
MIRI CEO Malo Bourgon explains why AIs would try to preserve themselves, acquire resources, and resist their goals being changed. "This was kind of theoretical 10 years ago." "Now we're starting to see that ... these behaviors are starting to manifest."
Honored to have been invited to provide testimony to the #ETHI committee of the House of Commons of Canada.
🎥 Watch: MIRI CEO Malo Bourgon's opening statement before a Canadian House of Commons committee. @m_bourgon argues that superintelligence poses a risk of human extinction, but that this is not inevitable. We can kickstart a conversation that makes it possible to avert this.
The book everyone in the media is reading. “Absolutely compulsory,” says ABC Chair Kim Williams. I agree.
AI researcher Nate Soares says that despite AIs getting more powerful, we're not on track to be able to control them. "This is sort of a worst case situation. I never wanted to be here."
We at the MIRI Technical Governance Team just put out a report describing an example international agreement to prevent the creation of superintelligence. 🧵
THE AI CORRIGIBILITY DEBATE: Max Harms vs. Jeremy Gillen Max (@raelifin) and Jeremy (@jeremygillen1) are current & former @MIRIBerkeley researchers who both see superintelligent AI as an imminent extinction threat. But they disagree on whether it's worthwhile to try to aim for
Josh Clark of @SYSKPodcast: "[If Anyone Builds It, Everyone Dies] is REALLY good. If you had a day that you could dedicate to reading it, you could read it in a day. Really popularly written. Lots of really cool anecdotes. It’s just very good. So I strongly recommend that book."
Is AI really going to kill us all? Will the pursuit of superintelligence mark the end for the human race? @ESYudkowsky and @So8res debate @oren_cass on the latest American Compass Podcast:
commonplace.org
Watch now | Will the pursuit of superintelligence actually cause the extinction of the human race?