MIRI

@MIRIBerkeley

Followers: 40K · Following: 1K · Media: 88 · Statuses: 1K

The Machine Intelligence Research Institute exists to maximize the probability that the creation of smarter-than-human intelligence has a positive impact.

Berkeley, CA
Joined July 2013
@MIRIBerkeley
MIRI
3 months
#7 Combined Print & E-Book Nonfiction · #8 Hardcover Nonfiction
9
10
176
@ControlAI
ControlAI
9 days
Spotted "If Anyone Builds It, Everyone Dies" recommended by The Guardian as one of its books of the year in its Saturday edition! We also recommend reading it! It's important that everyone is informed about the danger of superintelligent AI.
2
9
40
@peterbarnett_
Peter Barnett
5 days
the lightcone needs you to lock in. Apply to the 2026 MIRI Technical Governance Team Research Fellowship.
2
4
40
@ThisWeekABC
This Week
14 days
Nate Soares, co-author of the new book “If Anyone Builds It, Everyone Dies,” speaks with George Stephanopoulos about the potential dangers of artificial superintelligence.
24
22
42
@m_bourgon
Malo Bourgon
18 days
I think the work MIRI has done this past year has been some of the most impactful in its history. Very grateful for our excellent team and the huge amount of hustle they put in, and our past and future donors who make it all possible.
@MIRIBerkeley
MIRI
19 days
For the first time in six years, MIRI is running a fundraiser. Our target is $6M. Please consider supporting our efforts to alert the world—and identify solutions—to the danger of artificial superintelligence. SFF will match the first $1.6M! ⬇️
3
4
61
@So8res
Nate Soares ⏹️
17 days
I think MIRI has been having good effects on the global AI conversation. I think it's worth funding MIRI so that we can continue speaking plainly about the reckless race to superintelligence.
@MIRIBerkeley
MIRI
19 days
For the first time in six years, MIRI is running a fundraiser. Our target is $6M. Please consider supporting our efforts to alert the world—and identify solutions—to the danger of artificial superintelligence. SFF will match the first $1.6M! ⬇️
7
7
137
@MIRIBerkeley
MIRI
19 days
For the first time in six years, MIRI is running a fundraiser. Our target is $6M. Please consider supporting our efforts to alert the world—and identify solutions—to the danger of artificial superintelligence. SFF will match the first $1.6M! ⬇️
14
37
202
@polynoamial
Noam Brown
23 days
Social media tends to frame AI debate into two caricatures: (A) Skeptics who think LLMs are doomed and AI is a bunch of hype. (B) Fanatics who think we have all the ingredients and superintelligence is imminent. But if you read what leading researchers actually say (beyond the
@ilyasut
Ilya Sutskever
23 days
One point I made that didn’t come across:
- Scaling the current thing will keep leading to improvements. In particular, it won’t stall.
- But something important will continue to be missing.
237
553
4K
@ControlAI
ControlAI
22 days
MIRI CEO Malo Bourgon explains why AI isn't like other technologies, and why it looks likely that superintelligence will be developed much earlier than previously thought:
4
13
46
@ControlAI
ControlAI
22 days
MIRI CEO Malo Bourgon explains why AIs would try to preserve themselves, acquire resources, and resist their goals being changed. "This was kind of theoretical 10 years ago." "Now we're starting to see that ... these behaviors are starting to manifest."
5
15
52
@m_bourgon
Malo Bourgon
24 days
Honored to have been invited to provide testimony to the #ETHI committee of the House of Commons of Canada.
@ControlAI
ControlAI
24 days
🎥 Watch: MIRI CEO Malo Bourgon's opening statement before a Canadian House of Commons committee. @m_bourgon argues that superintelligence poses a risk of human extinction, but that this is not inevitable. We can kickstart a conversation that makes it possible to avert this.
5
27
100
@hughriminton
Hugh Riminton
27 days
The book everyone in the media is reading. “Absolutely compulsory,” says ABC Chair Kim Williams. I agree.
186
119
585
@ControlAI
ControlAI
28 days
AI researcher Nate Soares says that despite AIs getting more powerful, we're not on track to be able to control them. "This is sort of a worst case situation. I never wanted to be here."
4
15
51
@peterbarnett_
Peter Barnett
1 month
We at the MIRI Technical Governance Team just put out a report describing an example international agreement to prevent the creation of superintelligence. 🧵
10
17
109
@liron
Liron Shapira
1 month
THE AI CORRIGIBILITY DEBATE: Max Harms vs. Jeremy Gillen Max (@raelifin) and Jeremy (@jeremygillen1) are current & former @MIRIBerkeley researchers who both see superintelligent AI as an imminent extinction threat. But they disagree on whether it's worthwhile to try to aim for
3
5
31
@robbensinger
Rob Bensinger ⏹️
1 month
Josh Clark of @SYSKPodcast: "[If Anyone Builds It, Everyone Dies] is REALLY good. If you had a day that you could dedicate to reading it, you could read it in a day. Really popularly written. Lots of really cool anecdotes. It’s just very good. So I strongly recommend that book."
1
1
20
@AmerCompass
American Compass
1 month
Is AI really going to kill us all? Will the pursuit of superintelligence mark the end for the human race? @ESYudkowsky and @So8res debate @oren_cass on the latest American Compass Podcast:
commonplace.org
Watch now | Will the pursuit of superintelligence actually cause the extinction of the human race?
0
3
18
@So8res
Nate Soares ⏹️
2 months
I enjoyed all 16,000 of these convos with Hank Green
9
15
154