
Malo Bourgon
@m_bourgon
1K Followers · 915 Following · 88 Media · 729 Statuses
CEO at @MIRIBerkeley, and decent boulderer
Berkeley, CA
Joined July 2009
Many thanks to @SenSchumer, @SenatorRounds, @SenatorHeinrich, and @SenToddYoung, for inviting me to participate in the AI Insight Forum on Risk, Alignment, & Guarding Against Doomsday Scenarios.
I co-hosted the 8th bipartisan AI Insight Forum with Senators Rounds, Heinrich, Young focused on preventing long-term risks and doomsday scenarios. If managed properly, AI promises unimaginable potential. If left unchecked, AI poses both immediate and long-term risks.
RT @MIRIBerkeley: We're hosting two virtual events, open to everyone who pre-orders the book: 1. A chat and Q&A with @So8res and special g…
RT @robbensinger: Senior White House officials, a retired three-star general, a Nobel laureate, and others come out to say that you should….
My favorite reaction I’ve gotten when sharing some of the blurbs we’ve recently received for Eliezer and Nate’s forthcoming book: If Anyone Builds It, Everyone Dies. From someone who works on AI policy in DC:
Some huge book endorsements today — from retired three-star general Jack Shanahan, former DHS Under Secretary Suzanne Spaulding, security expert Bruce Schneier, Nobel laureate Ben Bernanke, former US NSC Senior Director Jon Wolfsthal, geneticist George Church, and more!
RT @MIRIBerkeley: Some huge book endorsements today — from retired three-star general Jack Shanahan, former DHS Under Secretary Suzanne Spa….
RT @Grimezsz: Long story short I recommend the new book by Nate and Eliezer. I feel like the main thing I ever get cancelled/ in trouble….
RT @HumanHarlan: I just learned that existential risk from AI is actually a psyop carefully orchestrated by a shadowy cabal consisting of a….
RT @ESYudkowsky: Humans can be trained just like AIs. Stop giving Anthropic shit for reporting their interesting observations unless you n….
RT @ESYudkowsky: If Anyone Builds It, Everyone Dies now has preorders for audiobooks (Audible, Libro). The hardcover can even be preordere….
Really enjoyed chatting with Anthony, Liv, and the folks who came out for the Win-Win podcast's second-ever IRL event in Austin. Great audience with lots of good and tough questions. Thanks for putting it on! (Great podcast with lots of excellent guests—def worth checking out.)
Should we be racing to build superintelligent AI? Here's my conversation with "Keep The Future Human" author @AnthonyNAguirre and MIRI CEO @m_bourgon, who both strongly believe we shouldn't. A controversial take here on TPOT, but given the stakes in either direction, it's…
RT @ESYudkowsky: @gfodor Nah, instead we burned sweat and tears and life force to make a surprisingly good book that people will actually r….
RT @yishan: I got to read a draft of this book (and I wrote a blurb!) and it's very good. The topic of AI alignment is complex and subtl….
RT @MIRIBerkeley: Eliezer Yudkowsky and Nate Soares have written a book aimed at raising the alarm about superintelligent AI for the widest….
RT @ChuckGrassley: Too many ppl working in AI feel they cant speak up when something is wrong Introd bipart legislation 2day 2 ensure whist….
RT @ESYudkowsky: Nate Soares and I are publishing a traditional book: _If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us….
RT @MIRIBerkeley: 📢 Announcing IF ANYONE BUILDS IT, EVERYONE DIES. A new book from MIRI co-founder @ESYudkowsky and president @So8res, publ….
RT @jeffclune: I greatly enjoyed “The Spectrum of AI Risks” panel at the Singapore Conference on AI. Thanks Tegan @tegan_maharaj for great….
RT @MIRIBerkeley: New AI governance research agenda from MIRI’s Technical Governance Team. We lay out our view of the strategic landscape a….