Gabriel (@Gabe_cc)
CTO at Conjecture, Advisor at ControlAI. Open to DMs!
Joined October 2019
1K Followers · 509 Following · 53 Media · 496 Statuses
For deontology's sake, and just in case it's not obvious what my beliefs are. We are on the path to human extinction from superintelligence. At our current pace, it's very likely that we hit a point of no return by 2027. It is plausible that we hit one sooner.
The most common pushback we get: 'A moratorium is naive. You can't stop progress.' Wrong. We've done it before. The world banned CFCs to save the ozone during the Cold War. It worked. Andrea Miotti makes the case in Time: we can and must do the same for superintelligence.
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
'AI risk is just sci-fi.' The CEOs building it disagree. They signed a statement comparing AI extinction risk to pandemics and nuclear war. Then they kept building anyway. That's not sci-fi. That's way weirder than sci-fi. It's a confession. They're admitting they can't be
And we now have another new supporter! The Rt Hon. the Lord Robathan has backed our campaign for binding regulation on the most powerful AI systems, acknowledging the extinction threat posed by AI! It's great to see so many coming together from across parties on this issue.
Experts continue to warn that superintelligence poses a risk of human extinction. In our newsletter this week, we're providing an important update on the progress of our UK campaign to prevent this threat, along with news on other developments in AI. https://t.co/oCxPJBZtC5
We model how rapid AI development may reshape geopolitics if there is no international coordination on preventing dangerous AI development. This results in bleak outcomes: a “winner” achieving permanent global dominance, human extinction, or preemptive major power war.
Most people worried about AI safety think their only options are: work at an AI lab, donate to research, or tweet into the void. There's a fourth option: organize locally and make this a political issue your representatives actually have to address. We are the people turning
Our video with SciShow just passed 1M views in only 3 days! Out of the 204 videos they published in 2025, we are already #1 for most comments, #3 for most likes (closing in on #1), and #5 for most views. Audiences want to learn about AI risk, so if you're a creator, get in touch!
My biggest project yet is now LIVE! @hankgreen talks superintelligence, AI risk, and the many issues with AI today that present huge concerns as AI gets more powerful on SciShow. Happy Halloween! https://t.co/NLE09cUmNZ
🚨 NEW: Over 85 UK cross-party parliamentarians now support our campaign statement, underscoring the risk of extinction from AI. This is the world’s first coalition of lawmakers taking a stand on this issue! Supporters include: — Viscount Camrose, former UK Minister for AI —
The most common mistake people make is thinking AI needs to "hate" us to be a threat. It doesn't. As this video explains, any AGI will predictably develop the same set of sub-goals to achieve its mission.
This Halloween, nothing's scarier than the reality of AI companies' reckless, unregulated race to artificial general intelligence. @lethal_ai's much-awaited Lethal AI Guide - Part 2 is out now, covering upcoming dangers from AI:
it seems like If Anyone Builds It has recruited the Hank Green industrial complex to the cause in full measure? this was not on my 2025 bingo card
How do we solve an existential threat? We've done it before. In @TIME, our partner @ControlAI_CEO explains how "civic engagement" and organized citizens beat lobbyists to save the ozone layer. We must do the same for superintelligence. This is how we win.
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
A critical, must-read op-ed from our partner and Compendium co-author, @_andreamiotti. He explains not just why we need a prohibition, but how a grassroots movement of engaged citizens is the only way to achieve it. This is the playbook.
The founder and CEO of ControlAI explains why leading AI scientists warn that developing superintelligence could result in humanity’s extinction https://t.co/slK7t6Rwgf
This is the signal. The movement to prohibit superintelligence is now in @TIME. Our partner, ControlAI CEO Andrea Miotti, lays out the urgent, existential case. "No one knows how to control AIs that are vastly more competent than any human... we will be annihilated." Read
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
Here we go! @scishow just dropped a 17-minute deep-dive on AI risk, explaining why "we've lost control." They cover the "Black Box" problem [03:20], "Deceptive Alignment" [11:05], and even bioweapons risk [05:52]. This is the entire argument, on one of the best channels. Watch
A great rundown of the reasons people have given for not signing the Statement on Superintelligence, along with thoughtful responses to them.
I've heard a number of reasons for not signing or supporting the Statement on Superintelligence at https://t.co/82IrETU5vx. Some are valid, others...less so. Here are some such reasons and my personal commentary on them.
We’re live! Thrilled to have our co-founder @NPCollapse kick off the launch. The Torchbearer Community is officially open. If you're concerned about AI risk and other challenges for humanity, this is your community. Welcome. Join us:
Announcing the Torchbearer Community (TBC)! Do you want to spend 3-8h per week together with other highly motivated people, working on impactful, self-directed projects to build a desirable, humanist future? Apply now!