Gabriel

@Gabe_cc

1K Followers · 509 Following · 53 Media · 496 Statuses

CTO at Conjecture, Advisor at ControlAI. Open to DMs!

Joined October 2019
@Gabe_cc
Gabriel
9 months
For deontology's sake, and just in case it's not obvious what my beliefs are. We are on the path to human extinction from superintelligence. At our current pace, it's very likely that we hit a point of no return by 2027. It is plausible that we hit one sooner.
11 replies · 10 reposts · 83 likes
@JoinTorchbearer
Torchbearer Community
14 hours
The most common pushback we get: 'A moratorium is naive. You can't stop progress.' Wrong. We've done it before. The world banned CFCs to save the ozone during the Cold War. It worked. Andrea Miotti makes the case in Time: we can and must do the same for superintelligence.
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
6 replies · 3 reposts · 26 likes
@Gabe_cc
Gabriel
1 day
Strongly endorsed.
@VitalikButerin
vitalik.eth
3 days
Galaxy brain resistance: https://t.co/ebXLNrAeAs
0 replies · 1 repost · 7 likes
@JoinTorchbearer
Torchbearer Community
3 days
'AI risk is just sci-fi.' The CEOs building it disagree. They signed a statement comparing AI extinction risk to pandemics and nuclear war. Then they kept building anyway. That's not sci-fi. That's way weirder than sci-fi. It's a confession. They're admitting they can't be…
3 replies · 3 reposts · 19 likes
@ai_ctrl
ControlAI
4 days
We have another new supporter! The Rt Hon. the Lord Robathan has backed our campaign for binding regulation on the most powerful AI systems, acknowledging the extinction threat posed by AI! It's great to see so many coming together from across parties on this issue.
@ai_ctrl
ControlAI
4 days
Experts continue to warn that superintelligence poses a risk of human extinction. In our newsletter this week, we're providing an important update on the progress of our UK campaign to prevent this threat, along with news on other developments in AI. https://t.co/oCxPJBZtC5
0 replies · 4 reposts · 29 likes
@testdrivenzen
Alex Amadori
6 days
We model how rapid AI development may reshape geopolitics if there is no international coordination to prevent dangerous development. The outcomes are bleak: a “winner” achieving permanent global dominance, human extinction, or a preemptive major-power war.
2 replies · 12 reposts · 67 likes
@JoinTorchbearer
Torchbearer Community
8 days
Most people worried about AI safety think their only options are: work at an AI lab, donate to research, or tweet into the void. There's a fourth option: organize locally and make this a political issue your representatives actually have to address. We are the people turning…
2 replies · 4 reposts · 16 likes
@maxwinga
Max Winga
7 days
Our video with SciShow just passed 1M views in only 3 days! Out of the 204 videos they published in 2025, we are already #1 in comments, #3 in likes (closing in on #1), and #5 in views. Audiences want to learn about AI risk, so if you're a creator, get in touch!
@maxwinga
Max Winga
11 days
My biggest project yet is now LIVE! On SciShow, @hankgreen talks superintelligence, AI risk, and the many issues with today's AI that raise huge concerns as it gets more powerful. Happy Halloween! https://t.co/NLE09cUmNZ
1 reply · 6 reposts · 55 likes
@ai_ctrl
ControlAI
7 days
🚨 NEW: Over 85 UK cross-party parliamentarians now support our campaign statement, underscoring the risk of extinction from AI. This is the world’s first coalition of lawmakers taking a stand on this issue! Supporters include: — Viscount Camrose, former UK Minister for AI…
3 replies · 8 reposts · 23 likes
@JoinTorchbearer
Torchbearer Community
8 days
The most common mistake people make is thinking AI needs to "hate" us to be a threat. It doesn't. As this video explains, any AGI will predictably develop the same set of sub-goals to achieve its mission.
@FLI_org
Future of Life Institute
10 days
This Halloween, nothing's scarier than the reality of AI companies' reckless, unregulated race to artificial general intelligence. @lethal_ai's much-awaited Lethal AI Guide, Part 2 is out now, covering upcoming dangers from AI:
0 replies · 2 reposts · 13 likes
@adrusi
autumn
10 days
It seems like If Anyone Builds It has recruited the Hank Green industrial complex to the cause in full measure? This was not on my 2025 bingo card.
12 replies · 8 reposts · 195 likes
@maxwinga
Max Winga
9 days
@adrusi This video has actually been a long time coming, as part of ControlAI's creator outreach pipeline, which I lead :)
2 replies · 3 reposts · 39 likes
@JoinTorchbearer
Torchbearer Community
9 days
Global superpowers have worked out moratoriums that benefit humanity before. We can do it again.
@ai_ctrl
ControlAI
9 days
Can we avoid a dangerous AI arms race? Center for Humane Technology cofounder Tristan Harris says the US and China need to recognize that uncontrollable AI is in neither of their interests.
2 replies · 10 reposts · 27 likes
@JoinTorchbearer
Torchbearer Community
9 days
How do we solve an existential threat? We've done it before. In @TIME, our partner @ControlAI_CEO explains how "civic engagement" and organized citizens beat lobbyists to save the ozone layer. We must do the same for superintelligence. This is how we win.
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
1 reply · 7 reposts · 26 likes
@JoinTorchbearer
Torchbearer Community
10 days
@maxwinga @hankgreen Well done, sir!
0 replies · 1 repost · 10 likes
@maxwinga
Max Winga
11 days
My biggest project yet is now LIVE! On SciShow, @hankgreen talks superintelligence, AI risk, and the many issues with today's AI that raise huge concerns as it gets more powerful. Happy Halloween! https://t.co/NLE09cUmNZ
9 replies · 21 reposts · 172 likes
@JoinTorchbearer
Torchbearer Community
11 days
A critical, must-read op-ed from our partner and Compendium co-author, @_andreamiotti. He explains not just why we need a prohibition, but how a grassroots movement of engaged citizens is the only way to achieve it. This is the playbook.
@TIME
TIME
12 days
The founder and CEO of ControlAI explains why leading AI scientists warn that developing superintelligence could result in humanity’s extinction https://t.co/slK7t6Rwgf
0 replies · 3 reposts · 12 likes
@JoinTorchbearer
Torchbearer Community
10 days
This is the signal. The movement to prohibit superintelligence is now in @TIME. Our partner, ControlAI CEO Andrea Miotti, lays out the urgent, existential case. "No one knows how to control AIs that are vastly more competent than any human... we will be annihilated." Read…
time.com: "Leading AI scientists warn that developing superintelligence could result in humanity’s extinction."
0 replies · 5 reposts · 19 likes
@JoinTorchbearer
Torchbearer Community
10 days
Here we go! @scishow just dropped a 17-minute deep-dive on AI risk, explaining why "we've lost control." They cover the "Black Box" problem [03:20], "Deceptive Alignment" [11:05], and even bioweapons risk [05:52]. This is the entire argument, on one of the best channels. Watch…
1 reply · 4 reposts · 21 likes
@JoinTorchbearer
Torchbearer Community
12 days
A great rundown of many of the reasons given for not signing the Statement on Superintelligence, and thoughtful responses to them.
@AnthonyNAguirre
Anthony Aguirre
13 days
I've heard a number of reasons for not signing or supporting the Statement on Superintelligence at https://t.co/82IrETU5vx. Some are valid, others...less so. Here are some such reasons and my personal commentary on them.
0 replies · 3 reposts · 10 likes
@JoinTorchbearer
Torchbearer Community
12 days
We’re live! Thrilled to have our co-founder @NPCollapse kick off the launch. The Torchbearer Community is officially open. If you're concerned about AI risk and other challenges for humanity, this is your community. Welcome. Join us:
@NPCollapse
Connor Leahy
13 days
Announcing the Torchbearer Community (TBC)! Do you want to spend 3-8h per week together with other highly motivated people, working on impactful, self-directed projects to build a desirable, humanist future? Apply now!
1 reply · 3 reposts · 15 likes