Gabriel
@Gabe_cc
Followers: 2K · Following: 562 · Media: 53 · Statuses: 527
CTO at Conjecture, Advisor at ControlAI. Open to DMs!
Joined October 2019
For deontology's sake, and just in case it's not obvious what my beliefs are. We are on the path to human extinction from superintelligence. At our current pace, it's very likely that we hit a point of no return by 2027. It is plausible that we hit one sooner.
Ben Lake MP backs our campaign! In support, @BenMLake said, "As elected representatives, we have a duty to establish safeguards against technologies that could pose serious risks to public wellbeing and safety. Waiting to see what these systems might be capable…
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and back regulating the most powerful AIs! From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear. Time to act!
Rational Animations (@RationalAnimat1) has put out a solid breakdown of the catastrophic AI risks we are facing right now. It reminds us there is a whole spectrum of risk that exists today, before we get to full AGI takeover. This video covers how deepfakes already undermined…
torchbearer.community
BREAKING: Over 100 UK politicians have joined our call for binding regulation on the most powerful AI systems! This is the first time such a cross-party coalition has acknowledged the extinction threat posed by AI. The demand is unequivocal. It's time for government to deliver.
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and back regulating the most powerful AIs! From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear. Time to act!
This is a notable cross-party initiative stewarded by ControlAI. Particularly interesting to see the calls from former Conservative ministers (the Conservative government having played such an instrumental role in raising the international salience of frontier AI safety, and…
theguardian.com
Exclusive: Campaign urges PM to show independence from US and push to rein in development of superintelligence
An incredible coalition of over 100(!) UK politicians has publicly recognized the extinction threat from AI, and is calling for binding regulation on the most powerful AI systems. It’s great to see so many take a stand. Now, this must be followed through with action.
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and back regulating the most powerful AIs! From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear. Time to act!
We built a coalition of 100+ UK lawmakers who are taking a stance against the extinction risk from superintelligent AI and back regulating the most powerful AIs! From the former AI Minister to the former Defence Secretary, cross-party support is crystal clear. Time to act!
I’m pleased to support @ai_ctrl’s campaign calling for binding guardrails on advanced AI, including superintelligence. This cross-party campaign now has 100+ parliamentary supporters, showing the broad support for action on AI. https://t.co/YmJFhVNCSr
https://t.co/DnCmaNSe7U
controlai.com
At ControlAI we are fighting to keep humanity in control.
Connor Leahy (@NPCollapse), a co-founder of Torchbearer Community, explains the dangers of the development of artificial superintelligence. He discusses five different groups in the race (Utopists, Big-Tech, Accelerationists, Zealots, and Opportunists), CEOs lying, the Entente…
Some great work from my colleagues on how middle-power countries can band together to prevent the development of superintelligence by any actor, including superpowers. I get the question "What can a non-superpower do?" a lot. I think this paper is now the gold-standard answer.
We explore how a coalition of middle powers may prevent development of ASI by any actor, including superpowers. We design an international agreement that may enable middle powers to achieve this goal, without assuming initial cooperation by superpowers.
Great to see my piece on why we need a global movement to ban superintelligence in the print edition of @TIME! This is how we can prevent the extinction risk posed by AI. Countries have worked together to tackle global threats like the ozone hole, and we can do it again.
The AI Race narrative assumes the US and China are the only players and everyone else is just an NPC. In this new paper @testdrivenzen, @Gabe_cc, @_andreamiotti and @_ebehrens_ name the trap that the UK, Europe, and Canada are currently walking into: The Vassal's Wager…
asi-prevention.com
Proposal for an international agreement enabling middle powers to prevent the development of artificial superintelligence
We explore how a coalition of middle powers may prevent development of ASI by any actor, including superpowers. We design an international agreement that may enable middle powers to achieve this goal, without assuming initial cooperation by superpowers.
Great piece by @LizzieGibney on the US Genesis Mission. As I told @Nature, Genesis will succeed if it enables America's scientists to leverage specialized AI. It will fail if it becomes a subsidy for companies building superintelligence that threatens national & global security.
We have a new supporter! Stewart Dickson MLA (@stewartcdickson) just backed our campaign calling for binding regulation on the most powerful AI systems, acknowledging the extinction risk posed by AI. 98 UK politicians now support our campaign!
Pleased to meet with @ai_ctrl to discuss their campaign to prevent the development of artificial superintelligence and keep humanity in control.
Tech lobbyists' attempt to preempt state AI safety laws by blocking them at the federal level is the deliberate creation of a regulatory vacuum. The "patchwork" argument is a trap. These lobbyists are pushing to block state AI safety laws, claiming they want uniform federal…
So many UK lawmakers are supporting binding regulation on powerful autonomous AI systems. US lawmakers are starting to support this as well. We know that the race to artificial superintelligence is unpopular: 64% of US adults feel that superhuman AI should not be developed…
John Whitby MP (@JohnWhitbyMP) just backed our campaign for binding regulation on the most powerful AI systems! 95 UK politicians now support our campaign, acknowledging the extinction risk posed by AI. It's great to see so many politicians coming together on this issue!
We just got our 96th supporter! Lord Bethell (@JimBethell) just backed our campaign calling for binding regulation on the most powerful AI systems, acknowledging the extinction risk posed by AI. It's great to see so many politicians take a stand on this issue!