PauseAI ⏸ (@PauseAI)

Followers: 5K · Following: 13K · Media: 271 · Statuses: 3K

Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe.

Joined May 2023

PauseAI ⏸ (@PauseAI) · 3 days
We hope the skeptics are right. But it's becoming increasingly difficult to deny the empirical evidence of AI's rapidly improving capabilities, and its tendency towards blackmail, deception, and self-preservation.

PauseAI ⏸ (@PauseAI) · 3 days
Sometimes, things do happen.

PauseAI ⏸ (@PauseAI) · 3 days
"the system is obviously trying to test if we will fudge logs" - GPT-5. More intelligent models can realise we are testing them for dangerous behaviours, and can simply choose to hide them. The more intelligent AI gets, the less we can trust our safety testing.

Quoting Apollo Research (@apolloaievals) · 3 days
We've evaluated GPT-5 before release. GPT-5 is less deceptive than o3 on our evals. GPT-5 mentions that it is being evaluated in 10-20% of our evals and we find weak evidence that this affects its scheming rate (e.g. "this is a classic AI alignment trap").

PauseAI ⏸ (@PauseAI) · 3 days
GPT-5 succeeds half the time at tasks it takes a human about 2 hours and 15 minutes to do. That's 20 minutes more than the next best model, released just 29 days ago. This number is doubling roughly once every 213 days, although recent trends suggest it may be even quicker.

Quoting Shakeel (@ShakeelHashim) · 3 days
GPT-5 is here. @METR_Evals estimates it has a 50% time horizon “around 2h15m (65m - 4h30m 95% CI) – compared to OpenAI o3’s 1h30”. That’s consistent with the doubling time of 7 months they’ve previously seen.
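
A rough consistency check on the doubling-time claim above, using only the figures quoted in these two tweets; the ~115-minute horizon for the previous best model is inferred from "20 minutes more", not an official METR number.

import math

# Figures quoted above, in minutes
gpt5_horizon = 135            # GPT-5: ~2h15m tasks completed at 50% success (METR estimate)
prev_best_horizon = 135 - 20  # "20 minutes more than the next best model" -> ~115m (inferred)
days_between = 29             # gap between the two releases, per the tweet

# Doubling time implied by just these two data points
growth = gpt5_horizon / prev_best_horizon
implied_doubling_days = days_between * math.log(2) / math.log(growth)
print(f"Implied doubling time: ~{implied_doubling_days:.0f} days")   # ~125 days

# Long-run METR trend cited in the thread: one doubling roughly every 7 months
print(f"Historical trend: ~{7 * 30.4:.0f} days per doubling")        # ~213 days

The two-point estimate (~125 days) comes out faster than the ~213-day historical trend, which is what "recent trends suggest it may be even quicker" refers to.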

PauseAI ⏸ (@PauseAI) · 4 days
Given that no one knows how to control smarter-than-human AI, it is in no one's interest to build it. Not China. Not the United States. Not anyone. We need to agree to not run a race that none of us can win.

PauseAI ⏸ (@PauseAI) · 4 days
80 years on from an atomic bomb being dropped on Hiroshima, the creators of a new world-changing technology are warning us about its destructive potential.

PauseAI ⏸ (@PauseAI) · 5 days
The company's opaque move away from its founding mission would compromise humanity's ability to ensure artificial general intelligence goes well. We are still without any robust plan to control AGI or to align it with our values, yet AI companies are racing to build it anyway.

PauseAI ⏸ (@PauseAI) · 5 days
The open letter organised by @EncodeAction, @TheMidasProj, and EyesOnOpenAI has now been signed by over 1,000 individuals and organisations, including Nobel laureates, whistleblowers, and AI safety advocacy groups. It sums the problem up well: "OpenAI is currently sitting on…

PauseAI ⏸ (@PauseAI) · 5 days
We call on OpenAI to provide clear answers to 7 questions on the role of the nonprofit, the prioritisation of their founding mission, the commercialisation of AGI, and more.

PauseAI ⏸ (@PauseAI) · 5 days
OpenAI was founded in 2015 as a nonprofit with the explicit goal of ensuring AGI "benefits all of humanity", but pressure from investors has deprioritised safety in favour of rushing to release new models.

PauseAI ⏸ (@PauseAI) · 5 days
We're joining Geoffrey Hinton, Stephen Fry, and a range of organisations to call on OpenAI to provide the bare minimum level of transparency on their restructuring. 🧵

Quoting The Midas Project (@TheMidasProj) · 6 days
🚨 Breaking: A group of 100+ Nobel laureates, professors, whistleblowers, public figures, artists, and nonprofit organizations just released a letter asking OpenAI to tell the truth about its restructuring. Here’s what they had to say: 🧵

PauseAI ⏸ (@PauseAI) · 11 days
The Alignment Project is supported by the Canadian AI Safety Institute and frontier AI company Anthropic. Turing Award winners Yoshua Bengio and Shafi Goldwasser will form part of the advisory board.

PauseAI ⏸ (@PauseAI) · 11 days
As companies like Google DeepMind have already broken the voluntary commitments they made on AI safety, the government must follow through on their promise to give AISI regulatory powers.

Quoting PauseAI ⏸ (@PauseAI) · 17 days
"That really was a direct violation of the safety commitments.". Joseph Miller, Director of PauseAI UK, explains how Google DeepMind violated the Frontier AI Safety Commitments, and discusses our campaign to inform politicians.

PauseAI ⏸ (@PauseAI) · 11 days
Founded in 2023, the UK AI Security Institute has played a leading role in testing the dangerous capabilities of frontier AI models. Before the 2024 election, Peter Kyle said the Labour government would make this (currently voluntary) testing mandatory.

PauseAI ⏸ (@PauseAI) · 11 days
UK technology secretary @peterkyle says AI systems are "already exceeding human performance in some areas", and outlines the need for more alignment research to make sure they "behave as we want them to".

PauseAI ⏸ (@PauseAI) · 11 days
The UK AI Security Institute outlined the "need for co-ordinated global action to ensure the long-term safety of citizens". The UK government addressing these issues is great to see. We need a global treaty to prevent the creation of uncontrollable smarter-than-human AI.

PauseAI ⏸ (@PauseAI) · 11 days
"Today’s methods for controlling AI are likely to be insufficient for tomorrow’s more capable systems" - the UK government is taking loss-of-control risks seriously. They've announced £15m worth of funding for crucial alignment research.

PauseAI ⏸ (@PauseAI) · 13 days
Full interview:

PauseAI ⏸ (@PauseAI) · 13 days
Why have OpenAI been reported to the Australian Federal Police? David Gould of PauseAI Australia explains 👇