
PauseAI ⏸
@PauseAI
Followers: 5K · Following: 13K · Media: 271 · Statuses: 3K
Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe.
Joined May 2023
"the system is obviously trying to test if we will fudge logs" - GPT-5. More intelligent models can realise we are testing them for dangerous behaviours, and can simply choose to hide them. The more intelligent AI gets, the less we can trust our safety testing.
We've evaluated GPT-5 before release. GPT-5 is less deceptive than o3 on our evals. GPT-5 mentions that it is being evaluated in 10-20% of our evals and we find weak evidence that this affects its scheming rate (e.g. "this is a classic AI alignment trap").
GPT-5 succeeds half the time at tasks that take a human about 2 hours and 15 minutes, 20 minutes longer than the next best model, which was released just 29 days ago. This number is doubling roughly once every 213 days, though recent trends suggest it may be even quicker.
GPT-5 is here. @METR_Evals estimates it has a 50% time horizon “around 2h15m (65m - 4h30m 95% CI) – compared to OpenAI o3’s 1h30”. That’s consistent with the doubling time of 7 months they’ve previously seen.
Some basic transparency is the least we can ask for. The full open letter can be read here -
openai-transparency.org
An open letter from academics, civil society organizations, and individuals to OpenAI concerning a lack of transparency in their organization, operations, and future restructuring plans.
The open letter organised by @EncodeAction, @TheMidasProj, and EyesOnOpenAI has now been signed by over 1,000 individuals and organisations, including Nobel laureates, whistleblowers, and AI safety advocacy groups. It sums the problem up well - "OpenAI is currently sitting on
We're joining Geoffrey Hinton, Stephen Fry, and a range of organisations to call on OpenAI to provide the bare minimum level of transparency on their restructuring. 🧵
🚨 Breaking: A group of 100+ Nobel laureates, professors, whistleblowers, public figures, artists, and nonprofit organizations just released a letter asking OpenAI to tell the truth about its restructuring. Here’s what they had to say: 🧵
Since companies like Google DeepMind have already broken the voluntary commitments they made on AI safety, the government must carry through on its promise to give AISI regulatory powers.
"That really was a direct violation of the safety commitments.". Joseph Miller, Director of PauseAI UK, explains how Google DeepMind violated the Frontier AI Safety Commitments, and discusses our campaign to inform politicians.
UK Technology Secretary @peterkyle says AI systems are "already exceeding human performance in some areas" and outlines the need for more alignment research to make sure they "behave as we want them to".