Existential Risk Observatory ⏸

@XRobservatory

Followers: 2K · Following: 768 · Media: 129 · Statuses: 1K

Reducing AI x-risk by informing the public. We propose a Conditional AI Safety Treaty: https://t.co/xUZxozlNBF

Amsterdam, Netherlands
Joined March 2021
@XRobservatory
Existential Risk Observatory ⏸
1 year
Today, we propose the Conditional AI Safety Treaty in @TIME as a solution to AI's existential risks. AI poses a risk of human extinction, but this problem is not unsolvable. The Conditional AI Safety Treaty is a global response to avoid losing control over AI. How does it work?
23 · 23 · 113
@XRobservatory
Existential Risk Observatory ⏸
2 days
AI self-improvement is a central part of the classic Yudkowsky/Bostrom fast takeoff scenario. It has been out of fashion since the advent of LLMs, which need time to train. However:
1) Prompt optimization and scaffolding are gaining importance, and can self-improve
2) New paradigms may
@ben_j_todd
Benjamin Todd
3 days
New post: The case for an AI-driven acceleration is more robust than people realise. This isn't just about AI improving AI (though there's now a lot more empirical grounding for that idea). There are hardware and industrial feedback loops that are more likely to work, and still
0 · 1 · 5
@XRobservatory
Existential Risk Observatory ⏸
2 days
Are you in Amsterdam this weekend? You're warmly invited to our Existential Risk Christmas Drink at Bar Bukowski! Details and RSVP here: https://t.co/YCtwkR9JSP We hope to see you there!
luma.com
Another year has passed and we're still here!🥳🎅 Let's look back, celebrate our achievements and community, and exchange ideas on where existential risk…
0 · 0 · 4
@pauseaius
PauseAI US ⏸️
3 days
🚨NEW YORKERS: Tell Gov Hochul to SIGN the RAISE ACT before it's too late!🚨 Tell Governor Hochul to sign the RAISE Act, which has already passed the NY legislature, so it becomes law before Trump's unconstitutional AI preemption executive order is signed. Link 👇
1 · 8 · 20
@DavidSKrueger
David Krueger
4 days
"AGI is a conspiracy theory" is a conspiracy theory. You have to believe that many leading scientists are in on it. This is drivel.
5 · 4 · 56
@SenSanders
Sen. Bernie Sanders
4 days
Yes. We have to worry about AI and robotics. Some questions:
276 · 415 · 2K
@XRobservatory
Existential Risk Observatory ⏸
4 days
Former environment minister and journalist Zac Goldsmith says the UK should “resume its global leadership on AI security by championing an international agreement to prohibit the development of superintelligence until we know what we are dealing with and how to contain it.”
@ai_ctrl
ControlAI
4 days
BREAKING: Over 100 UK politicians have joined our call for binding regulation on the most powerful AI systems! This is the first time such a cross-party coalition has acknowledged the extinction threat posed by AI. The demand is unequivocal. It's time for government to deliver.
0 · 5 · 13
@ai_ctrl
ControlAI
5 days
Senator Bernie Sanders (@SenSanders) says there is a real concern that artificial superintelligence could replace human beings in controlling the planet. "That's not science fiction. That is a real fear that very knowledgeable people have."
6 · 15 · 51
@ai_ctrl
ControlAI
9 days
🎥 Watch: Conjecture CTO Gabe Alfour's opening statement to a Canadian House of Commons committee. @gabe_cc argues we can't continue to ignore warnings by top experts that superintelligence could lead to human extinction; countries must halt the development of superintelligence.
6 · 11 · 33
@XRobservatory
Existential Risk Observatory ⏸
10 days
Even in a world where we solve technical alignment, ASI might well lead to lasting suppression by a single power. We don't have to accept this. We can use peaceful measures such as export controls, sanctions, tariffs, or a trade embargo to stop ASI development. Great paper!
@testdrivenzen
Alex Amadori
11 days
We explore how a coalition of middle powers may prevent development of ASI by any actor, including superpowers. We design an international agreement that may enable middle powers to achieve this goal, without assuming initial cooperation by superpowers.
0 · 4 · 15
@XRobservatory
Existential Risk Observatory ⏸
15 days
So far, most xriskers have felt too good for anti-data center campaigning. We made fun of data center water usage and electricity consumption, even though these are actual problems. Already, these issues are big enough for politicians from left to right to win elections on.
semafor.com
The AI industry’s new super PAC picked its first political target this month — and missed.
6 · 5 · 27
@ai_ctrl
ControlAI
15 days
🎥 Watch: MIRI CEO Malo Bourgon's opening statement before a Canadian House of Commons committee. @m_bourgon argues that superintelligence poses a risk of human extinction, but that this is not inevitable. We can kickstart a conversation that makes it possible to avert this.
7 · 22 · 72
@ai_ctrl
ControlAI
15 days
Not the Christmas cards you were hoping for. New system cards from OpenAI, Anthropic, Google and xAI indicate AIs are becoming more capable in dangerous domains such as biological weapons and automating AI research. Read more in our latest article! https://t.co/twKYGLx9z6
controlai.news
“My trust in reality is fading” — Gemini
1 · 19 · 33
@XRobservatory
Existential Risk Observatory ⏸
16 days
.@ForHumanityPod has an amazing track record of communicating the risk we all face to the public. This work is badly needed to get meaningful regulation off the ground. We support GuardRailNow, and so should you.
@ForHumanityPod
John Sherman
17 days
I've started a nonprofit, GuardRailNow, and I need your help. Our approach is unique: I do not come from tech. I cannot write code. I do not live in SF. I am not an EA. I am not a rationalist. I have never been to Silicon Valley. I'm a dad in Baltimore talking about AI
1 · 3 · 18
@XRobservatory
Existential Risk Observatory ⏸
17 days
Leading AGI researchers who raise billions have at best half-baked safety ideas. This sector badly needs regulation. A good start would be to implement a Conditional AI Safety Treaty as we proposed:
@dwarkesh_sp
Dwarkesh Patel
17 days
The @ilyasut episode
0:00:00 – Explaining model jaggedness
0:09:39 – Emotions and value functions
0:18:49 – What are we scaling?
0:25:13 – Why humans generalize better than models
0:35:45 – Straight-shotting superintelligence
0:46:47 – SSI’s model will learn from deployment
0 · 2 · 15
@ilex_ulmus
Holly ⏸️ Elmore
23 days
Congressional leadership is trying to force AI preemption into the National Defense Authorization Act (NDAA). "Preemption" means that the states would be banned from regulating AI. Given the lack of federal AI regulation, this would be a disaster. Call and Email NOW below 👇
2 · 9 · 32
@XRobservatory
Existential Risk Observatory ⏸
25 days
If controllable at all, AI and mass robotics can easily be used to suppress us. We don't have to live in a dystopia. We can sign treaties outlawing robot/AI armies, police, and secret services. But we need to put in the work to make it happen.
@UBTECHRobotics
UBTECH Robotics
1 month
Huge milestone achieved!📣 The world's first mass delivery of humanoid robots has been completed! Hundreds of UBTECH #WalkerS2 robots have been delivered to our partners. 🤖 The future of #industrial automation is here. March forward to transformation! 🚀 #HumanoidRobots #massproduction #AI
2 · 2 · 8
@XRobservatory
Existential Risk Observatory ⏸
1 month
Our @BartenOtto shared his insights in a talk for The School for Moral Ambition's AI Circle (thank you, Karolina Gruzel, for organizing!). He goes into our long-term future (and shares some critical notes on longtermism), existential risk, why AI may be the largest existential risk,
0 · 1 · 7
@XRobservatory
Existential Risk Observatory ⏸
1 month
.@mustafasuleyman is right: AI development must be humanist. To get there, we need an international AI Safety Treaty, making sure red lines are respected. @MicrosoftAI, will you use your lobbying power to make that happen? If not, humanist AI is an empty phrase.
@ai_ctrl
ControlAI
1 month
Microsoft AI CEO Mustafa Suleyman says he's seeing lots of indications that people want to build superintelligence to replace or threaten our species.
0 · 1 · 8
@XRobservatory
Existential Risk Observatory ⏸
1 month
Sometimes, it is hard to believe that this is all real. Are people really building a machine that could be about to kill every living thing on this planet? If this is not true, why are the best scientists in the world saying it is? If this is true, why is no one trying to do
@StopAI_Info
Stop AI🛑
1 month
Our public defender successfully subpoenaed Sam Altman to appear at our trial where we will be tried for non-violently blocking the front door of OpenAI on multiple occasions and blocking the road in front of their office. All of our non-violent actions against OpenAI were an
0 · 2 · 24