Alex Amadori Profile
Alex Amadori

@testdrivenzen

Followers
74
Following
81
Media
8
Statuses
21

Policy research at @ConjectureAI

London
Joined October 2024
@testdrivenzen
Alex Amadori
7 days
We model how rapid AI development may reshape geopolitics if there is no international coordination on preventing dangerous AI development. This results in bleak outcomes: a “winner” achieving permanent global dominance, human extinction, or preemptive major power war.
2
12
67
@liron
Liron Shapira
2 years
Marc Andreessen (@pmarca)'s recent essay, “Why AI Will Save the World”, didn't meet the standards of discourse. ♦️ Claiming AI will be safe & net positive is his right, but the way he’s gone about making that claim has been undermining conversation quality. 🧵 Here's the proof:
35
91
542
@testdrivenzen
Alex Amadori
6 days
What you're seeing here is a process that can generate useful decisions in a domain (math), and it can do so across novel, unexpected variations. Useful decisions are very "rare" in this domain, and you can only guess them by having an ability to pick them out of the
@wtgowers
Timothy Gowers
7 days
@fchollet What would you say of an LLM that can answer questions about a piece of maths, giving sensible helpful answers almost all the time but occasionally saying things that are bizarrely wrong? Is that fake understanding, partial understanding, or what?
0
0
3
@testdrivenzen
Alex Amadori
7 days
This is a paper by me, @Gabe_cc, @_andreamiotti, and @_ebehrens_. Check it out here:
0
0
13
@testdrivenzen
Alex Amadori
7 days
Being a democracy or a middle power increases risks from these factors. As superpowers' companies automate their economies, middle powers will lose diplomatic leverage. Democracies are vulnerable to mass manipulation, and concentration of power is antithetical to their values.
1
0
13
@testdrivenzen
Alex Amadori
7 days
If AI progress plateaus before automating AI R&D, future trajectories are harder to predict. We don't model this case in detail, but we point out some risks, like:
- New (potentially MAD-breaking) military capabilities
- Extreme concentration of power
- Large-scale manipulation
1
0
13
@testdrivenzen
Alex Amadori
7 days
Individually, middle powers are largely unable to stop superpowers' AI R&D programs. They may pursue the Vassal's Wager strategy, allying with a superpower, hoping that it wins. However, they would have no recourse against an ASI-wielding superpower violating their sovereignty.
1
0
14
@testdrivenzen
Alex Amadori
7 days
If the course of AI R&D is predictable, or if AI R&D ops are highly visible to opponents, laggards in the race eventually realize time is not on their side. Thus, they likely initiate a violent strike aimed at disabling the leader's AI program, leading to Major Power War.
1
0
14
@testdrivenzen
Alex Amadori
7 days
Once AI automates the key bottlenecks of AI R&D, a single factor will determine geopolitical outcomes: who controls the strongest AI. If the best AI also produces the fastest improvements, the leader’s advantage can only grow with time until it produces a DSA or control is lost.
1
0
14
@hlntnr
Helen Toner
7 months
Sharing the very first post on my new substack*, about the weird boiling frog of AI timelines. Somehow expecting human-level systems in the 2030s is now a conservative take? 2 more posts to come this week, then a slower pace. Link in thread. Subscribe, tell your friends!
14
47
331
@testdrivenzen
Alex Amadori
2 months
Non-experts often hear only one story about the future of AI: the one that dominates in their social circle. With this, we hope to provide a concise and digestible overview of the prevalent stances among experts on the expected trajectory of AI progress and its consequences.
1
0
9
@testdrivenzen
Alex Amadori
2 months
The Replacement doctrine expects weaker AI, incapable of seizing a decisive strategic advantage. Such AI could greatly accelerate scientific and economic progress, but risks causing geopolitical and economic destabilization through extreme unemployment and mass manipulation.
3
1
9
@testdrivenzen
Alex Amadori
2 months
Dominance and Extinction hold that ASI will be developed soon; they disagree on whether control will be maintained over such a system. If control is maintained, the first ASI-wielding actor achieves utter strategic advantage over all others; otherwise, human extinction follows.
2
1
8
@testdrivenzen
Alex Amadori
2 months
AI experts envision a wide range of outcomes, from out-of-control superintelligent AI causing human extinction to weaker AIs accelerating scientific progress. We look at how expert beliefs form 3 main clusters which we call the Dominance, Extinction and Replacement doctrines.
5
15
48
@testdrivenzen
Alex Amadori
5 months
A post about excuses. If you read this blog, you might be familiar with the concept described in the old lesswrong post "The Bottom Line". That's when you write the "conclusion" of your argument first (say, 2 = 3), then you write the rest of your argument as convincingly as
0
0
0
@testdrivenzen
Alex Amadori
6 months
@ilex_ulmus Some things are subject to network effects. You can't do them by yourself. Coercing others to join is not the only solution - we should have (secular) ways for people to enter strong mutual agreements. The problem is that most things of this shape are prisoner's dilemmas. Think
0
1
0
@testdrivenzen
Alex Amadori
11 months
new post
1
0
0