Thomas Larsen
@thlarsen
2K Followers · 2K Following · 5 Media · 103 Statuses
We enjoyed the opportunity for productive discussion with the authors of AI 2027 to find areas of common ground. We are also planning an “adversarial collaboration”.
11 · 30 · 247
It turns out that AI Futures and the "AI as a Normal Technology" authors have a surprisingly large amount of agreement on AI. In the near term, we both expect basically the current trends to continue: probably no AGI within a few years, but progress that's wild in some sense.
1 · 0 · 2
Who wants more AGI scenarios? I do! My colleague @romeovdean spent three days as a side-project writing up an "AI 2032" scenario. There's also some nice meta-discussion of the writing process and what he learned from it.
16 · 44 · 346
I urge everyone to read the actual bill -- the language is a substantial step forwards on both risks from foreign state and nonstate adversaries AND on loss-of-control and scheming risks from rogue AI models. If you told me three months ago we'd see this language on a
Today's new bill from @HawleyMO and @SenBlumenthal moves the debate on AI governance forward with a serious attempt at creating transparency, accountability, and guardrails for AI developers. Our statement: https://t.co/zWIHNr0YjY
3 · 13 · 125
Capabilities is almost always the crux IMO. Skeptics don't like to admit it because it is hard/impossible to defend that AIs will always be dumb
@MatthewJBar Do you expect them to end up vastly more intelligent than humanity, thinking at a speed that makes humans look like statues? If "No", disagreement is mainly about capability levels, not benignness.
0 · 0 · 8
It's weird when someone says "this tech I'm making has a 25% chance of killing everyone" and doesn't add "the world would be better off if everyone, including me, was stopped."
50 · 54 · 558
Miles convinced me that there is a lot to like about the EU AI Act. IMO EU governments will probably still be asleep at the wheel during AGI, but stuff like the AI Act makes it a bit more likely that they wake up.
Only the US can make us ready for AGI, but Europe just made us readier. The EU's new Code of Practice is an incremental but important step towards managing rapid AI progress safely. My new piece explains what it does, why it matters, and why it's not enough.
0 · 0 · 10
My column: The One Danger That Should Unite the U.S. and China
nytimes.com
The U.S. and China must agree on a trust architecture for A.I., or rogue entities will destabilize the two superpowers long before they fight a war.
14 · 30 · 100
This is actually an argument for fast takeoff! If weak AGIs are bottlenecked by getting academic grants, then they have a small impact on the world. But then, when the vastly superhuman AGIs that can get around this bottleneck arrive, we'll see discontinuous progress.
This matches my view that AGI will only make our barriers to progress even more bottleneck-y, even more painful. And Europe is probably in the most vulnerable position when it comes to this.
2 · 0 · 14
Interesting that the model knew not just that it was in an eval, but the exact task and organization running it
4 · 6 · 85
surreal how bad this graph is
3 · 0 · 33
We're right on track for the AI 2027 revenue prediction: $12B annualized revenue for OpenBrain projected for Aug 2025.
anthropic is suggesting $9 billion of revenue (annualised run-rate) by the end of this year, more than double their previous "optimistic" forecast of $4 billion. thank you, claude code
3 · 1 · 65
@herbiebradley I haven't written about this much or thought it through in detail, but here are a few aspects that go into my backdrop model: (1) especially in the long-term technological limit, I expect human labor to be wildly uncompetitive for basically any task relative to what advanced
9 · 8 · 83