Thomas Larsen

@thlarsen

Followers: 2K · Following: 2K · Media: 5 · Statuses: 103

AI 2027

Joined August 2022
@random_walker
Arvind Narayanan
2 days
We enjoyed the opportunity for productive discussion with the authors of AI 2027 to find areas of common ground. We are also planning an “adversarial collaboration”.
11
30
247
@thlarsen
Thomas Larsen
2 days
Link here:
0
0
1
@thlarsen
Thomas Larsen
2 days
It turns out that AI Futures and the "AI as a normal technology" authors have a surprisingly large amount of agreement on AI. In the near term, we both expect basically the current trends to continue: probably no AGI within a few years, but progress that's wild in some sense
1
0
2
@DKokotajlo
Daniel Kokotajlo
8 days
Who wants more AGI scenarios? I do! My colleague @romeovdean spent three days as a side-project writing up an "AI 2032" scenario. There's also some nice meta-discussion of the writing process and what he learned from it.
16
44
346
@ckoopman
Christopher Koopman
29 days
@BenjiBacker
Benji Backer
30 days
At some point we should probably start caring about AI data centers consuming all of our energy and water
10
32
314
@MickBransfield
Mick Bransfield
2 months
This is really bad.
@MickBransfield
Mick Bransfield
2 months
New study of 2,000 national security experts: "national security officials’ intuitions are overwhelmingly overconfident...when study participants estimated that statements had a 90 percent chance of being true, those statements were true just 58 percent of the time."
1
5
17
@David_Kasten
dave kasten
2 months
I urge everyone to read the actual bill -- the language is a substantial step forwards on both risks from foreign state and nonstate adversaries AND on loss-of-control and scheming risks from rogue AI models. If you told me three months ago we'd see this language on a
@americans4ri
Americans for Responsible Innovation
2 months
Today's new bill from @HawleyMO and @SenBlumenthal moves the debate on AI governance forward with a serious attempt at creating transparency, accountability, and guardrails for AI developers. Our statement: https://t.co/zWIHNr0YjY
3
13
125
@thlarsen
Thomas Larsen
2 months
Capabilities are almost always the crux IMO. Skeptics don't like to admit it because it is hard/impossible to defend that AIs will always be dumb
@ESYudkowsky
Eliezer Yudkowsky ⏹️
2 months
@MatthewJBar Do you expect them to end up vastly more intelligent than humanity, thinking at a speed that makes humans look like statues? If "No", disagreement is mainly about capability levels, not benignness.
0
0
8
@So8res
Nate Soares ⏹️
2 months
It's weird when someone says "this tech I'm making has a 25% chance of killing everyone" and doesn't add "the world would be better-off if everyone, including me, was stopped."
50
54
558
@thlarsen
Thomas Larsen
2 months
Miles convinced me that there is a lot to like about the EU AI Act. IMO EU governments will probably still be asleep at the wheel during AGI, but stuff like the AI Act makes it a bit more likely that they wake up
@Miles_M_K
Miles Kodama
2 months
Only the US can make us ready for AGI, but Europe just made us readier. The EU's new Code of Practice is an incremental but important step towards managing rapid AI progress safely. My new piece explains what it does, why it matters, and why it's not enough.
0
0
10
@thlarsen
Thomas Larsen
2 months
Very important article! I hope he's right that it's not just desirable but also likely. IMO the default case is more like AI 2027: the misaligned ASIs take over and negotiate with each other instead of the humans.
@tomfriedman
Thomas L. Friedman
2 months
My column: The One Danger That Should Unite the U.S. and China
1
0
10
@thlarsen
Thomas Larsen
3 months
This is actually an argument for fast takeoff! If the weak AGIs are bottlenecked by getting academic grants, then they have a small impact on the world. But then, when the vastly superhuman AGIs that can get around this bottleneck arrive, we'll see discontinuous progress.
@AlexTPet
Alex Petropoulos 🤠
3 months
This matches my view that AGI will only make our barriers to progress even more bottleneck-y, even more painful. And Europe is probably in the most vulnerable position when it comes to this.
2
0
14
@eli_lifland
Eli Lifland
3 months
Interesting that the model knew not just that it was in an eval, but the exact task and organization running it
4
6
85
@thlarsen
Thomas Larsen
3 months
surreal how bad this graph is
@romeovdean
Romeo Dean
3 months
52.8 > 69.1 = 30.8
3
0
33
@romeovdean
Romeo Dean
3 months
52.8 > 69.1 = 30.8
7
6
154
@ESYudkowsky
Eliezer Yudkowsky ⏹️
3 months
13
10
465
@thlarsen
Thomas Larsen
4 months
We're right on track for the AI 2027 revenue prediction. $12B/yr annualized revenue for OpenBrain projected for Aug 2025
@morqon
morgan —
4 months
anthropic is suggesting $9 billion of revenue (annualised run-rate) by the end of this year, more than double their previous "optimistic" forecast of $4 billion. thank you, claude code
3
1
65
@thlarsen
Thomas Larsen
4 months
I found this pretty helpful for understanding this bill
@Miles_M_K
Miles Kodama
4 months
I made a website to explain SB 53, a proposed California state law that would require large AI developers to be more transparent about their safety practices. The site shows you the bill text enriched with explanatory annotations.
0
0
6
@jkcarlsmith
Joe Carlsmith
4 months
@herbiebradley I haven't written about this much or thought it through in detail, but here are a few aspects that go into my backdrop model: (1) especially in the long-term technological limit, I expect human labor to be wildly uncompetitive for basically any task relative to what advanced
9
8
83