Thomas Larsen

@thlarsen

Followers
2K
Following
2K
Media
5
Statuses
93

AI 2027

Joined August 2022
@thlarsen
Thomas Larsen
5 days
Very important article! I hope he's right that it's not just desirable but also likely. IMO the default case is more like AI 2027: the misaligned ASIs take over and negotiate with each other instead of the humans.
@tomfriedman
Thomas L. Friedman
6 days
My column: The One Danger That Should Unite the U.S. and China
1
0
10
@thlarsen
Thomas Larsen
26 days
This is actually an argument for fast takeoff! If weak AGIs are bottlenecked by getting academic grants, then they have little impact on the world. But when the vastly superhuman AGIs that can get around this bottleneck arrive, we'll see discontinuous progress.
@AlexTPet
Alex Petropoulos 🤠
27 days
This matches my view that AGI will only make our barriers to progress even more bottleneck-y, even more painful. And Europe is probably in the most vulnerable position when it comes to this.
2
0
15
@thlarsen
Thomas Larsen
1 month
RT @eli_lifland: Interesting that the model knew not just that it was in an eval, but the exact task and organization running it https://t.….
0
6
0
@thlarsen
Thomas Larsen
1 month
surreal how bad this graph is.
@romeovdean
Romeo Dean
1 month
52.8 > 69.1 = 30.8
3
0
34
@thlarsen
Thomas Larsen
1 month
RT @romeovdean: 52.8 > 69.1 = 30.8
0
6
0
@thlarsen
Thomas Larsen
1 month
0
10
0
@thlarsen
Thomas Larsen
1 month
We're right on track for the AI 2027 revenue prediction. $12B/yr annualized revenue for OpenBrain projected for Aug 2025
@morqon
morgan —
1 month
anthropic is suggesting $9 billion of revenue (annualised run-rate) by the end of this year, more than double their previous “optimistic” forecast of $4 billion. thank you, claude code.
3
1
64
@thlarsen
Thomas Larsen
1 month
I found this pretty helpful for understanding this bill.
@Miles_M_K
Miles K
1 month
I made a website to explain SB 53, a proposed California state law that would require large AI developers to be more transparent about their safety practices. The site shows you the bill text enriched with explanatory annotations.
0
0
6
@thlarsen
Thomas Larsen
2 months
RT @jkcarlsmith: @herbiebradley I haven't written about this much or thought it through in detail, but here are a few aspects that go into….
0
8
0
@thlarsen
Thomas Larsen
2 months
RT @hlntnr: Spearphishing PSA—looks like there's a concerted attack on AI safety/governance folks going around. Be wary of calendar links v….
0
51
0
@thlarsen
Thomas Larsen
2 months
I like thinking for myself, so I try to never defer to anyone. But if I did, I'd defer to Ryan. Worth listening to, many important considerations discussed here.
@robertwiblin
Rob Wiblin
2 months
Ryan Greenblatt is lead author of "Alignment faking in LLMs" and one of AI's most productive researchers. He puts a 25% probability on automating AI research by 2029. We discuss:
• Concrete evidence for and against AGI coming soon
• The 4 easiest ways for AI to take over
•
1
2
44
@thlarsen
Thomas Larsen
2 months
The main sycophancy threat model is that humans are imperfect raters, so training AIs with human feedback will naturally lead the AIs to learn to produce outputs that look good to the human raters but are not actually good. This is pretty clear in the AI safety
@random_walker
Arvind Narayanan
2 months
A few people have asked me if a technical fix for AI model sycophancy is on the cards. In fact, a technical fix for sycophancy is trivial. In many cases all it would take is a tweak to the system prompt. The reason companies are struggling to get this right is not technical.
2
9
74
@thlarsen
Thomas Larsen
2 months
I agree with Eli that these are important areas. But IMO the most important jobs in the world probably aren't on this list; instead, they are things like:
- Starting a new org to fill a huge hole in the AI safety ecosystem
- Getting a job that could impact the overall USG
@eli_lifland
Eli Lifland
2 months
There are many ways to use one's career to help AGI go better, here we list some of the top ones.
2
1
54
@thlarsen
Thomas Larsen
2 months
RT @eli_lifland: Since AI 2027 people have often asked us what they can do to make AGI go well. I've just published a blog post covering:.(….
0
49
0
@thlarsen
Thomas Larsen
2 months
Want to get up to speed on AI? My top recommendations are:
- AI 2027
- Situational Awareness
- AGI Ruin: A List of Lethalities
- OpenAI Email Archives (from Musk v Altman)
- Binging all the AI-related Dwarkesh podcast episodes
7
6
134
@thlarsen
Thomas Larsen
2 months
Best Congressional AI hearing so far IMO. Great questions all around. I appreciated this one in particular, which was focused on the core issue of automated AI R&D.
@RepNateMoran
Congressman Nathaniel Moran
2 months
We must urgently assess how far Chinese AI systems have come—and work with U.S. industry to contain the risks of automated AI R&D. Because once an AI starts improving itself, the race changes entirely.
0
0
12
@thlarsen
Thomas Larsen
2 months
Great post on one of the AI 2027 TTXs! I strongly agree with "The biggest threat to a rogue AI is … other AI?".
@sjgadler
Steven Adler
3 months
New post! A crisis simulation changed how I think about AI risk
4
3
41
@thlarsen
Thomas Larsen
3 months
RT @BethMayBarnes: I had a lot of fun chatting with Rob about METR's work. I stand by my claims here that the world is not on track to keep….
0
33
0
@thlarsen
Thomas Larsen
3 months
This claim from "AI as a normal technology" is clearly wrong, and I'm disappointed that it has gotten so much traction.
1. A lower bound for the capabilities of ASI is a human, but sped up by a factor of 100x and working 24/7.
2. This would already clearly be
2
4
87