Joshua Clymer Profile

Joshua Clymer (@joshua_clymer)
Followers: 2K · Following: 792 · Media: 44 · Statuses: 429

Turtle hatchling trying to make it to the ocean. I work at Redwood Research.

Joined April 2022

Joshua Clymer (@joshua_clymer) · 8 days
I'm confused why global AI coordination is so strongly associated with a concentration of power. Global coordination => no single leader => diffusion of power?
Helen Toner (@hlntnr) · 8 days
Wait, I finally listened to the Peter Thiel Antichrist interview, and the Antichrist part is literally just the same as my dynamism essay. Peter Thiel đŸ€ me: Totalitarianism is not a great solution to AI risks, actually.

Joshua Clymer (@joshua_clymer) · 15 days
Which phrase for "an international AI slowdown before superintelligence" would you want to be popularized?

Joshua Clymer (@joshua_clymer) · 1 month
RT @inferencemag: Inference is hosting some of the world’s leading experts for a debate on the possibility and potential consequences of au


Joshua Clymer (@joshua_clymer) · 2 months
It's not obvious to me that developers will need this level of precision. But I think it's helpful to write the most thorough version of a safety case as a starting point for simplifications and relaxations.

Joshua Clymer (@joshua_clymer) · 2 months
We err on the side of dotting i's and crossing t's—for instance, including quantitative risk estimates and models of uplift. See this fun interactive website that explains our model:

Joshua Clymer (@joshua_clymer) · 2 months
How do we know if AI systems are safe from misuse? The short answer: have a red team try to misuse them, and measure the effort required. But the devil is in the details—which I dive into in a safety case written with @_robertkirk and others.
Robert Kirk (@_robertkirk) · 2 months
New paper! With @joshua_clymer, Jonah Weinbaum and others, we’ve written a safety case for safeguards against misuse. We lay out how developers can connect safeguard evaluation results to real-world decisions about how to deploy models. đŸ§”

Joshua Clymer (@joshua_clymer) · 2 months
Principles for a positive ASI future:
- People maintain at least their current standards of health, living conditions, and political representation.
- The benefits created by ASI are distributed so as to promote equitable human empowerment.
- Humans can control the resources at


Joshua Clymer (@joshua_clymer) · 2 months
The greatest AI security threat isn’t stealing secrets—it’s planting secret loyalties into models. Whoever controls the AI that seeds an intelligence explosion controls the future.

Joshua Clymer (@joshua_clymer) · 2 months
no matter what you think about ai alignment, we'll eventually need gov oversight of AI to preserve democracy. otherwise, there's no way to stop ASI from manipulating voters. "If I don't do it, my competitor will." those will be the last words of the free world.

Joshua Clymer (@joshua_clymer) · 2 months
do you think plan A should be to coordinate a multi-year global slowdown of AI before ASI?

Joshua Clymer (@joshua_clymer) · 2 months
RT @AliciaP59828402: Guys I'll read the book but pleaseee - you still have time to change the cover art! đŸ˜©

Joshua Clymer (@joshua_clymer) · 2 months
this is bad news. the more nations that have lots of AI chips, the more difficult coordinating a multi-year slowdown will be.
Miles Brundage (@Miles_Brundage) · 2 months
The US is making the UAE and Saudi Arabia into great AI powers of their own (alongside the US and China) for little apparent benefit other than a few people getting very rich.

Joshua Clymer (@joshua_clymer) · 2 months
I'm glad for more serious investigation of existential risk from think tanks, but I think ASI will be much better at identifying paths to human extinction than these authors. I wish they did not make sweeping claims like "Extinction threats posed by AI are immensely challenging".
David Krueger (@DavidSKrueger) · 2 months
A new RAND report on AI x-risk is shockingly bad; I don't see how it got past their internal peer review. There are many issues, but the main critical flaw is the conflation of "It seems hard to me" with "It will be hard for a superintelligent AI". Other issues:
- Not grappling


Joshua Clymer (@joshua_clymer) · 2 months
figure out sleep. figure out stimulants. if only i'd known earlier how important those goals are.

Joshua Clymer (@joshua_clymer) · 2 months
(this is a real question, not a rhetorical one).

Joshua Clymer (@joshua_clymer) · 2 months
What's preventing companies from reducing the time to deployment? In the early bend of an intelligence explosion, I hope companies deploy faster than they do now (so the world can see the crazy amount of progress happening). Right now the lag seems to be 4+ months. Why so long?

Joshua Clymer (@joshua_clymer) · 2 months
AI companies can potentially share internal misalignment incidents without hurting their reputation -- just give them to a third party who will anonymize their origin and publish them.

Joshua Clymer (@joshua_clymer) · 3 months
RT @NeelNanda5: New post: I'm all for investment in interpretability but IMO this overstates its importance vs other safety methods. I disa


Joshua Clymer (@joshua_clymer) · 3 months
RT @DKokotajlo: Indeed. Good luck to us all.

Joshua Clymer (@joshua_clymer) · 3 months
There's an argument for setting a norm of caution early. I'm not arguing against that. I'm arguing against restricting deployment. ASI poses a pretty similar x-risk whether it is deployed or not. The norm should be "cautious about development, and quick about deployment".