Jan Kulveit

@jankulveit

Followers: 9K · Following: 8K · Media: 78 · Statuses: 1K

Researching x-risks, AI alignment, complex systems, rational decision making at @acsresearchorg / @CTS_uk_av; prev @FHIoxford

Oxford, Prague
Joined September 2014
@jankulveit
Jan Kulveit
1 day
We already see some early variants: e/acc thermodynamic god, 'don't be a substrate chauvinist', opposing AI progress puts you on the wrong side of history. Mostly, they're spreading not because they're true
1
0
9
@jankulveit
Jan Kulveit
1 day
The meme pool already contains the ingredients - from 'AIs are our children' to 'historical inevitability' to warmed-over Nietzschean power worship.
1
0
7
@jankulveit
Jan Kulveit
1 day
1. Working on AGI & the risks creates a lot of cognitive dissonance. 2. Almost everyone wants to be the hero of their own story -> cultural evolution will find ideologies that resolve this tension
1
0
8
@jankulveit
Jan Kulveit
1 day
Memetics of AI successionism: why cultural evolution will predictably produce ideologies that make humanity's replacement feel virtuous or inevitable, and why these spread even if false.
2
0
13
@jankulveit
Jan Kulveit
5 days
Hot take, but I think the decision by the organizers of The Curve to invite, on the margin, fewer people who understand the topic and more people with visible power or fame was bad.
2
0
46
@DavidDuvenaud
David Duvenaud
8 days
How might the world look after the development of AGI, and what should we do about it now? Help us think about this at our workshop on Post-AGI Economics, Culture and Governance!
3
13
60
@DavidSKrueger
David Krueger
14 days
If you're interested in gradual disempowerment, consider applying to work with ACS (@jankulveit and @raymondadouglas):
0
5
28
@jankulveit
Jan Kulveit
16 days
AI Psychology & Agent Foundations ML Researcher: We need people who can bring technical and methodological rigour, taking high-level ideas about AI psychology and turning them into concrete ML experiments. This could include evaluations, mech interp, and post-training.
0
0
5
@jankulveit
Jan Kulveit
16 days
LLM Psychology & Sociology Researcher: We want people with a strong intuitive understanding of LLMs to help run empirical studies on topics like LLM introspection and self-conception, LLM social dynamics, and how ideologies spread between AIs.
1
0
7
@jankulveit
Jan Kulveit
16 days
Gradual Disempowerment Research Fellow: We're looking for polymaths who can reason about civilizational dynamics. This role comes with a lot of intellectual freedom - it could mean economic modelling, theoretical work on multi-agent dynamics, historical analysis, and more.
1
0
3
@jankulveit
Jan Kulveit
16 days
Deadline in a few weeks. These are 1-2 year appointments in Prague, London or the San Francisco Bay Area. Hiring page with more details - https://t.co/33CnDhkB20 Form to apply -
1
0
4
@jankulveit
Jan Kulveit
16 days
ACS is hiring researchers to work on LLM psychology and understanding gradual disempowerment - full time roles with autonomy, flexibility, and competitive compensation. We're looking for a mix of polymaths, ML engineers, and people with great intuitions about how AIs behave.
1
6
29
@jankulveit
Jan Kulveit
22 days
A major hole in the "complete technological determinism" argument is that it completely denies agency, or even the possibility that how agency operates at larger scales could change. Sure, humanity is not currently a very coordinated agent. But the trendline also points toward
1
9
65
@jankulveit
Jan Kulveit
22 days
This is a fine example of the thinking you get when smart people do evil things and their minds come up with smart justifications for why they are the heroes. Upon closer examination it ignores key inconvenient considerations; the normative part sounds like misleading PR.
@MechanizeWork
Mechanize
23 days
Should we create agents that fully take over people's jobs, or create AIs that merely assist human workers? This is a false choice. Full automation is inevitable, whether we choose to participate or not. The only real choice is whether to hasten the inevitable, or to sit it out.
3
13
146
@jankulveit
Jan Kulveit
25 days
WSJ covering AI successionism, aka people advocating for omnicide because of a confused belief that they have solved axiology. The general vibe is of the Dunning-Kruger effect: most of the sensible ideas in the space were considered by smart transhumanists like Bostrom and
@WSJ
The Wall Street Journal
26 days
Governments and experts are worried that a superintelligent AI could destroy humanity. For some in Silicon Valley, that wouldn’t be a bad thing, writes David A. Price.
3
8
78
@DavidDuvenaud
David Duvenaud
1 month
Me and @raymondadouglas on how AI job loss could hurt democracy. "No taxation without representation" captures how, historically, democratic rights have flowed from economic power. But this might work in reverse once we're all on UBI. Some highlights: 🧵 https://t.co/ge2ejwgMxv
@TheEconomist
The Economist
1 month
“There is no physical reason why computers and robots can’t eventually become more efficient and capable than humans,” argue Raymond Douglas and @DavidDuvenaud. “The constant demand for progress in this direction makes such a development seem inevitable”
2
10
62
@robertwiblin
Rob Wiblin
3 months
I'm interviewing @DavidDuvenaud, co-author of GRADUAL DISEMPOWERMENT, which argues that AGI could render humans irrelevant, even without any violent or hostile takeover. What should I ask him? Why are or aren't you worried about gradual disempowerment?
29
11
145
@jankulveit
Jan Kulveit
3 months
A market with a lot of agents biased against humans can make humans "uncompetitive" much faster. Also, something like a "20% difference" may not look large, but biases of the form "who to buy from" can easily get amplified via network effects (toy sketch below).
0
0
4
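The amplification claim above can be made concrete with a toy model. A minimal sketch, assuming a preferential-attachment market where each new buyer picks the human or AI seller in proportion to current customer counts times a hypothetical 1.2x per-buyer bias toward AI; the model and numbers are illustrative, not from the tweet:

```python
import random

def simulate(bias=1.2, steps=10_000, seed=0):
    """Toy preferential-attachment market: each new buyer chooses the human or
    AI seller with probability proportional to (current customer count) times
    a per-buyer bias toward the AI side. Returns the final human market share."""
    rng = random.Random(seed)
    human, ai = 1.0, 1.0  # starting "customers" on each side
    for _ in range(steps):
        p_ai = (ai * bias) / (ai * bias + human)
        if rng.random() < p_ai:
            ai += 1
        else:
            human += 1
    return human / (human + ai)

if __name__ == "__main__":
    for b in (1.0, 1.2):
        print(f"bias {b:.1f}x toward AI -> final human market share ~ {simulate(bias=b):.2f}")
```

Without reinforcement, a fixed 1.2x per-buyer bias would leave the human side with roughly 45% of buyers; with the network effect (buyers following other buyers) the bias compounds and the human share keeps drifting toward zero instead of settling near 45%.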