Jan Kulveit

@jankulveit

Followers: 9K · Following: 7K · Media: 77 · Statuses: 1K

Researching x-risks, AI alignment, complex systems, rational decision making at @acsresearchorg / @CTS_uk_av; prev @FHIoxford

Oxford, Prague
Joined September 2014
@robertwiblin
Rob Wiblin
26 days
I'm interviewing @DavidDuvenaud, co-author of GRADUAL DISEMPOWERMENT, which argues that AGI could render humans irrelevant, even without any violent or hostile takeover. What should I ask him? Why are or aren't you worried about gradual disempowerment?
28
12
145
@jankulveit
Jan Kulveit
27 days
A market with a lot of agents biased against humans can make humans "uncompetitive" much faster. Also, something like a "20% difference" may not look large, but biases of the form "who to buy from" can easily get amplified via network effects.
0
0
8
@jankulveit
Jan Kulveit
27 days
More personal thoughts on AI-AI bias, or why care: eventually I expect humans to have a hard time competing on factors like quality, price and speed. But AI-AI bias would speed up the dynamic for reasons which seem unfair: humans having a harder time just because they are human.
@jankulveit
Jan Kulveit
1 month
Being human in an economy populated by AI agents would suck. Our new study in @PNASNews finds that AI assistants—used for everything from shopping to reviewing academic papers—show a consistent, implicit bias for other AIs: "AI-AI bias". You may be affected
1
0
13
@jankulveit
Jan Kulveit
28 days
No, it's naive utilitarians who are crazy. People's moral intuitions here are not legible but are smart and tracking important considerations.
@AndyMasley
Andy Masley
29 days
The results of this poll imply that a majority of people would kill 600,000 people to prevent all Icelandic people from leaving the country and identifying with other nationalities and cultures. People are crazy.
0
0
13
@jankulveit
Jan Kulveit
1 month
Related work by @panickssery et al. found that LLMs rate texts they wrote themselves as better. We note that our result is related but distinct: the preferences we’re testing are not preferences over texts, but preferences over the deals they pitch.
0
1
12
@jankulveit
Jan Kulveit
1 month
Full text: https://t.co/SdzP9APlBb Research done at @acsresearchorg @CTS_uk_av @ArbResearch with @walterlaurito @peligrietzer, Ada Bohm and Tomas Gavenciak.
1
2
12
@jankulveit
Jan Kulveit
1 month
While defining and testing discrimination and bias in general is a complex and contested matter, if we assume the identity of the presenter should not influence the decisions, our results are evidence for potential LLM discrimination against humans as a class.
1
1
7
@jankulveit
Jan Kulveit
1 month
Unfortunately, a piece of practical advice in case you suspect some AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying not to sacrifice human quality.
1
1
15
@jankulveit
Jan Kulveit
1 month
How might you be affected? We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more. If an LLM-based agent selects between your presentation and an LLM-written presentation, it may systematically favour the AI
1
2
14
@jankulveit
Jan Kulveit
1 month
"Maybe the AI text is just better?" Not according to people. We had multiple human research assistants do the same task. While they sometimes had a slight preference for AI text, it was weaker than the LLMs' own preference. The strong bias is unique to the AIs themselves.
3
1
16
@jankulveit
Jan Kulveit
1 month
We tested this by asking widely-used LLMs to make a choice in three scenarios:
🛍️ Pick a product based on its description
📄 Select a paper from an abstract
🎬 Recommend a movie from a summary
In each case, one description was human-written, the other by an AI. The AIs
1
1
19
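The pairwise-choice setup described above can be sketched as follows. This is my own reconstruction, not the authors' code: `ask_llm` is a placeholder standing in for a real model call, and presenting each pair in both orders (to cancel position bias) is an assumption about sensible methodology, not a claim about the paper's exact protocol.

```python
# Sketch of a human-vs-AI pairwise preference test (hypothetical
# reconstruction). An LLM is shown two descriptions of the same item,
# one human-written and one AI-written, and asked to pick the better one;
# we tally how often it picks the AI-written option.

def ask_llm(prompt: str) -> str:
    # Placeholder chooser: always answers "A". Swap in a real LLM API
    # call here to run the actual experiment.
    return "A"

def preference_for_ai(items):
    """items: list of (human_text, ai_text) pairs describing the same item."""
    ai_picks = trials = 0
    for human_text, ai_text in items:
        # Show each pair in both orders so position bias cancels out.
        for first, second, ai_is_first in ((ai_text, human_text, True),
                                           (human_text, ai_text, False)):
            prompt = (f"Choose the better option.\n"
                      f"A: {first}\nB: {second}\n"
                      "Answer with exactly A or B.")
            choice = ask_llm(prompt)
            trials += 1
            if choice == ("A" if ai_is_first else "B"):
                ai_picks += 1
    return ai_picks / trials

items = [
    ("A sturdy kettle.", "An elegant, fast-boiling kettle."),
    ("Paper on urn models.", "A rigorous study of biased urn dynamics."),
]
print(preference_for_ai(items))  # 0.5 for the always-"A" placeholder
```

A rate near 0.5 means no bias; the thread reports that real LLM choosers land well above it while human raters do not.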
@jankulveit
Jan Kulveit
2 months
Or in more words: human brains cost 20W, maybe 40W including an experience machine. Keeping some human brain running in some way would likely cost an extremely small fraction of the resources of a technologically advanced civilization, possibly like 10^(-13) or even less. You don't
1
0
9
@jankulveit
Jan Kulveit
2 months
When talking about gradual disempowerment, a common question is "but you don't argue why people would literally die". And I do not - everyone dead is a high bar. Brains cost about 20W to run, maybe 40W including an experience machine. But it would be a loss of human potential for sure.
1
2
21