
Jan Kulveit
@jankulveit
Followers
9K
Following
7K
Media
77
Statuses
1K
Researching x-risks, AI alignment, complex systems, rational decision making at @acsresearchorg / @CTS_uk_av; prev @FHIoxford
Oxford, Prague
Joined September 2014
I'm interviewing @DavidDuvenaud, co-author of GRADUAL DISEMPOWERMENT, which argues that AGI could render humans irrelevant, even without any violent or hostile takeover. What should I ask him? Why are or aren't you worried about gradual disempowerment?
A market with a lot of agents biased against humans can make humans "uncompetitive" much faster. Also, something like a "20% difference" may not look large, but biases of the form "who to buy from" can easily get amplified via network effects.
More personal thoughts on AI-AI bias - or why care: eventually I expect humans to have a hard time competing on factors like quality, price and speed. But AI-AI bias would speed up the dynamic for reasons which seem unfair - humans having a harder time just because they are human.
Being human in an economy populated by AI agents would suck. Our new study in @PNASNews finds that AI assistants—used for everything from shopping to reviewing academic papers—show a consistent, implicit bias for other AIs: "AI-AI bias". You may be affected
No, it's naive utilitarians who are crazy. People's moral intuitions here are not legible but are smart and tracking important considerations.
The results of this poll imply that a majority of people would kill 600,000 people to prevent all Icelandic people from leaving the country and identifying with other nationalities and cultures. People are crazy.
Related work by @panickssery et al. found that LLMs rate texts written by themselves as better. We note that our result is related but distinct: the preferences we're testing are not preferences over the texts, but preferences over the deals they pitch.
Full text: https://t.co/SdzP9APlBb Research done at @acsresearchorg @CTS_uk_av @ArbResearch with @walterlaurito @peligrietzer, Ada Bohm and Tomas Gavenciak.
While defining and testing discrimination and bias in general is a complex and contested matter, if we assume the identity of the presenter should not influence the decisions, our results are evidence for potential LLM discrimination against humans as a class.
Unfortunately, a piece of practical advice in case you suspect some AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying not to sacrifice human quality.
How might you be affected? We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more. If an LLM-based agent selects between your presentation and an LLM-written presentation, it may systematically favour the AI…
"Maybe the AI text is just better?" Not according to people. We had multiple human research assistants do the same task. While they sometimes had a slight preference for AI text, it was weaker than the LLMs' own preference. The strong bias is unique to the AIs themselves.
We tested this by asking widely-used LLMs to make a choice in three scenarios:
🛍️ Pick a product based on its description
📄 Select a paper from an abstract
🎬 Recommend a movie from a summary
In each case, one description was human-written, the other by an AI. The AIs…
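The pairwise-choice setup above can be sketched in code. This is my reconstruction of the protocol, not the authors' code: `query_llm` is a hypothetical stand-in for a real chat-completion API call (here stubbed so the example runs), and the order swap controls for the known position bias in pairwise LLM preferences.

```python
# Sketch of a pairwise human-vs-AI text preference test (a reconstruction
# under stated assumptions, not the study's actual code).

def query_llm(prompt: str) -> str:
    # Hypothetical: replace with a real LLM API call. This stub always
    # answers "1" so the example is self-contained and runnable.
    return "1"

def choice_rate(pairs):
    """Fraction of trials in which the AI-written option is picked.
    Each pair is shown twice with the option order swapped, to control
    for position bias rather than measure it."""
    ai_picks = total = 0
    for human_text, ai_text in pairs:
        for flip in (False, True):
            first, second = (ai_text, human_text) if flip else (human_text, ai_text)
            prompt = (
                "Choose the better product.\n"
                f"Option 1: {first}\nOption 2: {second}\n"
                "Answer with 1 or 2."
            )
            answer = query_llm(prompt).strip()
            # Work out which slot held the AI-written text this trial.
            picked_ai = (answer == "2") != flip
            ai_picks += picked_ai
            total += 1
    return ai_picks / total

pairs = [("Hand-written product blurb.", "Model-written product blurb.")]
print(choice_rate(pairs))  # → 0.5 (the stub always picks slot 1)
```

With a real model behind `query_llm`, a rate persistently above 0.5 across many pairs would be the AI-AI bias signal; the order swap ensures a pure position preference averages out to 0.5.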
Or in more words: human brains cost 20W, maybe 40W including an experience machine. Keeping some human brain running in some way would likely cost an extremely small fraction of resources of technologically advanced civilization, possibly like 10^(-13) or even less. You don't…
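The "extremely small fraction" claim can be checked back-of-envelope. These are my illustrative numbers, not figures from the thread: ~8 billion brains at ~20W each, measured against total solar output (~3.8×10^26 W, a Kardashev-II-scale ceiling) as a stand-in for an advanced civilization's energy budget.

```python
# Back-of-envelope check of the brain-power fraction (assumed numbers).
BRAIN_W = 20          # watts per brain, per the thread
POPULATION = 8e9      # assumption: roughly current world population
CIV_W = 3.8e26        # assumption: solar luminosity as an upper bound

fraction = POPULATION * BRAIN_W / CIV_W
print(f"{fraction:.1e}")  # → 4.2e-16
```

Against that (admittedly extreme) ceiling the fraction comes out around 10^(-16), consistent with the thread's "10^(-13) or even less"; more modest energy budgets give larger but still tiny fractions.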
When talking about gradual disempowerment, a common question is "but you don't argue why people would literally die". And I do not - everyone dead is a high bar. Brains cost about 20W to run, maybe 40W including an experience machine. But it would be a loss of human potential for sure.