
Iason Gabriel
@IasonGabriel
Followers 5K · Following 9K · Media 87 · Statuses 1K
Philosopher & Research Scientist @GoogleDeepMind | Humanity, Ethics & Alignment Team Lead | #TIME100AI | All views are my own
Joined November 2019
1. What are the ethical and societal implications of advanced AI assistants? What might change in a world with more agentic AI? Our new paper explores these questions. It’s the result of a one-year research collaboration involving 50+ researchers… a 🧵
RT @hannahrosekirk: Listen up all talented early-stage researchers! 👂🤖 We're hiring for a 6-month residency in my team at @AISecurityInst…
RT @ma_tay_: Happy that "Value Profiles for Encoding Human Variation" was accepted to EMNLP! I'm proud of this paper - it combines my inte…
RT @rcbregman: AMAZING result from a 10-year study: a one-time $1,000 cash transfer to households in rural Kenya cut infant mortality by 48…
RT @GoogleDeepMind: As AI agents begin to take action in the real world, it's critical we develop new ethical frameworks to ensure they're…
6/ Deep human-AI relationships create risks of emotional harm and manipulation. We argue that agents must be designed to benefit users, respect their autonomy, demonstrate care & support a flourishing human life. (See also work on "socioaffective alignment" w. @hannahrosekirk)
1/ Together with Geoff Keeling, @Arianna_Manzini & @profjamesevans, we ask: What is an AI agent? Our answer: a system that can perceive its environment and act autonomously to achieve a goal. The capacity for real-world action breaks the 4th wall and has profound implications.
Pleased to share our new piece in @Nature, titled "We Need a New Ethics for a World of AI Agents". AI systems are undergoing an ‘agentic turn’, shifting from passive tools to active participants in our world. This moment demands a new ethical framework.
RT @NeelNanda5: Take: Chain of Thought is a misleading name. It's really a "scratchpad". "Thoughts" are internal activations. Imagine you'r…
RT @KobiHackenburg: Today (w/ @UniofOxford @Stanford @MIT @LSEnews) we’re sharing the results of the largest AI persuasion experiments to d…
This paper is absolutely essential reading for anyone interested in developing a science of AI safety and evaluation. I esp. appreciate the “principle of parsimony”: behaviours should not be attributed to complex mental processes if simpler explanations are available ✅
In a new paper, we examine recent claims that AI systems have been observed ‘scheming’, or making strategic attempts to mislead humans. We argue that to test these claims properly, more rigorous methods are needed.
We’re hiring a sociological research scientist @GoogleDeepMind! Work with the inimitable @KLdivergence, @weidingerlaura, @iamtrask, @canfer_akbulut, Julia Haas & many others 🙌
RT @RuneKvist: Insurance is an underrated way to unlock secure AI progress. Insurers are incentivized to truthfully quantify and track ris…