Sentience Institute
@SentienceInst
Followers: 3K · Following: 107 · Media: 38 · Statuses: 173
Sentience Institute is a nonprofit research organization studying the rise of digital minds and moral circle expansion.
Joined February 2017
Our new paper in the British Journal of Social Psychology describes what people expect in a future with "widespread AI sentience." The average U.S. adult thinks AI welfare will be an important social issue and that sentient AIs will be exploited for their labor.
sentienceinstitute.org
We are pleased to announce our latest peer-reviewed publication, “World-making for a future with sentient AI” in the British Journal of Social Psychology.
"Average responses to six forecasts (exploiting #AI labour, treating #AI cruelly, using #AI as research subjects, #AI welfare, #AI rights advocacy, #AI unhappiness reduction) showed mixed expectations for humanity's future with #AI." @SentienceInst
https://t.co/gPbt3kZbw3
Our 2024 end-of-year blog post is short because we have been neck-deep in research. AI is moving fast, so we need to keep up and help ensure it benefits all sentient beings. As always, we are grateful for your continued support.
sentienceinstitute.org
Our new preprint shows the first detailed public opinion data on digital sentience: 76% agree torturing sentient AIs is wrong; 69% support a ban on sentient AI; 63% support a ban on AGI; and a median forecast of 5 years to sentient AI and only 2 to AGI! https://t.co/c2ATXdAUDI
Our new paper at #CHI2024 compares 11 features of AIs by how they affect moral concern for the AI's welfare. We found people care most about AIs with physical bodies (so not just an LLM like ChatGPT) and prosocial behavior: we care about an AI who can care about us!
sentienceinstitute.org
Our latest paper “Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration” was just presented in Honolulu at CHI, the flagship conference of the field...
In collaboration with the HRI lab at @UChicago, our recent paper published in #HRI2024 provides a taxonomy of robot autonomy that replaces the current simple notion (i.e., more or less human involvement) with six forms that can have very different effects.
sentienceinstitute.org
We recently published “A Taxonomy of Robot Autonomy for Human-Robot Interaction” in collaboration with the Human-Robot Interaction (HRI) lab at the University of Chicago.
Why do some AIs strike fear while other AIs are embraced as close companions? In our #CHI2024 paper we test 11 features across 30,238 AI profiles. People care most about AIs with (i) a human-like body and (ii) prosocial behavior, which seems to mitigate the natural threat we feel from AI.
As AI grows more powerful, measures of AI safety and impact have struggled to keep up. Our new #HRI2024🤖 paper (presented today, best paper nominee) builds "A Taxonomy of Robot Autonomy for Human-Robot Interaction" with 6 forms of autonomy to explain what AI can and can't do. 🧵
Our latest blog post by Justin Bullock and Janet Pauketat summarizes key AI policy insights from the Artificial Intelligence, Morality, and Sentience (AIMS) survey to elucidate public opinion on pivotal AI safety issues. https://t.co/VL9ILtCFlb
In our latest podcast, philosopher Eric Schwitzgebel (@eschwitz) discusses some of his recent papers on the moral status and sentience of AI. https://t.co/ZxXQqB2lmV
This #GivingTuesday, please RT and consider a donation to support our research on digital minds. Every bit helps to understand human-AI interaction and build a better future. What we achieved in 2023: https://t.co/D821JdWjjo Donate: https://t.co/jx3zZjmohm
Our new paper in Social Cognition details two experiments that tested whether perspective taking (perceiving a situation from another's point of view) can have positive effects in the contexts of animals and intelligent artificial entities. https://t.co/5LOTDq5nqc
What do people think about policies related to sentient AI?
- 58% at least somewhat agree with a global ban on the development of AGI smarter than humans.
- 32% at least somewhat agree with granting legal rights to sentient robots/AIs.
The median U.S. adult thinks artificial general intelligence (AGI) is only 2 years away and that superintelligence, human-level AI, and sentient AI are only 5 years away. Moreover, 34% think AGI has already been created!
What do people think about sentient AI? From the 2023 nationally representative AIMS supplemental survey (N = 1,099):
- 38% of U.S. adults say it’s possible for AIs to be sentient.
- 20% say there are currently AIs that are sentient.
- 10% say ChatGPT is sentient.
The 2023 nationally representative Artificial Intelligence, Morality, and Sentience (AIMS) survey shows Americans are more alarmed about AI since ChatGPT and have surprisingly high concern for sentient AI rights. Main: https://t.co/WEdd6pdto4 Supplement: https://t.co/TuBNtNUa43
In our latest podcast, philosopher Raphaël Millière (@raphaelmilliere) discusses fundamental questions about the technical and cognitive capacities of large language models. https://t.co/upT1OuqIfi
Our latest report by Bradford Saad explores interactions between large-scale social simulations and catastrophic risks. https://t.co/bXo1Q8ExsD
Governments, farmed animal advocates, and AI safety advocates rely on social influence strategies to intentionally change others’ attitudes and behaviors. Our latest blog post by @janet_pauketat reviews the literature on persuasion and social influence. https://t.co/sXMvblxsoU
Our co-founder, Jacy Reese Anthis, recently spoke with Annie Lowrey at The Atlantic about digital minds. You can find links to this conversation and more on our website via the link below. https://t.co/9PmEIf1u1R