Peng Liu Profile
Peng Liu

@PengLiu2023

Followers
90
Following
74
Media
1
Statuses
90

Study the relationships between humans, society, and machines (mainly automated vehicles and AI machines) at Zhejiang Uni. (formerly at Tianjin Uni.)

Joined May 2023
@PengLiu2023
Peng Liu
8 days
📢 New paper alert! HFE began in domains like the military, aviation & nuclear power—but does it still focus there? We analyzed 2011–2023 publication trends and found notable shifts, plus key journal differences: Ergonomics shows greater global diversity than Human Factors. @HFES
@ergonomics1957
Ergonomics
17 days
Investigating Human Factors and Ergonomics research: a 4S framework: https://t.co/qIUTcrE8qG
0
0
0
@PengLiu2023
Peng Liu
1 month
4/ This paper—"Examining Cross-Cultural Differences in Intelligent Vehicle Agents: Repair Strategies after Their Failures"—was published at AutomotiveUI ’25 and received a Best Paper nomination (1 of 6). 🔗 https://t.co/8FZaUcpW48 #AutomotiveUI #HMI #HumanFactors
dl.acm.org
0
0
0
@PengLiu2023
Peng Liu
1 month
3/ Our findings: Culture shapes not just how IVAs look, but how they “make amends.” 👉 Designing culturally adaptive IVAs is crucial for user experience and trust.
1
0
0
@PengLiu2023
Peng Liu
1 month
2/ We compared 8 IVAs (5 Chinese, 3 Western).
-- Both used corrective + repair behaviors.
-- Chinese IVAs more often combined strategies and added intimacy.
-- Western IVAs leaned on simpler, single behaviors.
1
0
0
@PengLiu2023
Peng Liu
1 month
1/ 🚗🤖 What happens when your car’s AI fails to follow your request? Does it apologize—and if so, how? We studied cross-cultural differences in intelligent vehicle agents (IVAs) and their repair strategies after failures.
1
0
0
@briandavidearp
Brian D. Earp, Ph.D.
3 months
Final call for global collaborators on this replication collaboration on @bioxphi studies — moral psychologists and x-phi researchers wanted! especially based in SOUTHERN HEMISPHERE! global-bioXphi | moral science lab https://t.co/gGRko86kss
0
7
7
@PengLiu2023
Peng Liu
3 months
(1/2) Will a machine be biased against another machine? Turns out yes — GPT-4 (like humans) shows a negative bias toward driverless cars. Not from training data, but maybe from its own “moral reasoning” process.
1
0
0
@bioxphi
Experimental Philosophical Bioethics (BioXPhi)
4 months
🚨 Global BioXPhi Research Initiative 🚨 Assessing Replicability and Cross-Cultural Generalisability of Experimental Philosophical Bioethics (@bioxphi) studies! Exciting initiative led by Ivar Hannikainen (@moralsciencelab) and @briandavidearp (@NUS_CBmE) https://t.co/z5qlvGmTEP
0
5
4
@ddwoods2
David Woods
6 months
here is a new piece, out this morning, on Aviation Safety in the US given recent events. Maybe useful beyond aviation and a reasonable brief explanation of proactive safety (and we never stop needing these). https://t.co/GSl5xoKHgt
thebulletin.org
The magnitude and frequency of aviation incidents have stimulated a search for answers ranging from small fixes to extreme makeovers.
1
1
2
@briandavidearp
Brian D. Earp, Ph.D.
7 months
Friend, tutor, doctor, lover: why AI systems need different rules for different roles
theconversation.com
What we want from AI systems depends on the kind of relationships they are trying to simulate.
1
8
15
@juliansavulescu
Julian Savulescu
8 months
Would you consider chances of developing health conditions or traits when choosing an embryo for IVF? Make your choice and find out more! Try it here ➡️ https://t.co/dosnv7nKSi Article:
uehiro.ox.ac.uk
Researchers at the University of Oxford, University of Exeter, and the National University of Singapore present a new, interactive study: Tinker Tots: A Citizen Science Project to Explore Ethical...
2
9
16
@briandavidearp
Brian D. Earp, Ph.D.
8 months
Note! Please share — New Citizen Science project explores ethical dilemmas in embryo selection | The Uehiro Oxford Institute
uehiro.ox.ac.uk
0
5
4
@briandavidearp
Brian D. Earp, Ph.D.
8 months
New pre-print ⚠️ Personalizing AI Art Boosts Credit, Not Beauty https://t.co/FLkOoqRf2B -- proud of student first-author @MaryamAli_Khan for leading this project. We extend previous ... 1/
1
5
9
@iyadrahwan
Iyad Rahwan | إياد رهوان
8 months
It was an honor to give this talk at @FundacionBKT , covering a decade of research from my lab at @medialab & @mpib_berlin @Max_Planck_CHM. I give highlights of my team's work on Machine Behavior, Machine Culture, and AI Ethics. https://t.co/VifTsMC1CZ
1
5
7
@PengLiu2023
Peng Liu
9 months
New paper out: "Machine creativity: Aversion, appreciation, or indifference?", published in Psychology of Aesthetics, Creativity, and the Arts ( https://t.co/Oh2aodWaXR). Thanks to my students and RA @YueyingChu, Yandong Zhao, and Siming Zhai
psycnet.apa.org
Machines such as artificial intelligence (AI) and algorithms are rising in various fields. Here, we explore whether laypeople harbor an aversion to machines in art. This aversion is sometimes...
0
1
2
@m_emilian
Emilian Mihailov
9 months
New pre-print! 🚨 Relational Norms for Human-AI Cooperation || Massive collaborative team of philosophers, psychologists, relationship scientists, and AI researchers, led by @briandavidearp. https://t.co/ugLfBdRTYe
0
5
20
@TahaYasseri
Taha Yasseri
11 months
We have one more postdoc opening for our Trinity-TU Dublin joint Centre for Sociology of Humans and Machines. Apply if interested in a wide range of topics, looking at sociological aspects of our cohabitation with AI agents! Deadline 17 Jan. Please share! https://t.co/98INORGU1F
0
10
17
@PengLiu2023
Peng Liu
1 year
Many thanks to Jean-François Bonnefon for organizing the special issue Morality and AI in Cognition! ( https://t.co/hWZNdeBicq). Honored to contribute to the SI.
0
0
0
@PengLiu2023
Peng Liu
1 year
Many studies examine whether and why people judge humans and machines differently, often relying on mind perception as an explanation. We propose the human-machine social relationship as an alternative account, linking relational morality to machine morality.
0
0
0