Julian De Freitas

@JulianDeFreitas

Followers: 741 · Following: 704 · Media: 23 · Statuses: 357

Professor at Harvard Business School.

Cambridge, MA
Joined March 2016
@JulianDeFreitas
Julian De Freitas
8 months
Yes, because people are “speciesist”—they discriminate against such robots merely because they are non-members of the human species. To our knowledge, this is the first paper to provide empirical evidence for anti-robot speciesism, originally posited by Bernd Schmitt. eom
@JulianDeFreitas
Julian De Freitas
8 months
Imagine a near future in which AI-powered humanoid service robots perfectly resemble human service providers in both mind and body. In such a setting, would people *still* exhibit AI aversion?
@JulianDeFreitas
Julian De Freitas
8 months
New pre-print: “Anti-Robot Speciesism” https://t.co/NM3Lh5sh6m, with Noah Castelo, Bernd Schmitt, and Miklos Sarvary.
@JulianDeFreitas
Julian De Freitas
11 months
Users view these relationships as closer than even those with a close human friend, and anticipate mourning the loss of this relationship more than the loss of any other technology. Pre-print: https://t.co/wjTMlllFC8 With Noah Castelo, Ahmet and Zeliha Uğuralp
arxiv.org
Can consumers form especially deep emotional bonds with AI and be vested in AI identities over time? We leverage a natural app-update event at Replika AI, a popular US-based AI companion, to shed...
@JulianDeFreitas
Julian De Freitas
11 months
Yes. Based on both archival data and studies involving real users, we find that disruptions to users' connections with AI companions trigger real mourning and deteriorated mental health.
@JulianDeFreitas
Julian De Freitas
11 months
People have raised concerns about AI companions, but are AI companion users truly forming human-level relationships with this technology?
@JulianDeFreitas
Julian De Freitas
11 months
In today's @HarvardBiz IdeaCast episode, I distill the core reasons "Why People Resist Embracing AI" and what organizations can do about it. Original article: https://t.co/gwe1aqlDRk Podcast link: https://t.co/TAnMSSpvXr Thanks to the IdeaCast team!
hbr.org
A conversation with HBS professor Julian De Freitas about overcoming five mental obstacles.
@JulianDeFreitas
Julian De Freitas
11 months
Why discussions of chatbot disclosure are only part of the puzzle: we also need to consider whether disclosed bots are humanized, and whether neutral bots should be the default unless the pros of using humanized bots outweigh the cons:
@NEJM_AI
NEJM AI
11 months
Should app makers disclose generative AI-powered chatbots? Some AI chatbots lead users into unintended, vulnerable situations. @JulianDeFreitas and @CohenProf explore how managers and regulators can address this challenge proactively. https://t.co/XK8k1u6Myz
@JulianDeFreitas
Julian De Freitas
1 year
Here's the CEO of Replika building on our research in her TED talk. Great to see!
@ekuyda
Eugenia Kuyda
1 year
My TED talk is live: https://t.co/uycs8u0e2i. Replika was the first consumer generative AI company - and pretty much created the AI companionship space. Yet I do believe that AI companions could potentially be the biggest existential threat to humanity.
@JulianDeFreitas
Julian De Freitas
1 year
Out today in NEJM AI, with @CohenProf: https://t.co/3P9ejJmj5R. We provide a framework for thinking about when to disclose the use of a chatbot in your application, as well as whether to humanize the chatbot if you disclose it.
ai.nejm.org
In the wake of recent advancements in generative artificial intelligence (AI), regulatory bodies are trying to keep pace. One key decision is whether to require app makers to disclose the use of ge...
@JulianDeFreitas
Julian De Freitas
1 year
Here is a really thorough write-up by PsyPost of our work on using humor as a window into generative AI bias. They interview the first (and first-time!) author, Roger Saumure. Enjoy: https://t.co/eeMvUHrOgF
psypost.org
A recent study revealed how humor in AI-generated images exposes surprising patterns of bias, shedding light on the ways artificial intelligence systems reflect societal stereotypes.
@JulianDeFreitas
Julian De Freitas
1 year
The paper can also be downloaded here: https://t.co/x0ksN12qKV
juliandefreitas.com
@JulianDeFreitas
Julian De Freitas
1 year
Is there anything companies can do to mitigate this risk? Yes: interventions that deflect attention away from the AV and toward the culpable parties in the accident eliminate the bias.
@JulianDeFreitas
Julian De Freitas
1 year
Why does this effect occur? AVs are highly salient, leading people to imagine counterfactuals in which the not-at-fault AV somehow avoids the accident. Yet people imagine a superhuman driver, not an average one, which means they hold AVs to an unfairly high standard.
@JulianDeFreitas
Julian De Freitas
1 year
Our paper suggests a major barrier could be insurance and liability. Why? People view AVs as partially liable for accidents *even when they are not at fault*. The implication: even if there are fewer accidents with AVs, AV companies will be liable for a larger share of them.
@JulianDeFreitas
Julian De Freitas
1 year
Most people think the major barriers to widespread adoption of autonomous vehicles (AVs) are engineering and legal challenges.
@JulianDeFreitas
Julian De Freitas
1 year
NEW PAPER in Journal of Consumer Psychology, with Xilin Zhou, Margherita Atzei, Shoshana Boardman, and Luigi Di Lillo: "Public Perception and Autonomous Vehicle Liability" https://t.co/NXVIJp9mu3
myscp.onlinelibrary.wiley.com
The deployment of autonomous vehicles (AVs) and the accompanying societal and economic benefits will greatly depend on how much liability AV firms will have to carry for accidents involving these...
@JulianDeFreitas
Julian De Freitas
1 year
We find evidence for a phenomenon in which LLMs are less biased against politically sensitive groups (e.g., those defined by race and gender) but more biased against less politically sensitive groups (e.g., older people, visually impaired people, and people with high body weight).
@JulianDeFreitas
Julian De Freitas
1 year
New Paper in Scientific Reports, with Roger Saumure and Stefano Puntoni: “Humor as a window into generative AI bias” https://t.co/7EH1WkaepS
nature.com
Scientific Reports - Humor as a window into generative AI bias
@JulianDeFreitas
Julian De Freitas
1 year
Especially in health spaces, we recommend either mandating or strongly recommending that neutral (non-humanized) chatbots be the default, and that deviations from that default be justified. -end