We have updated our pre-print on using GPT for text analysis.
Our most exciting new finding: GPT-4 Turbo (the updated GPT-4 model released this January) is even better than the prior version of GPT-4 at detecting psychological constructs in text:
🚨 New Working Paper 🚨
Is GPT the future of psychological text analysis?
We test GPT’s (3.5 and 4) ability to detect psychological constructs in text across 12 languages, finding that it’s superior to many existing methods of automated text analysis
🚨 Now out in @PNASNews 🚨
Analyzing social media posts from news accounts and politicians (n = 2,730,215), we found that the biggest predictor of "virality" (out of all predictors we measured) was whether a social media post was about one's outgroup.
Have you shared fake news on Twitter? I designed an app that will tell you!
It will also tell you how many right-leaning, left-leaning, or hyper-partisan/low-quality news sites you have shared.
Try it out here, and share your score:
🚨Out now in @NatureHumBehav 🚨
Across 4 experiments (n = 3,364), we found that motivating people to be accurate via a small financial incentive:
-Improved people’s discernment between true and false news
-Reduced the partisan divide in belief
Now out in @PsychScience:
Our meta-analysis of all publicly available data on the "accuracy nudge" intervention found that accuracy nudges have little to no effect for US conservatives and Republicans. (1/9)
🚨 New paper in @PNASNexus 🚨
We found that following, retweeting, or favoriting low-quality news sources – and being central in a US conservative Twitter network – is associated with vaccine hesitancy (n = 2,064).
Now In Press at Journal of Experimental Social Psychology!
With @zakijam and @leorhackel, we show across three field experiments (n = 1622) that seeing live theatre improves empathy, changes socio-political attitudes, and leads to prosocial behavior.
🚨New working paper 🚨
We found that people (N = 511) across the political spectrum think that divisive content, misinformation, and moral outrage all go “viral” on social media – but do not think that this type of content *should* go viral.
An experience sampling study found that Twitter/X use is related to decreases in well-being and increases in feelings of outrage. But, in-person social interaction is related to increases in well-being.
New Paper!! Led by the inspiring & amazing @vicoldemburgo, with @felixckc
Twitter (X) use predicts substantial changes in well-being, polarization, sense of belonging, and outrage
/1
Now out in Perspectives on @PsychScience
People engage with divisive and negative content online. But, does this mean that people *like* divisive content? No! We find that people across the political spectrum do not want divisive content to spread.
Thank you @Sander_vdLinden for being such an incredible supervisor!!
And Jamie Druckman & @leedewit for all the thoughtful questions in the PhD Viva.
& Congrats @RakoenMaertens!!
I am Dr. Steve now 🤓
These results are troubling in an attention economy where the social media business model is based on keeping us engaged in order to sell advertising.
This business model may be creating perverse incentives for polarizing content, rewarding people for "dunking" on the outgroup.
I wrote about how metaphors shape our opinions for @Quartz, featuring research by @GeorgeLakoff, @leraboroditsky, and @SimoneSchnall.
"Words matter, and if we are careful with our words, we can use them to make a positive impact."
Are you a US college student who uses Twitter?
You may be eligible for two studies we are conducting. You will earn a $4 Amazon gift card for one study and an $18.50 Amazon gift card for the other.
See if you are eligible here:
And please share this!
.@zakijam, @leorhackel, and I have a new article in the @latimes about our study on the impact of live theatre on empathy and generosity, and why we need to support theatre when it becomes safe to attend it again.
In other words, out-group negativity was a stronger driver of virality than in-group positivity.
Indeed, the “angry” reaction was the most commonly used reaction out of all six of Facebook’s reactions in our datasets.
Our new paper "Attending live theatre improves empathy, changes attitudes, and leads to pro-social behavior" with @zakijam and @leorhackel is now available & open access.
Read it here:
.@DanMirea4 and I are hoping to film a tutorial about using the GPT API for text analysis very soon. For now, feel free to use our sample code -- it's super easy to start using GPT with code, and it's *much* more powerful than just using it within the chat interface.
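A minimal sketch of this kind of GPT-based annotation workflow in Python, using the official `openai` package. The prompt wording, function names, and model choice below are illustrative assumptions, not the actual sample code mentioned above:

```python
def build_prompt(text: str) -> str:
    """Wrap a text in a one-word sentiment-classification instruction."""
    return (
        "Classify the sentiment of the following text as positive, "
        "negative, or neutral. Reply with one word only.\n\n"
        f"Text: {text}"
    )


def classify_sentiment(text: str, model: str = "gpt-4") -> str:
    """Send one text to the Chat Completions API and return the label.

    Requires `pip install openai` and an OPENAI_API_KEY environment variable.
    """
    from openai import OpenAI  # imported lazily so the rest of the file is stdlib-only

    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(text)}],
        temperature=0,  # keep labels as deterministic as possible
    )
    return response.choices[0].message.content.strip().lower()


# Prompts are plain strings, so you can inspect them before spending tokens:
print(build_prompt("I loved this talk!"))
```

Looping `classify_sentiment` over a column of texts is all it takes to annotate a whole corpus, which is what makes the API route so much more powerful than the chat interface.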
We're launching a global field experiment to try to figure out the causal impact of social media on various psychological outcomes (e.g., polarization, well-being, etc.) across many countries around the world.
Let us know if you are interested in collaborating with us:
🚨Call for collaborators 🚨
We are launching a global study to test the causal impact of social media on psychological outcomes (mental health, polarization) around the world.
If you want to collaborate with us, fill out this form:
Please retweet!
Cool new paper about the sharing of misinformation/conspiracy theories online:
"People are willing to trade-off accuracy for social connections when social rewards are large enough."
Specifically, each additional word about the opposing party (e.g., “Democrat,” “Leftist,” or “Biden” if the post was coming from a Republican) in a social media post increased the odds of that post being shared by 67%.
A new study shows that watching TikTok debunking videos (in many cases) seems to work!
It was also cool to see fellow Psychology TikTokers @Dr_Inna and @dr_brein in the stimuli -- maybe a sign that more scientists should get on TikTok and help fight misinformation :)
New paper out today in HKS Misinfo Review!
We ask whether citizen-led debunking videos are effective at reducing the impact of misinformation.
The answer?!?... sort of! 🧵
with Puneet Bhargava, Katie MacDonald, @ChristieANewton & @hauselin
I'd encourage everyone using the GPT API for text analysis to try the updated GPT models that were released just a week ago (). They make the API much faster, cheaper, and less buggy.
It blows my mind how fast and cheap ChatGPT is for sentiment analysis.
Currently having 300k comments sentiment coded for something pretty complex (“does the comment positively reference consuming meat or animal products”) and it’ll only cost ~$30 and will finish in a day
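The cost arithmetic behind an estimate like that is simple tokens-times-price multiplication. A hedged sketch (the per-1k-token prices and token counts below are made-up placeholders, not current OpenAI pricing -- check the pricing page before budgeting):

```python
def estimate_cost(n_texts: int, prompt_tokens: float, output_tokens: float,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Rough API cost: (tokens per text) x (number of texts) x (price per 1k tokens)."""
    input_cost = n_texts * prompt_tokens / 1000 * price_in_per_1k
    output_cost = n_texts * output_tokens / 1000 * price_out_per_1k
    return input_cost + output_cost


# e.g. 300k short comments, ~60 prompt tokens and ~5 output tokens each,
# at hypothetical prices of $0.0005 (input) and $0.0015 (output) per 1k tokens:
print(f"~${estimate_cost(300_000, 60, 5, 0.0005, 0.0015):.2f}")  # ~$11.25
```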
If polarizing content is bad for Facebook's business (besides just hurting their image), why do they shut down solutions to reduce polarization when they find out these solutions decrease the amount of times people open Facebook (as described below)?
How will the Facebook outage affect people?
This study found that paying people to delete their Facebook accounts reduced political polarization and led to greater subjective well-being.
Negative and moral-emotional words also slightly increased the odds of a post being shared, positive words slightly decreased the odds, and in-group words had no effect.
Out-group words were by far the strongest predictor of virality that we measured.
As an illustration of these perverse incentives, Facebook recently declined to implement features to reduce the amount of harmful content in the news feed because these features also made people open Facebook less.
Falsehoods don't always spread further than the truth. In fact, the opposite pattern is seen on Reddit, where true fact-checked posts have the most engagement.
Falsehoods are common on the social media platform #Reddit, but in a study of thousands of Reddit posts with millions of comments, true, fact-checked posts had the most engagement. In PNAS Nexus:
New op-ed in the @BostonGlobe:
@jayvanbavel and I discuss why polarizing content goes viral, even though most people say they don't *want* it to go viral -- a phenomenon we call the "paradox of social media virality"
Thank you @Freakonomics for having me on the podcast to discuss how social media amplifies out-group negativity!
Here are some quick highlights from the episode:
And if Facebook is truly interested in whether social media causes polarization or amplifies harmful content, why do they shut down internal research about Facebook's role in polarization?
Why does out-group animosity go viral, even though most people say they don't want it to go viral?
@g_heltzel finds that extreme partisans (who engage more with politicians online) prefer tweets that express out-group animosity, but moderates don't:
Interesting that religiously unaffiliated people who believe in “nothing in particular” are the most likely group to hold “new age” beliefs — believing in things like astrology, reincarnation, or psychics.
Overall, roughly six-in-ten American adults accept at least one of the following New Age beliefs: reincarnation, astrology, psychics and the presence of spiritual energy in physical objects like mountains or trees
Our adversarial collaboration is out!
We find evidence for both teams' pre-registered hypotheses:
1. Accuracy nudges have a small positive impact on sharing discernment across the political spectrum
2. But, they are slightly less effective for those on the political right
🚨Out in Psych Sci🚨
Prompting accuracy can increase news sharing quality – but is this true for those on the political right?
Our ADVERSARIAL COLLABORATION finds:
➡️Acc prompts increase sharing quality of Republicans
➡️Some evidence of greater efficacy for those on left v right
Do you want to use GPT for text analysis in R?
@steverathje2 and I filmed a 15-min tutorial on using the GPT API to perform text analysis tasks (e.g. sentiment analysis or emotion detection) in R.
While dictionary methods (such as LIWC) are very widely used in psychology, GPT-4 is vastly superior at detecting manually-annotated sentiment and discrete emotions (r = 0.66-0.75) as compared to dictionary methods (r = 0.20-0.30)
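Benchmarks like these boil down to correlating model scores with human annotations. A self-contained sketch of the Pearson r computation (the ratings below are made-up toy values, not data from the study):

```python
from statistics import mean


def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5


# Toy example: human sentiment ratings vs. model ratings for five texts.
human = [1, 2, 3, 4, 5]
model = [1.2, 1.9, 3.4, 3.8, 5.1]
print(round(pearson_r(human, model), 2))
```

In practice you would run this over a held-out set of manually annotated texts, once per construct (sentiment, each discrete emotion, etc.), to get a table of validity correlations like the one summarized above.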
In our latest paper, we argue that social media accelerates existing moral dynamics – amplifying outrage, status seeking, and intergroup conflict, as well as constructive facets of morality, such as social support, pro-sociality, and collective action.
'As long as you keep repeating something, it doesn't matter what you say' — Here’s how Donald Trump used the illusory truth effect to alter public opinion and how the media can better tackle false claims without amplifying them
(with @steverathje2)
I've been reading "Foolproof" by @Sander_vdLinden all weekend, which is out today in the US. It's a super well-written deep-dive into the psychology of misinformation. Highly recommended!
My application to the @X API through the Digital Services Act was also denied, after about a five-month process featuring delays and @X repeatedly asking for follow-up information. It seems like this is a common experience. Has anyone gotten data from @X through a #DSA request?
My application for researcher access to the @X #API has been denied. 100% of the colleagues I talked to also got it denied, but it's a small sample. Other experiences? #DSA
🚨New paper in @ScienceAdvances! In 7 studies (N=29,096), including an ecologically realistic field study on @YouTube, we find that #prebunking videos confer strong resistance against 5 manipulation techniques common in #misinformation
1/12
Large-language models show in-group bias, producing more positive sentences when prompted with words such as "we are" as opposed to "they are"
These biases can be increased if models are fine-tuned with partisan tweets!
They can also be decreased through further fine-tuning
🚨New Preprint: "Generative language models exhibit social identity biases"
Did you know LLMs mirror human-like biases, showing human-levels of ingroup solidarity & outgroup hostility? A thread:
📄
1/7
“Instead of believing that bad things happen for no reason, enemies give us a sense of control, allowing us to attribute bad things to a clear cause that can be understood, contained, and controlled.”
New post for my @PsychToday blog #WordsMatter
Out-group posts were very likely to receive “angry” reactions on Facebook, as well as “haha” reactions (likely indicating mockery), comments, and shares.
Facebook also argues in their response that its platform reflects the "good, the bad, and the ugly" of society.
But, that is not true -- it amplifies the bad and the ugly -- because those things keep us engaged more than the good.
As described below: Facebook is not a neutral public square, or a simple mirror image of society -- it is a slot machine, trying to capture your attention. Negativity, outrage, and dunking on out-groups will be amplified because they capture attention.
For instance, an experiment found that being randomly assigned to deactivate Facebook for four weeks substantially reduced polarization among United States participants:
Being randomly assigned to de-activate Facebook for 4 weeks increased well being and reduced political polarization
Turning off Facebook accounted for a 42% reduction in the increase in polarization that had happened over the past two decades.
Posts about the ingroup received much less overall engagement, although they were slightly more likely to receive “love” and “like” reactions, reflecting in-group favoritism.
In our recent @PNASNews paper, we suggested that Facebook's algorithm change in 2018, which gave more weight to reactions/comments, was rewarding posts expressing out-group animosity.
Recent reporting from the @WSJ finds that @Facebook was aware of this issue.
Are you considering a PhD in psychology? Join @NYUPsych for an online panel discussion with faculty & students to learn more about applying and what faculty are researching now.
Learn more & register here:
We do agree with Facebook that our op-ed is not specifically about extremism -- if you read past the headline (which we did not choose), we instead describe in detail our new @PNASNews paper about how social media amplifies out-group animosity:
Replicating prior work, we found that accuracy nudges significantly improved the quality of articles shared for Democrats in nearly all samples, but no significant effects were found for Republicans in *any* of the samples.
However, as described in today's @nytimes: "Facebook’s executives were more worried about fixing the perception that Facebook was amplifying harmful content than figuring out whether it actually was amplifying harmful content."
This out-group effect was not moderated by political orientation or by social media platform. However, stronger effects were found among politicians than in the media.
We agree we need more research on social media and polarization and that this is a complex topic, but if Facebook is truly interested in social media's role in polarization, they would make data more accessible to researchers and not shut down internal research on this topic.
We have a new pre-print reviewing why people believe and spread (mis)information in the digital age.
We discuss:
1) the psychology behind misinformation
2) potential solutions for this problem
3) directions for future research
Do people know of any good publicly available datasets of social media posts that have been manually annotated for specific emotions (such as sentiment, various discrete emotions, toxicity, etc.)?
Are you a researcher who studies social media?
Please take our 5-10 minute survey about perceptions of social media algorithms:
We will compare “expert” perceptions to the perceptions of a representative sample of Americans.
People in the path of the 2017 solar eclipse used more pro-social, affiliative, collective, and awe-related language on Twitter.
Cool new paper in @PsychScience by @itsnickyjones et al.:
People who choose to donate to the most cost-effective charities are perceived as less moral than those who make choices based on empathy.
@AndrsMontealegr @peez
Calling our analysis "simplistic," Facebook also cites a number of studies indicating that social media doesn't play a role in polarization, most of which we cite in our @PNASNews paper. However, they ignored evidence suggesting it does.
You can also see how many fake/low-quality news sites other people with public Twitter handles have shared.
I calculated the fake news "scores" of all US congress-members. Use the app to see which congress-member shares the most low-quality news.
This fits with what we found in our research: negative posts about the out-group tended to receive a lot of angry reactions.
Yet, Facebook's algorithm rated "angry" reactions as 5x more valuable than likes.
Facebook secretly weighted reaction emojis, including "angry," as 5x the value of "likes"--over the integrity team's warnings.
We wrote about the obscure, often arbitrary, human decisions that shape Facebook's algorithm and how we all interact online:
New pre-print!
Across 3 experiments, we find that while fact-checks of misinformation work on average, they are 52% more likely to backfire when they come from a political outgroup member (& 10% more likely to backfire among political conservatives!) ...see 🧵👇
New study (to be published in PNAS) finds that people who are the most overconfident about their ability to identify fake news are also most likely to fall for it.
I am now the 1480th Steve on the list of scientists named Steve who believe in evolution, aka "Project Steve":
I first wrote to "Project Steve" 4 years ago asking to be added to the list, but they told me I couldn't be added until I got a PhD!
Come check out our #SPSP2021 symposium tomorrow on Psychology in the Social Media Era!
@asherjdm, @VParks, Anandi Ehman and I will be presenting at 11:45am EST.
Want to go viral on @tiktok_us?
Hear from our resident TikTok celebrity @steverathje2 on the importance of #science communication & tips for making effective, informative, & engaging content:
👋NEW 📜🧵
With @ylelkes & Sam Wolken, I'm excited to release a new paper "The Rise of and Demand for Identity-Oriented Media Coverage."
Conditionally accepted @AJPS_Editor
URL:
1/
@DG_Rand @william__brady Uncivil tweets from congress members get more likes/retweets (the PerspectiveAPI "toxicity" classifier was used to measure incivility here)
@RobbWiller Since US conservatives/Republicans tend to share substantially more misinformation than liberals/Democrats (see ), the accuracy nudge intervention may have limited effectiveness for the population most likely to spread misinformation.
*New Review* How Can Psychological Science Help Counter the Spread of Fake News?
We ask: Do interventions work? What about long-term effects? How to measure fake news susceptibility? Role of sources? Motivation vs. inattention? Policy insights?
Now out:
However, incentivizing people to identify articles that would be liked by their political in-group before they rated the accuracy of a headline *decreased* accuracy.
Thus, social goals (which are highly salient on social media) appear to interfere with accuracy goals.