
David G. Rand @dgrand.bsky.social
@DG_Rand
Followers 15K · Following 16K · Media 648 · Statuses 11K
Prof @MIT - I've left X, you can find me on BlueSky at @dgrand.bsky.social
Cambridge, MA
Joined June 2012
🚨Out in Science!🚨 Conspiracy beliefs famously resist correction, ya? WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%! -Lasts over 2mo -Works on entrenched beliefs -Tailored AI response rebuts specific evidence offered by believers https://t.co/3Rg79Cx5id 1/
37 · 176 · 551
Back to X just to post new PNAS paper for @elonmusk: we find @CommunityNotes flags 2.3x more Republicans than Democrats for misleading posts! The issue is Reps sharing misinformation, not fact-checker bias... https://t.co/qnVNWCUHPt
6 · 36 · 123
Many thanks to lead author @captaineco_fr, who does amazing work on Community Notes and other topics, and the always-wonderful coauthor Mohsen Mosleh. For more of my group's work on misinformation, check out this doc:
docs.google.com
Papers related to misinformation from David Rand and Gordon Pennycook’s research team Key papers The Psychology of Fake News TiCS 2021 [X thread] [15 minute video summary] Durably reducing conspiracy...
0 · 0 · 13
CONCLUSION:
*Clear partisan diff in misinfo sharing not due to political bias on the part of fact-checkers or academics
*Undercuts logic offered by Musk+Zuckerberg for eliminating fact-checkers
*Platforms should expect more Rep-sanctioning even when using Community Notes!
3 · 0 · 3
We also address two poss confounds:
*Is it just that more Reps are using X than Dems, throwing off base rates? No, it's the opposite
*Is it just that more Dems use CN? No, b/c rated-helpful notes + bridging algorithm require people from both sides rating the notes as helpful
2 · 0 · 3
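A toy sketch of the base-rate point above: what matters is the flagging *rate* per post shared, not raw counts, so a larger user base on one side can't by itself produce the gap. The numbers below are made up for illustration, not figures from the paper.

```python
# Toy illustration of the base-rate check: compare flagging *rates*
# (flagged posts per post shared), not raw counts, so party size alone
# can't drive the gap. All numbers are hypothetical, not the paper's data.

posts_shared = {"Rep": 50_000, "Dem": 80_000}   # hypothetical activity levels
posts_flagged = {"Rep": 700, "Dem": 300}        # hypothetical helpful-note flags

for party in ("Rep", "Dem"):
    rate = posts_flagged[party] / posts_shared[party]
    print(f"{party}: {rate:.2%} of posts flagged")

ratio = (posts_flagged["Rep"] / posts_shared["Rep"]) / (
    posts_flagged["Dem"] / posts_shared["Dem"]
)
print(f"Rep/Dem flagging-rate ratio: {ratio:.1f}x")  # ~3.7x in this toy example
```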
Across all English notes 1/23-6/24: 1.5x more notes proposed on tweets written by Reps than Dems. The partisan diff is MUCH bigger when restricting to "helpful" (ie ~unbiased) notes: 70% on Reps, 30% on Dems. The "vox populi" has concluded that Reps share more misinfo than Dems!
1 · 0 · 4
Many studies find Reps do share more misinfo. But those studies could be biased in evaluating what is misinfo. So we analyze Community Notes, where users flag posts & X decides what is misleading using a “bridging algorithm” requiring agreement from users who typically disagree
1 · 0 · 3
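A rough sketch of the bridging intuition from the tweet above: a note only counts as "helpful" if raters who usually disagree both rate it helpful. This is a deliberately simplified stand-in, not X's actual Community Notes scorer (which uses matrix factorization over the full rater-note matrix); the names and ratings below are hypothetical.

```python
# Toy sketch of the "bridging" idea: a note is only treated as helpful if
# raters from BOTH sides of the usual divide rate it helpful.
# NOT the real Community Notes algorithm; data is made up.

from collections import defaultdict

# ratings[note_id] = list of (rater_leaning, rated_helpful) pairs -- hypothetical
ratings = {
    "note_1": [("left", True), ("right", True), ("right", True)],
    "note_2": [("left", True), ("left", True), ("left", True)],  # one-sided support
    "note_3": [("left", False), ("right", True)],
}

def bridged_helpful(note_ratings, min_per_side=1):
    """Helpful only if at least `min_per_side` raters on each side agree."""
    helpful_by_side = defaultdict(int)
    for leaning, helpful in note_ratings:
        if helpful:
            helpful_by_side[leaning] += 1
    return all(helpful_by_side[side] >= min_per_side for side in ("left", "right"))

for note_id, note_ratings in ratings.items():
    print(note_id, "helpful:", bridged_helpful(note_ratings))
# note_1 helpful: True   (cross-partisan agreement)
# note_2 helpful: False  (only one side rated it helpful)
# note_3 helpful: False
```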
Accusations of bias against Reps (eg Trump+others suspensions) drove @elonmusk to buy Twitter & gut fact-checking in favor of @CommunityNotes. @finkd did same at @Meta. But greater sanctioning of Reps could just be the result of Reps sharing more misinfo
🚨Out in Nature!🚨 Many (eg Trump JimJordan @elonmusk) have accused social media of anti-conservative bias Is this accurate? We test empirically - and it's more complicated than you might think: conservatives ARE suspended more, but also share more misinfo https://t.co/xVcTUDdCEx
1 · 0 · 5
Today Meta announced it's getting rid of fact-checking to avoid bias. But is there actually evidence of anti-conservative bias? Not really - conservatives get suspended more but also share more low quality news (as judged by politically balanced crowds) ⏬
🚨Out in Nature!🚨 Many (eg Trump JimJordan @elonmusk) have accused social media of anti-conservative bias Is this accurate? We test empirically - and it's more complicated than you might think: conservatives ARE suspended more, but also share more misinfo https://t.co/xVcTUDdCEx
2 · 5 · 10
If you had to pick *1* measure of racial attitudes to predict white Americans' Trump support, we find opposition to anti-racism (agreement w/ statements like "People these days can’t speak their minds without someone accusing them of racism") is most correlated
14 · 65 · 249
NEW: According to Oxford and MIT researchers @_mohsen_m, @DG_Rand, and @cameron_martel_, social media users are more likely to follow and engage with like-minded accounts that demonstrate similar political views. https://t.co/sRHlGw3UBa
oii.ox.ac.uk
According to Oxford and MIT researchers, social media users are more likely to follow and engage with like-minded accounts that demonstrate similar political views – but to what extent is this driven...
1 · 11 · 20
This is a very cool paper by two of my fav people! Many studies have shown that more reflective people believe in God less. This paper shows that *changing* belief (in either direction) is ALSO associated with reflectiveness. Higher CRT = more willing to update beliefs
"On the role of analytic thinking in religious belief change: Evidence from over 50,000 participants in 16 countries" ✍️Michael N Stagnaro & @GordPennycook Key finding: Reflection is associated with belief change independent of the direction of change. https://t.co/GhAs9qEiJz
2 · 8 · 48
"On the role of analytic thinking in religious belief change: Evidence from over 50,000 participants in 16 countries" ✍️Michael N Stagnaro & @GordPennycook Key finding: Reflection is associated with belief change independent of the direction of change. https://t.co/GhAs9qEiJz
0 · 10 · 37
Are AIs the misinformation machines? Or are we humans the originals? In our latest episode of the Behavioral Design Podcast, @GordPennycook schools @SamuelSalzer and me on all things BS and misinfo. https://t.co/ke0jJ25L8m
creators.spotify.com
The Role of Misinformation and AI in the US Election with Gordon Pennycook In this episode of the Behavioral Design Podcast, hosts Aline and Samuel explore the complex world of misinformation in the...
0 · 5 · 16
Feel like passing the time while anxiously waiting for the election? I was on 3 fun podcasts recently #1: You Are Not So Smart with @davidmcraney: https://t.co/pWlDEzIagt Focus is on our recent AI-conspiracy belief paper. Got to join with @DG_Rand & @tomstello_!
youarenotsosmart.com
Our guests in this episode are Thomas H. Costello at American University, Gordon Pennycook at Cornell University, and David G. Rand at MIT who created Debunkbot, a GPT-powered, large language model…
2 · 4 · 36
An interview with the scientists who created Debunkbot, an AI/LLM that reliably reduces belief in conspiracy theories via back-and-forth chat: https://t.co/Ma0OInWAI1 ( featuring @tomstello_ - @DG_Rand - @GordPennycook )
youarenotsosmart.com
Our guests in this episode are Thomas H. Costello at American University, Gordon Pennycook at Cornell University, and David G. Rand at MIT who created Debunkbot, a GPT-powered, large language model…
2 · 10 · 27
New episode — an interview with the scientists who created Debunkbot, an AI that reliably reduces belief in conspiracy theories via back-and-forth chat, no matter how far down the rabbit hole a person may be: https://t.co/COca6xKrnb (try it at https://t.co/pBRyMlzNHa)
podcasts.apple.com
Science Podcast · Updated Semimonthly · You Are Not So Smart is a show about psychology that celebrates science and self delusion. In each episode, we explore what we've learned so far about reason...
1 · 8 · 37
So what have I learnt about #misinformation research? I tried to condense it into a list of the 5 biggest challenges the field faces. Second story in my package of stories about misinformation research is up here (and thread to come): https://t.co/08QKJ03Nma
science.org
The burgeoning field is still grappling with fundamental problems, from getting access to data to defining 'misinformation' in the first place
9 · 73 · 170
Great article from @kakape about the five biggest challenges facing misinformation researchers highlighting some of our recent work! Agree with pretty much all these. https://t.co/JOtSi6Fk8a
science.org
The burgeoning field is still grappling with fundamental problems, from getting access to data to defining 'misinformation' in the first place
3 · 16 · 53
5 biggest challenges facing misinformation researchers https://t.co/8V2acFbPx8
1) Defining it
2) Everything is political
3) Harms hard to pin down
4) Companies own the data
5) The problem is global
I would add (related to #3), what's the direction of causation? Do people use
science.org
The burgeoning field is still grappling with fundamental problems, from getting access to data to defining 'misinformation' in the first place
7 · 54 · 98