Jennifer Allen Profile
Jennifer Allen

@_JenAllen

Followers: 752
Following: 326
Media: 18
Statuses: 90

PhD Student, MIT Sloan

Joined September 2019
@_JenAllen
Jennifer Allen
2 years
🚨Paper alert!🚨 Birds of a Feather Don't Fact-check Each Other #CHI2022 On Twitter's crowdsourced fact-checking program @birdwatch, partisanship is king: users mostly flag counterpartisans' tweets as misleading & rate their notes as unhelpful 1/
6
128
458
@_JenAllen
Jennifer Allen
13 days
New in @ScienceMagazine: What FB news drove COVID vax hesitancy in the US? False misinfo? Not so much: We find unflagged ‘vax-skeptical’ news had a *46X larger* impact than flagged misinfo. Why? Flagged misinfo had a bigger impact when seen, but ~100x fewer views
11
118
336
@_JenAllen
Jennifer Allen
1 month
I'm excited to share that I'll be joining @NYUStern as an Assistant Professor in the Technology, Operations, and Statistics group and faculty affiliate of @CSMaP_NYU starting in 2025, after a post-doc at Penn with @csspenn !
@CSMaP_NYU
NYU's Center for Social Media and Politics
1 month
📢 CSMaP is thrilled to announce that we’ve recruited two new core faculty members — @_JenAllen and @cbarrie — to NYU to join the Center and advance our research agenda! Please join us in welcoming Jenny and Chris! 🥳👏 Our announcement:
2
5
40
25
10
180
@_JenAllen
Jennifer Allen
1 year
Our paper on the effects of digital advertising on voter turnout is out today in NHB! This paper is the result of a truly herculean effort from so many people (running and analyzing a 2mil person field experiment on FB is *hard*) -- check it out!
@SolomonMg
Sol Messing
1 year
🚨MASSIVE NEW STUDY ON DIGITAL/FB POLITICAL AD EFFECTS 🚨 in @NatureHumBehav from Minali Aggarwal, @_JenAllen , @aecoppock , @dfrankow , Kelly Zhang, @jimmyeatcarbs , Andrew Beasly, Harry Hantman, Sylvan Zheng! Ungated:
9
129
386
2
24
81
@_JenAllen
Jennifer Allen
4 years
New working paper by me, @AaArechar @GordPennycook and @DG_Rand !
@DG_Rand
David G. Rand
4 years
🚨Working paper alert!🚨 "Scaling up fact-checking using the wisdom of crowds" We find that 10 laypeople rating just headlines match the performance of professional fact-checkers researching full articles - using a set of URLs flagged by an internal FB algorithm
12
85
198
0
9
39
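To make the crowd-rating idea in the working paper quoted above concrete, here is a minimal sketch of how layperson headline ratings could be aggregated and compared against professional fact-checkers. The file and column names (ratings.csv, lay_rating, fc_rating) are hypothetical placeholders, not the paper's actual data or pipeline.

```python
# Hedged sketch: average ~10 lay ratings per headline, then check how well
# the crowd average tracks professional fact-checker ratings.
import pandas as pd

df = pd.read_csv("ratings.csv")                           # one row per (rater, headline)
crowd = df.groupby("headline_id")["lay_rating"].mean()    # crowd average per headline

fc = (pd.read_csv("factchecker_ratings.csv")              # hypothetical fact-checker file
        .set_index("headline_id")["fc_rating"])

aligned = pd.concat([crowd, fc], axis=1).dropna()
print(aligned.corr(method="pearson"))                     # crowd vs. fact-checker agreement
```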
@_JenAllen
Jennifer Allen
13 days
Bonus tweet -- in a funny bit of timing, this paper comes out the same day as my graduation! (Pic feat. my very offline parents)
1
2
30
@_JenAllen
Jennifer Allen
3 years
New paper in HKS Misinfo Review! We show how thresholding on 100 public shares introduces bias in Facebook's @SocSciOne dataset. tl;dr -- the threshold causes the percentage of clicks to mainstream and fake news on Facebook to be overestimated by 2-4X.
@DavMicRot
David Rothschild 🌻
3 years
In this paper w/ @_JenAllen @duncanjwatts @markusmobius we explore its 1st major data release, testing how key outcomes researchers want to test (eg fake news) change due to introduction of differential privacy & censoring to URLs w/ 100+ public shares 2/8
1
24
64
0
10
19
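The thresholding bias described above can be illustrated with a toy simulation: if news URLs are shared publicly far more often than non-news URLs, keeping only URLs with 100+ public shares inflates the apparent share of clicks going to news. All numbers below are made up for illustration, not drawn from the SS1 data.

```python
# Toy simulation (assumptions mine) of how a ">=100 public shares" threshold
# can overstate the fraction of clicks going to news.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_news = rng.random(n) < 0.05                               # 5% of URLs are news
# News URLs are publicly shared far more often than e.g. retail links
shares = rng.negative_binomial(1, np.where(is_news, 0.005, 0.05))
clicks = rng.poisson(np.where(is_news, 50, 20), size=n)

def news_click_share(mask):
    return clicks[mask & is_news].sum() / clicks[mask].sum()

print("true news click share:      ", news_click_share(np.ones(n, bool)))
print("share after 100+ threshold: ", news_click_share(shares >= 100))
```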
@_JenAllen
Jennifer Allen
2 years
Data is very prelim., but suggestive that desire to call out *actually bad* content by the opposing party could be an important motivator driving people to participate in Birdwatch in the first place. It's not all trolling! 🥳 13/
1
2
15
@_JenAllen
Jennifer Allen
13 days
So misinfo was persuasive – but did people see it? Not so much! During Q1 2021, URLs flagged by FB’s fact-checking program received ~9 million views -- just 0.3% of vax-related views. Similarly, links to low-quality news sites accounted for just 5.1% of views
1
2
15
@_JenAllen
Jennifer Allen
2 years
Results were stark! Ds were 2X, and Rs were 3X, more likely to flag a tweet by a counterpartisan. Even though the majority of tweets (90%!) were classified as misleading, bwatchers labeled many more co-partisan tweets as NOT misleading than counterpartisan ones 10/
1
2
12
@_JenAllen
Jennifer Allen
2 years
Some background: in Jan 2021, Twitter announced a new crowdsourced fact-checking project, @birdwatch , and invited users to apply to become ~official~ keyboard warriors and fight misinfo on the platform 2/
@Support
Support
3 years
🐦 Today we’re introducing @Birdwatch , a community-driven approach to addressing misleading information. And we want your help. (1/3)
8K
5K
16K
1
1
11
@_JenAllen
Jennifer Allen
2 years
We're thankful to have received a Best Paper Honorable Mention from #CHI2022, and @_JenAllen & @Cameron_Martel_ will be at the conference - so plz reach out if you'll also be in attendance & would like to chat! 15/
1
1
11
@_JenAllen
Jennifer Allen
2 years
So is BWatch doomed? Maybe not! We had 2 fact-checkers vet a subset of the tweets, and found that among the 57 tweets flagged as misleading by a majority of bwatchers, 86% were also rated as misleading by fact-checkers. 12/
3
1
10
@_JenAllen
Jennifer Allen
4 years
@DavMicRot
David Rothschild 🌻
4 years
New paper by me, @duncanjwatts , Jennifer Allen, Baird Howland, Markus Mobius "Evaluating the fake news problem at the scale of the information ecosystem" @ScienceAdvances Three key points (1) Most Americans consume very little news ...
10
55
133
1
0
9
@_JenAllen
Jennifer Allen
4 years
@DG_Rand @andyguess @kmmunger @deaneckles @SocSciOne We’ve done some work comparing the prevalence of fake news in the SS1 dataset to a smaller dataset that tracks individual level referral traffic from Facebook and found that SS1 overestimates the amount of fake news, likely due to the 100 share threshold.
2
1
9
@_JenAllen
Jennifer Allen
13 days
People mostly saw links from mainstream/reputable sites – but some of this content was misleading + hesitancy-inducing. Mainstream stories covering rare deaths following vaccination attracted *massive* viewership on FB during the initial vaccine rollout, and weren’t flagged by FB
1
1
9
@_JenAllen
Jennifer Allen
2 years
We ran a random forest model predicting tweet misleadingness and found that models using just content features did barely better than chance. But models w/ just political features of the ppl involved had an AUC of almost .85, w/ little add'l benefit from more features 7/
1
2
9
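A hedged sketch of the kind of AUC comparison described in the tweet above, using scikit-learn. The input file and feature names are hypothetical stand-ins for the paper's content and political/context features, not the authors' actual variables.

```python
# Compare a random forest trained on content features vs. political features
# at predicting whether a tweet was flagged as misleading (illustrative only).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("birdwatch_notes.csv")    # hypothetical file
y = df["labeled_misleading"]

content_cols = ["tweet_length", "has_url", "has_media", "toxicity_score"]
political_cols = ["tweeter_partisanship", "noter_partisanship", "counter_partisan"]

for name, cols in [("content only", content_cols), ("political only", political_cols)]:
    auc = cross_val_score(
        RandomForestClassifier(n_estimators=500, random_state=0),
        df[cols], y, cv=5, scoring="roc_auc",
    ).mean()
    print(f"{name:15s} AUC = {auc:.2f}")
```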
@_JenAllen
Jennifer Allen
2 years
Accepted users could participate in 2 ways:
1. Notes - users flag tweets as misleading or not and write a summary explaining why
2. Ratings - users upvote or downvote other birdwatchers' notes as helpful or not
More ex. found here: 3/
1
1
8
@_JenAllen
Jennifer Allen
2 years
Also impt - this research is just looking at v0 of bwatch. Twitter has been constantly iterating to improve the program, and we are grateful to the team for sharing data and being open to collaboration with academics. Check out their data here: 14/
1
2
8
@_JenAllen
Jennifer Allen
2 years
We also had questions. Past work we've done shows that crowds can do a good job flagging misinfo when platforms control what content ppl rate: But what happens when users choose which content to evaluate? Does partisan cheerleading take over? 5/
@DG_Rand
David G. Rand
4 years
🚨Working paper alert!🚨 "Scaling up fact-checking using the wisdom of crowds" We find that 10 laypeople rating just headlines match the performance of professional fact-checkers researching full articles - using a set of URLs flagged by an internal FB algorithm
12
85
198
1
1
8
@_JenAllen
Jennifer Allen
13 days
This raises difficult policy qs – if simply moderating false content is not enough, how should platforms balance freedom of expression and potential harm? There are no easy answers, but our paper presents a framework for quantifying harm to better understand the tradeoffs involved
2
0
8
@_JenAllen
Jennifer Allen
1 month
Thank you to everyone at @MITSloan (especially my advisor @DG_Rand ) for the support during my PhD journey. Can't wait to see what the future holds!
0
0
8
@_JenAllen
Jennifer Allen
2 years
Ratings data were even more striking: Ds rated 83% of notes by fellow Ds helpful, vs. 43% of notes by Rs. Rs rated 87% of notes by fellow Rs helpful, vs. 26% of notes by Ds. 11/
2
1
8
@_JenAllen
Jennifer Allen
13 days
Our randomized survey exps assessed 130 headlines' causal effect on vax intentions. Fact-checked misinfo *did* lower vax intentions by ~1.5pp, sig more than accurate content - BUT the best predictor of persuasion was whether the headline implied the vax was harmful – not falsity
1
2
9
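A rough sketch of one way to estimate a per-headline effect on vaccination intentions from a randomized survey experiment: the mean intention among respondents shown the headline minus the control-group mean. This is an illustration, not the authors' exact estimator; file and column names are hypothetical.

```python
# Per-headline treatment effect = mean vax intention of respondents randomly
# shown that headline minus the mean in the control condition (sketch only).
import pandas as pd

df = pd.read_csv("survey_responses.csv")                 # one row per respondent
control_mean = df.loc[df["headline_id"].isna(), "vax_intent"].mean()

ate = (
    df.dropna(subset=["headline_id"])
      .groupby("headline_id")["vax_intent"].mean()
    - control_mean
)
print(ate.sort_values().head())                          # most negatively persuasive headlines
```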
@_JenAllen
Jennifer Allen
2 years
To answer, we got tweets, notes, and ratings from the first 6 mo of @birdwatch . We examined predictive power of i) content features, related to the content of tweet or note vs ii) context features, related to user-level attributes of the bwatcher or tweeter (eg partisanship) 6/
1
1
7
@_JenAllen
Jennifer Allen
2 years
Responses to the announcement were, uh, mixed (see prototypical example below). Can Birdwatch separate the truth from the trolling? 4/
@ThatBeardoAdam
Adam.
3 years
@TwitterSupport @birdwatch This isn’t going to help. Who’s to say the people making notes are reliable and not misleading themselves? You’re inviting bias and allowing it to shape narrative.
12
16
208
1
1
7
@_JenAllen
Jennifer Allen
13 days
When we aggregate the impact across all URLs that induced hesitancy, we see that exposure wins out. Flagged misinfo was estimated to lower vax intentions by ~.05pp per FB user, compared to an effect of ~2.3pp for unflagged vax-skeptical content – a 46X diff
2
0
7
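The aggregation behind the 46X comparison can be sketched as follows: each URL's total effect is its views times its per-view persuasive effect, summed within the flagged and unflagged groups and divided by the user base. The placeholder file, the column names, and the 230M US Facebook user count are my assumptions, not figures from the paper.

```python
# Sketch of per-user impact by group: sum(views_j * effect_j) / number of users.
import pandas as pd

urls = pd.read_csv("urls.csv")          # hypothetical columns: views, effect_pp, flagged (0/1)
US_FB_USERS = 230e6                     # assumed size of the US Facebook user base

per_user = (
    (urls["views"] * urls["effect_pp"])          # each URL's total effect, in pp
      .groupby(urls["flagged"]).sum()
    / US_FB_USERS
)
print(per_user)                                              # per-user drop in vax intent
print("unflagged / flagged ratio:", per_user[0] / per_user[1])
```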
@_JenAllen
Jennifer Allen
2 years
Same was true for a model predicting note helpfulness. Here, using the context features alone produced a decent model w AUC ~ .75. But just political features gets to an AUC of ~.9! 8/
1
1
7
@_JenAllen
Jennifer Allen
13 days
This single URL from the Chicago Tribune was seen by 55 million ppl – 6X the total of flagged misinfo It was factually accurate but - given tenuous link b/t vax & death, & the millions of people who did not suffer serious side effects – misleadingly implied vax was dangerous
1
1
6
@_JenAllen
Jennifer Allen
2 years
Clearly, we needed to take a closer look at this relationship b/t partisanship & participation in bwatch 9/
1
1
6
@_JenAllen
Jennifer Allen
13 days
BUT, when we weight each headline’s persuasive effect by its number of views, we see a very different picture. The small fraction of unflagged stories that were almost as negatively persuasive as flagged misinfo were seen *vastly* more times than the flagged misinfo
1
0
7
@_JenAllen
Jennifer Allen
13 days
The harms could be large Assuming a .6 scaling factor between vax-intentions and uptake (as suggested by Athey et al 2023), a back-of-the-envelope calc suggests that had this vax-skeptical content not spread, there would have been >3 million add’l 💉- and many lives saved
1
1
6
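The back-of-the-envelope arithmetic can be reproduced with assumed inputs. The 0.6 intention-to-uptake scaling comes from the tweet above (Athey et al. 2023); the US Facebook user count is my assumption, not a figure from the paper.

```python
# Back-of-the-envelope: intention drop * intention-to-uptake scaling * user base.
intent_drop_pp = 2.3      # per-user drop in vax intentions (pp) from unflagged content
scaling = 0.6             # intentions -> actual uptake (Athey et al. 2023)
us_fb_users = 230e6       # assumed US FB user base

extra_vaccinations = (intent_drop_pp / 100) * scaling * us_fb_users
print(f"{extra_vaccinations / 1e6:.1f} million additional vaccinations")   # ~3.2 million
```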
@_JenAllen
Jennifer Allen
12 days
@danwilliamsphil @NeilLevy10 Thanks for reading and the kind words! I’m a big fan of your work - definitely influenced the paper and my thinking on the topic. Very interested to see the paper you mention when it’s posted
0
0
5
@_JenAllen
Jennifer Allen
6 months
@yangyunkang @MattHindman @Sander_vdLinden @RVAwonk @ThinkerAspiring @STWorg @jayvanbavel @GordPennycook Interesting! Who were the coders? Is there a repository of the data somewhere where we can see the images and labels? (Apologies if this is in the paper - I can’t access via mobile right now but I’m just curious)
0
0
4
@_JenAllen
Jennifer Allen
13 days
Fake news on social media has been correlationally linked to many societal challenges. Eg - Biden claimed FB was “killing people” by carrying vax misinfo on the platform. But what is the causal impact of misinfo, esp compared to other vax-related content?
1
0
8
@_JenAllen
Jennifer Allen
13 days
Work could not have been done without my co-authors @DG_Rand and @duncanjwatts , as well as the helpful feedback from our anonymous reviewers and many members of the research community (too many to name – but thank you!) #ScienceResearch
1
0
4
@_JenAllen
Jennifer Allen
13 days
We shed light on this q. Our logic: To broadly impact society, (mis)info must be 1) widely seen & 2) persuasive. Using this framework, we combined survey exps with viewership data on 13K vaccine-related URLs popular on FB in Q1 2021 to estimate impacts on US vax hesitancy
1
0
5
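One way to write the reach-times-persuasion framework as a formula (notation mine, not taken from the paper): the aggregate per-user impact of a set of URLs C combines how often each URL was seen with how persuasive it was per view.

```latex
\[
  \text{Impact}(C) \;=\; \frac{1}{N_{\text{users}}} \sum_{j \in C} \text{views}_j \,\Delta_j
\]
```

Here Δ_j is URL j's estimated per-view effect on vaccination intentions and N_users is the size of the exposed population; C could be, e.g., the set of flagged misinfo URLs or the set of unflagged vax-skeptical URLs.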
@_JenAllen
Jennifer Allen
3 years
@aecoppock The authors also write this nytimes article with some pretty nice data viz:
0
0
4
@_JenAllen
Jennifer Allen
3 years
@deaneckles @duncanjwatts Huh I keep seeing that headline but can't find the actual paper anywhere! The article suggests that they just used CrowdTangle to measure engagement on posts from public pages, which misinfo peddlers use more -- see:
1
0
4
@_JenAllen
Jennifer Allen
1 month
@diegoreinero_ @NYUStern @CSMaP_NYU @csspenn For sure that'd be great! I'll be on campus in September
0
0
3
@_JenAllen
Jennifer Allen
13 days
The resulting distribution of predicted treatment effects shows that flagged misinfo *was* sig more harmful than unflagged content when people saw it, just like in our experiments
1
0
3
@_JenAllen
Jennifer Allen
3 years
@deaneckles @duncanjwatts Anyway the headline seems implausible to me but hard to tell exactly what's up without the paper
1
0
3
@_JenAllen
Jennifer Allen
12 days
@VictorLamme @mboudry @ScienceMagazine @DG_Rand @duncanjwatts Thanks for reading! Understand the concern - we’re def not advocating for government censorship. There are other potential solutions like @CommunityNotes that might limit harm in a more democratic way. Much more follow up work to do!
0
0
2
@_JenAllen
Jennifer Allen
13 days
Some limitations:
- We only measure vax intentions, not actual uptake
- Our exps were run at a different time (mid-2022) than our exposure data (early 2021). We check robustness, but ideally these would have happened at the same time
- FB is diff from survey enviro, esp WRT attention
1
0
2
@_JenAllen
Jennifer Allen
2 years
Read the always brilliant @sukhigulati breaking down the importance of interoperability in maintaining the open internet!
@sukhigulati
Sukhi
2 years
Got the chance to write for @CenDemTech with @MalloryKnodel on the importance of interoperable design:
0
2
7
0
0
2
@_JenAllen
Jennifer Allen
4 years
@DG_Rand @andyguess @kmmunger @deaneckles @SocSciOne The magnitude of the difference is something we’re still nailing down with help from Facebook. Hopeful that we’ll have a more exact answer very soon! Agree that @deaneckles ‘s proposed solution would be super helpful for researchers working with the SS1 data
0
0
2
@_JenAllen
Jennifer Allen
3 years
@SolomonMg That being said, the first release is great! And a sign of good faith that fake news seems overrepresented.
0
0
1
@_JenAllen
Jennifer Allen
3 years
@deaneckles @duncanjwatts Ah well that explains why I can’t find it!
0
0
1
@_JenAllen
Jennifer Allen
3 years
@SolomonMg So reasonable to assume that the SS1 data might also have a blind spot when it comes to understanding paid influence campaigns or the viewership of political ads, for example.
1
0
1
@_JenAllen
Jennifer Allen
3 years
@SolomonMg The 100 share threshold makes sense for privacy reasons, but has other non-obvious side effects, e.g. retail clicks / views (e.g. Amazon links) from ads that are not shared publicly but that appear in people’s feeds seem underrepresented in SS1.
1
0
1
@_JenAllen
Jennifer Allen
13 days
Using a novel methodology combining crowdsourcing and NLP, we generalized our exp results to predict causal effects on vax intent for all 13K highly viewed headlines from FB. Crowds effectively identified the most persuasive headlines, and NLP scaled the crowdsourcing results.
1
0
1
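As an illustration of the crowdsourcing + NLP step described above, here is one plausible way to extrapolate the 130 experimentally tested headlines to the full set of 13K: regress the experimentally estimated effects on crowd harm ratings plus simple text features, then predict for the remaining headlines. This is a sketch under my own assumptions (file and column names are hypothetical), not the authors' actual pipeline.

```python
# Illustrative sketch: predict per-headline effects from crowd ratings + text features.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline

exp = pd.read_csv("experiment_headlines.csv")    # 130 headlines with estimated effects
all_urls = pd.read_csv("all_headlines.csv")      # 13K headlines with crowd ratings

features = ColumnTransformer([
    ("text", TfidfVectorizer(min_df=2), "headline_text"),       # simple NLP features
    ("crowd", "passthrough", ["crowd_harm_rating"]),             # crowdsourced rating
])
model = make_pipeline(features, RidgeCV())
model.fit(exp[["headline_text", "crowd_harm_rating"]], exp["effect_pp"])

all_urls["predicted_effect_pp"] = model.predict(
    all_urls[["headline_text", "crowd_harm_rating"]]
)
```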