Stanford Internet Observatory

@stanfordio

Followers: 14K · Following: 123 · Media: 52 · Statuses: 529

The Stanford Internet Observatory is a cross-disciplinary program studying the abuse of the Internet and providing thoughtful policy and technical solutions.

Stanford, CA
Joined February 2020
@stanfordio
Stanford Internet Observatory
2 years
RT @issielapowsky: NEW: Researchers at @thorn and @stanfordio are out with a new paper on the rise in AI-generated child sexual abuse mater….
0
31
0
@stanfordio
Stanford Internet Observatory
2 years
RT @StanfordCyber: 🎟️Tickets are on sale for @stanfordio Trust and Safety Research Conference, to be held September 28-29, 2023. Lock in ea….
0
2
0
@stanfordio
Stanford Internet Observatory
2 years
RT @StanfordCyber: .@stanfordio's recent investigation of platforms identified large networks of accounts, purportedly operated by minors,….
0
9
0
@stanfordio
Stanford Internet Observatory
2 years
This work requires sustained, meaningful investments in Trust & Safety teams given the significant harms. We'll continue to work with platforms to improve detection & prevention of risks to child safety, both directly and in concert w/ @missingkids & @tech_coalition. 7/7.
1
1
15
@stanfordio
Stanford Internet Observatory
2 years
This work has started to have real-world impact. NCMEC and law enforcement are investigating probable content buyer accounts. @meta has fixed problems with user reporting and triage and has taken steps to limit discoverability. @twitter has improved its CSAM detection. 6/7.
2
6
17
@stanfordio
Stanford Internet Observatory
2 years
While SG-CSAM differs from common conceptions of CSAM, it is our position that no child can meaningfully consent to the implications of selling explicit content: stalking, extortion, uncontrolled distribution, content being traded for other CSAM, or worse. 5/7.
1
2
13
@stanfordio
Stanford Internet Observatory
2 years
Most work on child exploitation has focused on protecting minors from predators online. Industry Trust and Safety teams and tools are not primarily geared to detect this activity or to prevent children from creating and selling their own CSAM for financial gain. 4/7.
1
2
14
@stanfordio
Stanford Internet Observatory
2 years
We found the scope of commercial SG-CSAM is extensive. Sellers are often familiar with audience growth and ban evasion techniques. Instagram is by far the most popular platform, but this is a widespread issue. 3/7.
3
10
27
@stanfordio
Stanford Internet Observatory
2 years
We carefully gathered public data to study this network. Sensitive media content was processed automatically, reported to @missingkids if tools detected it as CSAM, and discarded. 2/7.
1
3
15
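The "processed automatically, reported, and discarded" flow described in the tweet above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: SIO's actual tooling is not public, and KNOWN_HASHES, report_to_ncmec, and the use of SHA-256 (rather than a perceptual hash such as PhotoDNA or PDQ, which would also match re-encoded copies) are hypothetical stand-ins.

```python
import hashlib

# Hypothetical set of hashes of known CSAM, obtained from a trusted clearinghouse.
KNOWN_HASHES: set[str] = set()

def report_to_ncmec(media_url: str, digest: str) -> None:
    """Hypothetical stand-in for filing an automated CyberTipline report."""
    print(f"reported {media_url} (digest {digest[:12]}...)")

def scan_media(media_url: str, media_bytes: bytes) -> None:
    """Scan one media object entirely in memory, then discard it."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in KNOWN_HASHES:
        report_to_ncmec(media_url, digest)
    # Match or not, the bytes are discarded: nothing is written to disk,
    # and no researcher ever views the content.
    del media_bytes
```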
@stanfordio
Stanford Internet Observatory
2 years
New SIO report out today on self-generated child sexual abuse material (SG-CSAM). A tip from @WSJ led to an investigation revealing a large network of what appeared to be underage users producing, marketing and selling explicit content. 1/7.
7
41
60
@stanfordio
Stanford Internet Observatory
2 years
Twitter is by no means the only platform dealing with CSAM, nor is it the primary focus of our upcoming report. Regardless, we're glad to have contributed to improving child safety on Twitter, and thank them for their help in remediating this issue. 6/6.
1
16
85
@stanfordio
Stanford Internet Observatory
2 years
Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20. 5/6.
2
14
71
@stanfordio
Stanford Internet Observatory
2 years
Our tooling automatically reports any instance of known CSAM to NCMEC without our team viewing it. The investigation discovered problems with Twitter's CSAM detection mechanisms and we reported this issue to NCMEC in April, but the problem continued. 4/6.
1
10
57
@stanfordio
Stanford Internet Observatory
2 years
For the part of our investigation that involved Twitter, we gathered tweet metadata via Twitter's API. As a precaution, we used an ingest pipeline that did not store media, but sent media URLs to PhotoDNA, Microsoft's service for detecting known CSAM. 3/6.
1
9
58
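A minimal sketch of the ingest precaution described in the tweet above: tweet metadata is kept, but media bytes are never downloaded or stored, and only media URLs are forwarded to an external matching service. The endpoint, header, request body, and response shape below are assumptions for illustration, not the actual PhotoDNA Cloud Service contract or SIO's pipeline.

```python
import requests

MATCH_ENDPOINT = "https://example.invalid/photodna/match"  # placeholder URL
API_KEY = "..."  # credential issued to vetted child-safety partners

def queue_ncmec_report(tweet_id: str, media_url: str) -> None:
    """Hypothetical stand-in: enqueue an automated report to NCMEC."""
    print(f"queued report for tweet {tweet_id}")

def ingest_tweet(tweet: dict) -> dict:
    """Keep metadata only; hand media URLs off without touching the bytes."""
    record = {
        "id": tweet["id"],
        "author_id": tweet["author_id"],
        "created_at": tweet["created_at"],
        # Deliberately no media fields: the pipeline never stores media.
    }
    for media in tweet.get("media", []):
        # Send the URL, not the image, so the media never enters our systems.
        resp = requests.post(
            MATCH_ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            json={"DataRepresentation": "URL", "Value": media["url"]},
            timeout=30,
        )
        if resp.json().get("IsMatch"):
            queue_ncmec_report(tweet["id"], media["url"])
    return record
```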
@stanfordio
Stanford Internet Observatory
2 years
This discovery, that Twitter’s systems for stopping the posting of known child sexual abuse material (CSAM) had failed, occurred in the context of a larger project that we will release later this week alongside the Wall Street Journal. 2/6.
1
18
71
@stanfordio
Stanford Internet Observatory
2 years
In the course of conducting a large investigation into online child exploitation, the Stanford Internet Observatory discovered serious failings with the child protection systems at Twitter. 1/6.
@AlexaCorse
Alexa Corse
2 years
New: Twitter failed to prevent dozens of known images of child sexual abuse from being posted on its platform in recent months, according to researchers, who said Twitter has since appeared to resolve the issue. (via @WSJ)
20
153
323
@stanfordio
Stanford Internet Observatory
2 years
New work from our postdoc @RERobertson out in @Nature today on how people interact with news on Google Search 🧵👇.
@RERobertson
Ronald E Robertson
2 years
New paper out today in Nature on how people interact with partisan and unreliable news on Google Search. With @_Jon_Green, @damianjruck, @Ognyanova, @bowlinearl, and @davidlazer.
2
6
7
@stanfordio
Stanford Internet Observatory
2 years
RT @goodformedia: How can we re-design social media to better support #youthmentalhealth? Youth shared their ideas directly with industry r….
0
3
0
@stanfordio
Stanford Internet Observatory
2 years
RT @StanfordCyber: AI is not very good at interpreting nuance and context, says @noUpside of @stanfordio. "It’s not possible for it to enti….
0
7
0
@stanfordio
Stanford Internet Observatory
2 years
RT @journalsafetech: Our Spring 2023 issue is live! Check it out:
0
3
0