My latest for
@Nature
: AI's environmental costs are soaring. The new energy-hungry models for video, text, and image could create an energy crisis - and impact drinking water reserves. We urgently need action from industry, researchers, and legislators.
• This AI does not "read children's emotions"
• There are not seven universal emotions
• Facial muscle data does not correlate reliably with a person's inner state
Covid is being used as a pretext to install this in remote classes, then keep it.
ChatGPT strikes again. A journalist contacted me to research her profile on
@lexfridman
. ChatGPT informed her that
@_KarenHao
and I were his top critics. It cited articles we'd written about him, gave links, and summaries. Only problem: it's all false. Here's what she sent me:
Umm, anyone a little concerned that Bard is saying its training dataset includes... Gmail?
I'm assuming that's flat out wrong, otherwise Google is crossing some serious legal boundaries.
Stunning: new rigorous evidence shows that wellness programs have NO significant impact. One thing they do achieve: massively increasing the surveillance of workers, as we show here.
@iajunwa
@Lawgeek
We saw this coming, and here it is. Endless trapdoors ahead: data inaccuracies, intentional gaming, constant intimate surveillance 24/7, data breaches that will be infinitely worse, &c...
John Hancock, one of the oldest and largest North American life insurers, will stop underwriting traditional life insurance and instead sell only interactive policies that track fitness and health data through wearable devices
Time for a collective eyeroll from all of us who actually research this issue.
@AOC
was talking about facial recognition systems. They're proven to have bias issues. But this is how it gets reported.
Thrilled to launch a big project today: ANATOMY OF AN AI SYSTEM. It's a large map & long-form essay about Amazon's Echo, and the full stack of capital, labor, and natural resources used in AI. It's a collab with
@TheCreaturesLab
, who is a visual genius ✨
Let's talk about the 10%. What if the AI surgeon was trained on data that oversampled white men (as per many controlled trials)? And it consistently produces worse outcomes for Black people and women? Seems like it matters who "you" are in this hypothetical 🤔
Suppose you have cancer and you have to choose between a black box AI surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the AI surgeon to be illegal?
So Facebook is deleting one billion facial recognition scans, but it's keeping DeepFace, the model that is trained on all those faces. Note that "the company has also not ruled out incorporating facial recognition into future products." Very meta. 👀
Breaking News: Facebook plans to delete face scan data from over 1 billion users, shutting down a facial recognition system that became a privacy headache.
This is a VERY bad idea. Why should we be forced to submit to facial recognition just to buy groceries? So brands can test if their ad campaigns work? Umm, no thanks. This invades your physical privacy and profiles you, for bad data.
This tweet was 5 years in the making:
#AtlasofAI
is out TODAY!
It’s a book on the politics and planetary costs of artificial intelligence as an extractive industry, consuming natural resources, labor, and vast quantities of data. 👉
@yalepress
(1/6)
2020 has been a strange and hard year, and like many of us, I'm caring for family who are seriously ill. After 5 years of building & co-leading AI Now, I’m stepping down to make room for my next chapter. I’m so proud of what we accomplished together and hopeful for the future. ♥️
Guess what? Study shows that self-driving cars are better at detecting pedestrians with lighter skin tones.
Translation: Pedestrian deaths by self-driving cars are already here - but they're not evenly distributed.
My NIPS talk 'The Trouble with Bias' is now up on the 'tubes. About the politics of classification, and limits of focusing on allocative harms not representational harms. The desire for a quick 'technical fix' to bias could actually do more damage.
Wow - the human/AI interface is about to get weirder.
US Copyright Office just ruled that works "autonomously created" by algorithms cannot be copyrighted. This is the work they rejected:
These are *actual* notes from Peter Thiel's class at Stanford on how to run startups, featuring VPs from Palantir and PayPal. And it gets worse from here.
It happened: Facebook just went off the deep end in Australia. They are blocking *all* news content to Australians, and *no* Australian media can post news.
This is what showdowns between states and platforms look like. It's deplatforming at scale.
I have a piece in
@nature
today on the urgent need to regulate emotion recognition tech. During the pandemic, this tech has been pushed further into schools and workplaces. We should reject the phrenological impulse, where unverified systems are used to interpret inner states.
OMG - my book has arrived!
These galley copies of Atlas of AI had to navigate oceans and international logistics, but here they are. It’s quite a thrill to see it in physical form at long last. 📚 🌎
Our new paper is out today: “DIRTY DATA, BAD PREDICTIONS.” We show serious problems with predictive policing being used where there is evidence of illegal and discriminatory police practices. By Rashida Richardson,
@Lawgeek
& me:
I'm so excited that my new book is out in April:
ATLAS OF AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. 📕 🔜
You can preorder from any local bookshop, eg:
Featuring a beautiful cover & illustrations by
@TheCreaturesLab
👇
Today we officially launch the
@AINowInstitute
at NYU! Grateful to be part of an incredible community of scholars in multiple disciplines working on the social and political implications of AI, ML and algorithmic accountability.
We can finally share a little secret: Anatomy of an AI System has just been acquired by MoMA. Wild!
Huge thanks to the whole curatorial team at
@MuseumModernArt
for their support of our project, and to
@TheCreaturesLab
for this joyous collab ✨♥️🗺️
I just published this piece in
@NatureNews
today. Facial recognition is being rolled out in cities worldwide with few safeguards. It's used by ICE, CBP, and across public space. Bad when it fails, bad when it works. It's time for a moratorium.
What if 'tech ethics' is a smokescreen unless it contends with the bigger issues of concentrated power, governance models, due process, and public accountability? 🤔
Meanwhile, Amazon's latest patent is for Alexa to detect when people are sick, bored or unhappy. "Alexa would listen out for if users are crying and then class them as experiencing an 'emotional abnormality.'" 🙃
📣 The AI Now 2018 Report is now live! 📣
Our biggest ever report tackles the issues of AI and accountability, after a hell of a year for the tech sector. Read the report, see our 10 recommendations
Wait WHAT? The US says "an algorithm" identified Eyal Weizman as a security threat and he can't enter the country. Yes, the same guy who founded Forensic Architecture at Goldsmiths, a respected researcher who visits frequently. Not good.
This story exposes the myth of control. Spotify & Netflix could read *private* FB messages. Microsoft, Amazon & Yahoo all had access far beyond what ANY user agreed to. The total lack of respect for user wishes is the infinitely repeating scandal of 2018
@GoogleWorkspace
Thanks for this. So can I confirm that this was a Bard hallucination, and that there's no Gmail data included whatsoever in the training process?
Truly horrifying story.
Right now, Uyghurs are being used as lab subjects for emotion AI, under coercion, strapped into restraint chairs, and then tagged as "nervous" or "anxious" which is taken as a marker of guilt. via
@hare_brain
Hidden amongst the zingers in the leaked Google memo there's this: data quality is what is making the difference now. This is an important shift away from internet-scale garbage dump datasets that have been running the show for years.
Finally, a paper on why "AI for good" is an empty phrase without a theory of change. What is "good" is never articulated in the rush to tech solutions, while alternative reforms are overlooked. Read
@benzevgreen
's piece before it blows up at
@NeurIPSConf
Want to see how an AI trained on ImageNet will classify you? Try ImageNet Roulette, based on ImageNet's Person classes. It's part of the 'Training Humans' exhibition by
@trevorpaglen
& me - on the history & politics of training sets. Full project out soon
In the 12hrs since Bard told me it was trained on Gmail data:
-Google replies (says it's not)
-Elon Musk replies (lol)
-Google adds a 'community note' that this is a Bard error and it's not trained on Gmail
-Some ace memes
What should happen next: Real talk about training data 🧵
Just wow. So this happened. Thank you
@DesignMuseum
and all the jury for giving us the Design of the Year award.
@TheCreaturesLab
and I are kinda blown away 😮 💥
[Opens WSJ]
Today, a wealthy, white, ex-hedge fund guy said facial recognition is fine. He quotes cops, cites Amazon's PR that it has no racial biases, and...
[rubs eyes]
...opens with noted pioneer of eugenics and 'race betterment', Francis Galton.
And here it is - the leaked client list of Clearview. Remember how they said it was “strictly law enforcement only”? Not so much. Clients include Walmart and Macy’s as well as ICE and CBP.
Clearview AI has been used by more than 2,200 entities. From governments to the private sector. From ICE to Interpol. From Australia to the UAE.
Collectively, there have been nearly 500,000 searches.
New paper shows racial bias in automatic hate speech detection. So AI models trained to flag & remove offensive tweets risk suppressing black voices.
"African Americans are up to two times more likely to be labelled as offensive compared to others"
On the many problems with emotion "detection" in AI:
"A growing crowd of researchers argues that the variation is so extensive that it stretches the gold-standard idea to the breaking point. Their views are backed up by a vast literature review."
LITIGATING ALGORITHMS is out today: our report on the current court cases about algorithmic decision making. We convened the lawyers leading the cases on criminal risk assessment, Medicaid & teacher evals. With
@EFF
and
@RaceNYU
, this is what we found out:
In the last two days, Google and
@FinancialTimes
editorial board have called for a temporary moratorium on facial recognition - following in the footsteps of many researchers and civil society orgs.
US job applicants are getting a single number that determines if they get a job. That number comes from a bollocks AI system that claims to rate their social media activity. And people think 'social credit scores' are just in China 🤔
Big news: Ohio's attorney general is suspending access to facial recognition databases for police officers following the news that federal agencies like ICE and the FBI are mining state databases without people’s consent
Today
@trevorpaglen
& I open MAKING FACES: an installation and event about the 150 year history of facial recognition. It's an intervention staged in Paris that focuses on politics of the face as a terrain of identification, power and control.
OMG - IT ARRIVED 💥💥💥
Just 5 days before it goes on sale in the US, physical copies of my book are here! Exciting to see it and
@TheCreaturesLab
’s beautiful illustrations come to life.
#atlasofAI
Excited to launch this new investigation w
@trevorpaglen
- to be read alongside ImageNet Roulette. It's about how training data works, the costs of classification, and why "removing bias" or "increasing diversity of data" isn't enough 👁️
The UN has made a deal with Palantir to give them highly sensitive data about aid recipients in the World Food Program - part of their “very aggressive digital transformation journey.” World reacts in horror. 😱
🚨 NEW RESEARCH ALERT: The diversity crisis in AI has hit a moment of reckoning: the call is coming from inside the house. Study led by
@sarahbmyers
shows lack of workforce diversity and bias in AI systems are connected. Read here:
#AIdiversitycrisis
So I gave a talk at the
@royalsociety
last week. Seemed like the right time to discuss the political landscape we're in, and why the bias debate in AI is too narrow. Here's the video: "Just an Engineer: On the Politics of AI"
Ed tech company experiments on 9000 kids without anyone's consent or knowledge to see if they test differently when 'social-psychological' messaging is secretly inserted? HARD NO.
There is a real problem here. Scientists and researchers like me have no way to know what Bard, GPT4, or Sydney are trained on. Companies refuse to say. This matters, because training data is part of the core foundation on which models are built. Science relies on transparency.
Now that the dust has settled on Google's AI principles, it's time to ask about governance. How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?
SCOOP: With secret access to NYPD CCTV
@IBM
created software which tags people based on their skin tone + hair/clothing color. IBM gave NYPD access, then pitched them on a new AI product which identifies people on camera as "Black," "White," and "Asian":
"This is the story of how affect recognition came to be part of the AI industry, and the problems that presents."
An excerpt from
#AtlasofAI
is out today in
@TheAtlantic
drawn from my chapter on emotion AI. Thanks to
@AdrienneLaF
for editing 🙏📚✍
Whoa, ImageNet Roulette went... nuts. The servers are barely standing. Good to see this simple interface generate an international critical discussion about the race & gender politics of classification in AI, and how training data can harm. More here:
Today we're sharing our Critical Dataset Studies reading list. The Knowing Machines team uses it to reflect on the growing literature on ML datasets across disciplines. Datasets powerfully construct model worldviews, so they're important to study. (1/4)
Today is a fun anniversary in my music life. 20 years ago we released "2020" - an electronic album exploring ideas of gender, machines, and strange futures. ABC just published this retrospective on B(if)tek, 2020, and what we do now 👩🎤🎹
So many important concerns have been raised about the Google AI Board, particularly the inclusion of the Heritage Foundation and a drone manufacturer. When we wrote 'Why Ethics is Not Enough' last year, this is what we meant
The current rush to "bossware" to surveil and rank workers during Covid harkens back to Ford's factories, Bentham's inspection houses, and Taylor's micromanagement of bodies. Now it's sprinkled with "AI and data science" to be more invasive, granular, and controlling.
🎧NEW PODCAST! 🎧 Want to understand AI with deeper context? We've launched a pod series on how AI is trained to interpret the world. Each ep has a research theme: from the social, legal, cultural, environmental & political impacts of generative AI. 👇📻
It's 2017, and researchers are still using Playboy's Lena centerfold as a test image. Given the gender issues in this field, maybe it's time to move on, guys? 🤔
Big news: LAPD will end the use of the broken predictive policing system known as PredPol, citing budget concerns under COVID-19. This is thanks in large part to community groups like
@stoplapdspying
pushing back against its use.
The
@EU_Commission
's final proposal for an Artificial Intelligence Act is here. Some examples:
AI systems are prohibited if they violate human rights, do general social scoring for authorities, or use live remote biometrics in public for policing. 👀
Wow.
@TheCreaturesLab
& I just got news that our Anatomy of an AI System has been acquired for the permanent collection of the V&A museum. The map, the newspaper, and the code are now setting up a new home. Big thanks to
@V_and_A
curatorial team ✨🗺️💜
Deep support to all the women at Microsoft who were brave enough to come forward. Cultures of harassment, exclusion and unfair compensation are unacceptable, and it's time to make change across the whole sector. 🙌
Today women from Microsoft asked CEO Satya Nadella for answers after a massive email thread where female employees shared their experiences with discrimination and harassment over the past couple weeks
Whoa. Adobe is offering *full* legal indemnification for copyright lawsuits over generated images that enterprise users produce in Firefly. Their model is trained on licensed & out of copyright images, which others don't do - so it's a big throw-down.
Honored to be on
@TIME
's list with many friends and colleagues. Glad to see it has included the work of those addressing AI's social, political & ecological effects.
#TIME100AI
I just helped a family member evacuate. We’re on a “watch and act” notice. The skies across a vast section of Australia are dark yellow and thick with smoke.
There’s never been a more urgent time to hold politicians to account. The new decade demands real action.