Artificial Intelligence has enormous potential to tackle some of our toughest challenges.
But we must address its risks.
That's why last year, we proposed an AI Bill of Rights to ensure that important protections for the American people are built into AI systems from the start.
This is a crucial point. It's a design choice. Not an accident. And I want to thank @alondra for first articulating this so clearly that it stuck in my brain.
I'm very proud to see the release of the AI Bill of Rights (BoR) today. It all started with a vision articulated by @AlondraNelson46, and it is the product of so much hard work and collaboration among so many within @WHOSTP and within the government. 1/n
People afraid of AI taking over the world should be required to install a package: preferably something in Julia or TensorFlow on a nonstandard architecture.
I was talking to a Ph.D. student recently and they asked me (or at least I understood them to be asking) whether, in light of the Google fiascos and what we're seeing right now with big tech and AI ethics in general, there's any point in doing the work that I do. 1/n
At a boarding gate: Delta is asking passengers to show their face to a device that they aren't calling a scanner. I ask "I thought face scanning was optional?" They say "it isn't a scanner". I say "but it's scanning my face?" They say "well you can call it that if you insist"
Of all the AI conversations I’ve been having recently, this was one of the most thought-provoking. For @ftweekend, I had lunch with Ted Chiang, whose stories I encourage you all to read to expand your idea of what is possible.
My new law is what I'll call "asymptotic Marxism": any discussion about AI and bias will, with probability one, eventually end in advocating the end of capitalism.
Everyone talks about how hard it is to do interdisciplinary work - but no one talks about the HARDEST part: having to write papers in ..... Word.
I feel so unclean...
People getting their @FAccTConference reviews right now: you might want to consider that, among other things, @timnitGebru is a founder of the conference. When you publish at @FAccTConference you are benefiting from her work.
Please welcome Suresh Venkatasubramanian (@geomblog), who joins @BrownCSDept and @Brown_DSI as professor this fall! Read a full interview with Suresh at Brown CS News:
"We shouldn't regulate #AI until we see some meaningful harm that is actually happening [...] There has to be at least a little bit of harm, so that we see what is the real problem" -- Microsoft Chief Economist to #WEF #GrowthSummit23
The pandemic has been great for job retraining:
Mar-May: epidemiologist
May-July: critical race theorist
July-Sep: child education expert
Oct-Dec: election law expert
Jan: constitutional law expert
Feb: power grid/climate change expert.
I just designed this new tool to punch people in the face. I'm worried that if we release this to the public people will get punched in the face, so we are looking to do a careful and thoughtful release of the punch-people-in-face tool. #techpolicytoday
What's going on here is very troubling. I have no doubt that Google will find some plausible-sounding pretext. But the fact remains that they are quashing dissent on a topic where dissent is the first-order principle, and that shows a core unwillingness to think about ethics.
Since apparently we are now back to the "algorithms aren't biased, society is" set of arguments, I thought I'd re-up an old post from 2016 that lays out the rules of this particular dance:
For all those grad students driven to a homicidal fury by throwaway reviewer comments trashing their work, who think that one day they'll graduate, get an academic job, get tenured, and promoted, and will achieve a zen like bliss, .....
I HAVE NEWS FOR YOU.
Race is not a scientifically valid concept. It’s a cultural conception that’s made its way into science. - This seems like a must-read for data scientists.
And so it ends. As of fifteen minutes ago, my time at @WHOSTP, working with the amazing Science and Society team, and under the august leadership of @AlondraNelson46, is done. It has been an intense and life-changing journey and I'm honored and grateful for the opportunity. 1/6
People of @FAccTConference, @AIESConf, @ACMEAAMO and other academic venues researching responsible tech: Hope you're all tracking the White House announcement today on safe, secure, and trustworthy AI! Your research has helped get to this point.
This probably dates me hopelessly. While I appreciate the excitement and innovation over virtual conferences, I really find it difficult to concentrate at virtual events. There's something about a change of scenery that helps me get into "conference mode".
Big announcement from the White House today on what companies need to do to ensure responsible AI development.
Let's unpack the details. And there will be grades! (this is, after all, the end of the semester). 🧵
Today is a very important day for AI governance in the US. The @OMBPress released their instructions for all Federal agencies on how to protect our rights when considering the use of AI, following up on the Oct 2023 EO on safe, secure, and trustworthy AI. 1/n
CS people, we need to tattoo this on our screens. Ever since I started looking at algorithmic fairness, all I've been doing is screaming about the epistemic sloppiness of ML. And read the entire thread.
I can't believe I'm having to teach my kids how to use a fountain pen and having to explain that this was the default mode for all writing when I was in school. Moreover, that we waited for the day when we were allowed to use fountain pens.
So if I'm moving into a house where the previous occupants were lax about covid protocols and got repeatedly infected, do I need to fumigate it and how long will that take? My move in date is Jan 20 and the house has 132 rooms and 35 bathrooms.
Asking for a friend.
COVID and India: a thread.
Last April, when numbers first started spiking across the world, my close college friends and I started doing a weekly zoom call to check in and connect up. Most of our initial calls were the usual armchair epidemiology and R_0 rantings 1/n
And we should continue to expect a lot more expert comms FUD. But to get to this point is something. And I think one of the shining points of @_KarenHao's article is how she cleanly exposes the rhetorical games that are being played without overly simplifying. 10/n=10
In CS, we publish and then give talks based on published work. In many other disciplines, the talk is the first attempt to flesh out ideas that eventually end up in a paper. I’m finding myself doing the second more and more.
How do people manage to keep track of ML papers? This is not a request for support in my current state of bewilderment - I'm genuinely asking what strategies seem to work to read (or "read") what appear to be 100s of papers per day.
Roko's* five stages of AI grief:
1. Denial: "there is no bias in AI systems. Math is not racist."
I feel like we have largely moved on from this stage, thanks to all the reporting we've had over the years.
* the basilisk will always be with you
DeepSense, a company based in San Francisco and New Delhi, uses artificial intelligence to assess job candidates’ personalities based on their social media accounts. @jasonbellini finds out how it all works. #WSJWhatsNow
@henrikstroem @lizjosullivan @timnitGebru Wait. Seriously. Are you trying to troll here? "History is irrelevant for machine learning" -- since when? Do you know the history of statistics and its connection to eugenics? And "math is an exact science" -- yes, if you're not applying math MODELING to the real world.
My son, concerned: "My essay needs to be 1500 characters and it's 1900 characters right now!"
@picturewing and I (both academics): "Stand back, young Padawan".
I'm on the board for @datasociety - one of THE BEST organizations thinking through the implications of our collisions between tech and society. And I wanted to give a shout out to the Algorithmic Impact Methods Lab that they've just started (). 1/n
I'm really enjoying reading this paper. Anthropomorphism is an EXPLICIT DESIGN CHOICE, not something that just happens by accident. As the example illustrates and as the paper discusses, it's entirely possible to design systems that avoid anthropomorphizing.
Absolutely loving this zinger from @ZeerakTalat and friends about the dangers of anthropomorphizing AI systems -- it does an amazing job at explaining all of the risks that come with it! A must read 🤗
A simple question then. Why didn't they? Surely in 2020 one can't get away with saying "oops, bad data". Not to mention all the other ways that @timnitGebru outlines that bias can creep in. I really don't think it's credible to say at this point that "it's just bad data".
ML systems are biased when data is biased.
This face upsampling system makes everyone look white because the network was pretrained on FlickrFaceHQ, which mainly contains white people pics.
Train the *exact* same system on a dataset from Senegal, and everyone will look African.
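The point in the quoted tweet, that a model's outputs mirror whatever distribution it was trained on, can be illustrated with a toy sketch. This is not the actual upsampling system: the memorizing "model" and the 1-D stand-in "features" below are made up purely for illustration.

```python
import random

def train(dataset):
    # The toy "model" just memorizes its training set.
    return list(dataset)

def reconstruct(model, query):
    # "Upsample" by returning the training example closest to the query.
    return min(model, key=lambda x: abs(x - query))

random.seed(0)
# Two hypothetical training sets with different feature distributions.
dataset_a = [random.gauss(0.0, 0.5) for _ in range(1000)]  # clustered near 0
dataset_b = [random.gauss(5.0, 0.5) for _ in range(1000)]  # clustered near 5

query = 2.5  # an ambiguous input, halfway between the two clusters
out_a = reconstruct(train(dataset_a), query)  # pulled toward cluster A
out_b = reconstruct(train(dataset_b), query)  # pulled toward cluster B
print(out_a, out_b)
```

The same ambiguous input yields systematically different reconstructions depending only on which dataset the model saw, which is the mechanism behind "train it on a dataset from Senegal and everyone will look African".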
The big shift that I am beginning to see now with Karen's article is to "Hey we have amazing researchers doing stuff on AI Bias. What? they're telling us we are bad. FIRE THEM" or "Sorry, anything actually relevant is outside your purview" 8/n
For all the people who claim that the H1B gets misused to exclude domestic workers, it might be helpful to know that at most US universities, close to 80% of graduate admissions in CS are foreign students. 1/6
I gave a brief answer at the time, to the effect of "the fact that we're seeing pushback means that our efforts are working", which felt a little unsatisfactory to me. But with @_KarenHao's brilliant new article on Facebook I feel like there's a more concrete shift. 2/n
AI policy discourse is a weird mix of mealy-mouthed blandness (coming for you, NAIAC) and apocalyptic terror. Given that, the @FTC's straight up no-BS DGAF blog posts are a refreshing breeze. Latest exhibit: the "Luring Test" - brilliant!
One component of transparency in ML oversight is: "what data was the model trained on?" It seems like this would be impossible to answer for LLMs (and it might very well be), but the fascinating thread below shows why the answer to this question is important: 1/n
Many viral ChatGPT examples involve testing on the training set. In the quoted tweet it writes a decent review, but it's been trained on ~5,000 papers that cite/discuss this paper. I asked it for a review *without* giving it the text and it hit most of the points in the original.
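The "testing on the training set" failure mode described above can be made concrete with a toy sketch: a model that simply memorizes its training pairs looks flawless on inputs it has already seen and useless on anything new. Everything here (the pair data, the function names) is hypothetical, for illustration only.

```python
def fit(pairs):
    # Pure memorization: the "model" is just a lookup table.
    return dict(pairs)

def predict(model, x, default="?"):
    # Perfect recall on seen inputs; the fallback default on unseen ones.
    return model.get(x, default)

train_pairs = [("paper A", "review A"), ("paper B", "review B")]
model = fit(train_pairs)

print(predict(model, "paper A"))  # seen during training: looks impressive
print(predict(model, "paper C"))  # never seen: the illusion collapses
```

Evaluating only on "paper A" would suggest a brilliant reviewer; the held-out query exposes that nothing was learned, which is why asking the model to review a paper it was heavily trained on proves little.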
It's alarming that NeurIPS papers are being rejected based on "ethics reviews". How do we guard against ideological biases in such reviews? Since when are scientific conferences in the business of policing the perceived ethics of technical papers?
"back in the day", when I first was talking to journalists about AI bias, I remember people saying, "well yeah, but this is all hypothetical. give me a real example where something happened and we'll talk". Similarly, most tech companies were like "AI Bias? Who dat?" 3/n
The ProPublica article on COMPAS changed the public discussion in ways it's hard to explain. But at the very least, it shifted the discourse to "AI Bias? Yeah society sucks, but it's not our problem, we're just tech people". Not a big shift, but wait....4/n.
Instead of conducting multiple Slack and Twitter conversations, I decided to write a post on the new stochastic parrots paper by @emilymbender, @timnitGebru, @mcmillan_majora and an Aether wielding script writing elemental entity.
Churchill is finally getting a much-deserved reckoning, and this article explains why people in India viewed him in the same category as Hitler. 2-3 million people died in the Bengal famine, which he made much worse.
This article is a short but brilliant articulation of so many of the issues to consider with algorithmic governance. And without a single LLM in sight! Let's go through it.
Italy has a problem with teachers. An #algorithm was supposed to save time by allocating teachers on short-term contracts to schools automatically. Failures in the code and design severely disrupted teachers’ lives, reports our fellow @PierluigiBizz. ⤵️
New article by @carolineha_ on the methodological problems with predpol and predictive policing, with comments from @KLdivergence and me. This is the "algorithms are also bad" complement to the new @AINowInstitute report on "data is bad" for predictive policing.
This is insane. It places foreign students, who are essentially helpless, in the middle of a huge argument over online vs in-person teaching. No university will be able to go fully online (which is arguably the medically responsible thing to do) if it shafts their students like this.
ICE is telling international students on F-1 and M-1 visas that if their school is doing online-only courses they must leave the country or transfer to a place with in-person instruction—or they'll be deemed in the US illegally and subject to deportation.
Happy to note that my official (and verified!) OSTP account is now active at @SureshVenkat46. Please follow that for OSTP related information and representations. You can expect my usual mix of snark, horror and AI+society ruminations to continue here.
"My code should do what I program it to do" and "I tried to break the code and it broke" is how we do security. Not incompatible at all. The difference is that we don't deploy untested code and then act surprised when it breaks...
Ht @Abebab and @mmitchell_ai
“AI needs to do whatever i ask” and “i asked the AI to be sexist and it was, look how awful!” are incompatible positions.
somewhat surprised by the number of people who hold both.
Upshot: I asked whether I need to use the not-scanner. They said yes, but you're going to need to have your passport and boarding card ready and open.
I'm clearly not getting the good snacks on this flight.
I can't believe we are 7 months into a pandemic and Zoom hasn't yet figured out how to add audio filters like canned applause. Would make keynotes so much better. You could also have "accidental cell phone ringing" or "unmuted youtube video" or...
There's no doubt there's a series of bobs and weaves here to avoid doing what is truly painful. But I can't look at this and not think - whatever the community of people thinking about this issue is doing, it's actually working. There's a long way to go no doubt. 9/n
People I write recommendation letters for: I am excited for your future plans and support them. And I'd love to know where you end up. But I don't want to badger you with questions so if you could drop me a note when your plans are set I'd greatly appreciate it :).
@deliprao @timnitGebru Exactly. Timnit's work has not only survived 'peer review', it has survived the test of time and actual citation. I'm writing a paper right now that cites datasheets, model cards and gender shades all in one section. Maybe some of these 'Blind'ers should actually try writing.
"There is no AI exemption to the laws on the books". This is a strong statement from the chairs of federal enforcement agencies. Kudos to @FTC, @USEEOC, @CFPB, and @CivilRights.
The weird thing about failure. There's an assumption that "once you get tenure/success/fame/seniority" you will be able to shed your insecurities/imposter syndrome/other ways in which your internal description of yourself is different from what you present as. #AcademicChatter 1/
Apparently my manager’s manager sent an email to my direct reports saying she accepted my resignation. I hadn’t resigned; I had asked for simple conditions first and said I would respond when I’m back from vacation. But I guess she decided for me :) that’s the lawyer speak.