@emilymbender@dair-community.social on Mastodon

@emilymbender

Followers
58,224
Following
2,044
Media
1,136
Statuses
30,287

Prof, Linguistics, UW // Faculty Director, CLMS // she/her // @emilymbender@dair-community.social & bsky // rep by @ianbonaparte

Joined July 2010
Pinned Tweet
@emilymbender
@emilymbender@dair-community.social on Mastodon
11 months
Mystery AI Hype Theater is now available in podcast form! @alexhanna and I started this project as a one-off, trying out a new way of responding to and deflating AI hype... and then surprised ourselves by turning it into a series.
10
77
273
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
52
747
3K
For those playing along at home, here's an "AI is sentient!" argument bingo card.
Tweet media one
90
583
3K
I'm happy to announce that our paper (with @timnitgebru & others) "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" has been accepted to #FAccT2021 >>
26
249
2K
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
26
482
2K
Okay, so that AI letter signed by lots of AI researchers calling for a "Pause [on] Giant AI Experiments"? It's just dripping with #AIhype. Here's a quick rundown. >>
40
588
2K
I refuse to be delegated to the "skeptics box" in someone else's framing of a debate. Here is my response to @stevenbjohnson's NYT Magazine article about LLMs and OpenAI. On NYT Magazine on AI: Resist the Urge to be Impressed
38
468
1K
If we ask "can AI do that?" we're asking the wrong question. A better question is: "Is this an appropriate use of automation?" Here the answer was obviously no, and that was clear ahead of time:
18
384
1K
There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru's wise comments on "the hierarchy of knowledge". >>
74
196
1K
Camera ready complete & available! On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 To appear in #FAccT2021 w/ @timnitGebru @mcmillan_majora and Shmargaret Shmitchell cc: @mmitchell_ai
34
349
1K
Dear Computer Scientists,
"Natural Language" is *not* a synonym for "English".
That is all.
-Emily
14
243
1K
Don't make me tap the sign (1/2)
Tweet media one
15
158
1K
This is drastic and might well be leading to physical harm. Yet another example of why answer boxes/featured snippets/etc break the connection of words to their context --- a connection which is critical for human understanding. #ethNLP #NLProc
@soft
soft
3 years
The Google search summary vs the actual page
Tweet media one
Tweet media two
142
8K
55K
13
304
940
Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge. Also Facebook AI: Be careful though, it just makes shit up. This isn't even "they were so busy asking if they could"—but rather they failed to spend 5 minutes asking if they could. >>
Tweet media one
Tweet media two
59
248
932
Great new profile of Dr. @timnitGebru in the Guardian. “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community” was how she put it all the way back in 2016.
10
290
929
Tweet media one
17
105
923
@nitashatiku As I am quoted in the piece: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them” >>
15
151
876
Imagine if the CEO of BP bragged that one in one thousand molecules flowing into the Gulf of Mexico came from the Deepwater Horizon oil rig. Pollution of the information ecosystem is not something to be proud of.
@sama
Sam Altman
3 months
openai now generates about 100 billion words per day. all people on earth generate about 100 trillion words per day.
1K
1K
15K
16
210
874
I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right" This always feels like a cop-out to me, and I think I've put my finger on why: >>
29
254
858
DO NOT USE SYNTHETIC TEXT EXTRUDING MACHINES IN ANY SITUATION WHERE THE CONTENT MATTERS. That absolutely includes health care. The idea that this is beneficial for people who we fail to provide adequate healthcare to is offensive. Repulsive even before you see who's behind it
@SashaMTL
Sasha Luccioni, PhD 🦋🌎✨🤗
9 months
LLMs shouldn't be used to give medical advice. Who will be held accountable when things inevitably go sideways? Also, this techno-saviorism crap is absolute BS -- helping "economically disadvantaged" people with AI is a myth.
Tweet media one
224
182
938
32
280
843
Some interesting 🙃 details from the underlying Nature article: 1. Data was logs maintained by the cities in question (so data "collected" via reports to police/policing activity). 2. The only info for each incident they're using is location, time & type of crime. >>
4
108
760
To co-sign, retweet:
Tweet media one
26
449
781
@emilymbender
@emilymbender@dair-community.social on Mastodon
11 months
TIL the @APA has guidelines on "how to cite ChatGPT". WTH APA? They do suggest asking it for sources (?!) and then checking & citing those. How about ... not treating text extruded by a synthetic media machine as suitable for research?
38
164
768
In summary, whenever someone is trying to sell predictive policing, always ask: 1. Why are we trying to predict this? (Answer seems to be so police can "prevent crime", but why are we looking to policing to prevent crime, rather than targeting underlying inequities?) >>
5
95
745
I find this reporting infuriating, so I'm going to use it to create a mini-lesson in detecting #AIhype. If you're interested in following this lesson, please read the article, making note of what you think sounds exciting and what makes you skeptical.
11
261
752
tl;dr blog post by new VP of AI at Halodi says the quiet parts out loud: "AI" industry is all about surveillance capitalism, sees gov't or even self-regulation as needless hurdles, and the movers & shakers are uninterested in building things that work. A thread:
15
200
736
@emilymbender
@emilymbender@dair-community.social on Mastodon
11 months
I'm so tired of this argument. The "AI doomers" are not natural allies of the folks who have been documenting the real-world harms of so-called AI systems: discrimination, surveillance, pollution of the information ecosystem, data theft, labor exploitation. >>
@erikbryn
Erik Brynjolfsson
11 months
@emilymbender @matteo_wong @TheAtlantic Why is so much effort focused on trying to set natural allies (people who are concerned about harms) against each other? What's wrong with being concerned about more than one kind of harm at a time?
12
2
31
26
191
712
Ugh -- I'm seeing a lot of commentary along the lines of "'stochastic parrot' might have been an okay characterization of previous models, but GPT-4 actually is intelligent." Spoiler alert: It's not. Also, stop being so credulous. >>
26
134
709
3. What about wage theft, securities fraud, environmental crimes, etc etc? See this "risk zones" map:
6
96
675
"There's no way a non-industry person can understand" On the contrary --- the folks in industry have shown themselves incapable of understanding (or caring about) the impacts of their tech on people. Regulation should protect rights and should be made by policymakers.
@MeetThePress
Meet the Press
1 year
WATCH: Former Google CEO @ericschmidt tells #MTP Reports the companies developing AI should be the ones to establish industry guardrails — not policy makers. “There’s no way a non-industry person can understand what’s possible.”
206
97
479
22
202
686
Hey #linguists let's make an #AcademicValentines thread. Here's a start (mine from 2018):
Roses are red
Violets are blue
Language variation is natural
Your speech is a dialect too
90
180
683
ML papers to English glossary: "ground truth" = something a human said
15
70
678
Ready for discussions of #ethNLP and ethics review at NLP/ML etc conferences? Don't forget your bingo card! (With @KarnFort1 on the TGV from Perpignan to Paris).
Tweet media one
15
187
680
Just today, I was asked how I felt watching people fall for AI hype. Exasperated. Exasperated is how I feel. Can we just NOT?
@rachelmetz
Rachel Metz
23 days
i asked SARAH, the World Health Organization's new AI chatbot, for medical help near me, and it provided an entirely fabricated list of clinics/hospitals in SF. fake addresses, fake phone numbers. check out @jessicanix_'s take on SARAH here: via @business
54
964
4K
15
155
689
When your business model relies on theft and you don't like proposed regulations that would expose that theft ... that's a pretty good sign the regulations are on the right track. #OpenAI #AIAct
9
237
679
MSFT lays off its responsible AI team The thing that strikes me most about this story from @ZoeSchiffer and @CaseyNewton is the way in which the MSFT execs describe the urgency to move "AI models into the hands of customers" >>
31
248
677
Journos working in this area need to be on their guard & not take the claims of the AI hypesters (doomer OR booster variety) at face value. It takes effort to reframe, effort that is necessary and important. We all must resist the urge to be impressed: 4/
3
104
674
There is 0 reason to expect that language models will achieve "near-human performance on language and reasoning tasks" except in a world where these tasks are artificially molded to what language models can do while being misleadingly named after what humans do.
39
121
668
Reading a critique of my paper with @timnitGebru (et al) which claims that if you do science (or scholarship) from the point of view that white supremacy is bad, then you have to make that point of view explicit lest any readers who disagree "mistake it for science". I can't even
18
77
657
Instead: They're about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources). >>
4
126
642
Please don't get distracted by the dazzling "existential risk" hype. If you want to be entertained by science fiction, read a good book or head to the cinema. And then please come back to work and focus on the real world harms and hold companies and governments accountable. /fin
44
87
645
@emilymbender
@emilymbender@dair-community.social on Mastodon
11 months
"Dylan Patel, chief analyst at the semiconductor research firm SemiAnalysis, estimated that a single chat with ChatGPT could cost up to 1,000 times as much as a simple Google search."
19
219
640
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading: /1
14
207
633
Pro-tip: If someone claims a model is trained on "the entire internet" they don't actually know what they're talking about and don't understand a) data b) dataset documentation and therefore c) how to reason about the model they're describing. >>
14
87
620
To all those folks asking why the "AI safety" and "AI ethics" crowd can't find common ground --- it's simple: The "AI safety" angle, which takes "AI" as something that is to be "raised" to be "aligned" with actual people is anathema to ethical development of the technology. >>
9
133
612
When the AI bros scream "Look a monster!" to distract everyone from their practices (data theft, profligate energy usage, scaling of biases, pollution of the information ecosystem), we should make like Scooby-Doo and remove their mask.
To co-sign, retweet:
Tweet media one
26
449
781
15
200
583
@emilymbender
@emilymbender@dair-community.social on Mastodon
10 months
Was there no one in that newsroom who could stand up and say "stop"?
@CiCiAdams_
CiCi Adams🌸
10 months
What is wrong with y’all?
Tweet media one
264
4K
40K
15
97
593
Folks, I encourage you to not work for @OpenAI for free:
Don't do their testing
Don't do their PR
Don't provide them training data
Oh look, @openAI wants you to test their "AI" systems for free. (Oh, and to sweeten the deal, they'll have you compete to earn GPT-4 access.)
10
46
223
10
226
586
2022 me would like to warn 2019 me that ML bros' arguments about language models "understanding" language were going to mutate into arguments about them being "sentient" and having "internal monologues" etc.
10
58
597
The new "AI is going to kill us all!!1!" letter is a wall of shame—where people are voluntarily adding their own names. We should be concerned by the real harms that corps and the people who make them up are doing in the name of "AI", not about Skynet.
16
125
578
3. A prediction was counted as "correct" if a crime (by their def) occurred in the (small) area on the day of prediction or one day before or after. >>
3
30
560
Asked to talk to TAs about balancing teaching & research, I talked about balancing teaching & research ... and life. http://t.co/QJanMxRMEi
26
118
541
I wonder if we could put together an AI hype tracker or AI hype incident database + visualization, that could help expose the corporate & other motives behind a lot of this. >>
27
62
548
Hey #NLProc (and AI folks working on language), can we just ... not?
Tweet media one
10
92
544
@emilymbender
@emilymbender@dair-community.social on Mastodon
11 months
"I thought ChatGPT was a search engine". It is NOT a search engine. Nor, by the way are the version of it included in Bing or Google's Bard. Language model-driven chatbots are not suitable for information access. >>
@innercitypress
Inner City Press
11 months
Judge Castel: Did you ask Chat GPT what the law was, or only for a case to support you? It wrote a case for you. Do you cite cases without reading them?
Schwartz: No.
Judge Castel: What caused your departure here?
Schwartz: I thought Chat GPT was a search engine
7
103
640
17
119
538
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
2
41
536
Just turned down an invitation to do a "pro bono" (their words!) talk to a private conference for VCs and CTOs. I'm all for making academic conferences widely accessible, including keeping costs low, but for an industry gig? No way.
13
26
537
@alexhanna "AI" is not "good at writing"—it's designed to produce plausible-sounding synthetic text. Writing is an activity that people do as we work to refine our ideas and share them with others. LLMs don't have ideas. 14/
Tweet media one
6
97
531
I think the main lesson from this week's AI sentience debate is the urgent need for transparency around so-called AI systems. In a bit more detail:
25
185
521
Linguistics as a field has a lot to contribute to better understanding what large language models can and can't do and yet many don't think to turn to linguists (or don't even really know what linguists do) when trying to evaluate claims about this technology. >>
14
97
523
Can we talk about the phrase "top AI researchers" and how it devalues the expertise of anyone whose primary focus isn't on the internals of the systems themselves?
Tweet media one
23
76
513
Hey @sciencefocus we've been down this road already. No it effing can't. It is beyond irresponsible to produce headlines like this. >>
Tweet media one
7
120
516
2. What happens when police are deployed somewhere with the "information" that a crime is about to occur? >>
4
44
503
Here's a cute example, due to Itamar Turner-Trauring (@[email protected]), who observes that Google gave bad results which were written about in the news—which the new GPT-Bing used as reliable answers. Autogenerated trash feeding the next cycle, with one step of indirection.
Tweet media one
10
151
511
You know what's most striking about this graphic? It's not that mentions of people/cities/etc from different continents cluster together in terms of word co-occurrences. It's just how sparse the data from the Global South are.
@wesg52
Wes Gurnee
7 months
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales? In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2!
183
1K
6K
9
99
511
Nothing we currently have, not LaMDA, not GPT-3, not DALL-E, none of it, is actually "an AI" in the sense that that phrase evokes for most people. I'm one of the interviewees in this podcast and I try to make that clear.
@BBC_CurrAff
BBC Current Affairs
2 years
A Google engineer claims an AI chatbot called LaMDA is sentient - though Google says there's no evidence to support this. But what exactly is LaMDA and how does it work? Is AI capable of felt experience? Find out more on The Inquiry podcast.
3
9
23
17
101
510
@sama @OpenAI That is, the very people in charge of building #ChatGPT want to believe SO BADLY that they are gods, creating thinking entities, that they have lost all perspective about what a text synthesis machine actually is. >>
20
94
500
Ever found the discourse around "intelligence" in "A(G)I" squicky or heard folks pointing out the connection w/eugenics & wondered what that was about? History of it all can be found in this excellent talk by @timnitGebru (w/ co-author @xriskology)
10
170
500
OpenAI: We refuse to tell you what's in the training data for GPT-4, for "safety" Also OpenAI: We're throwing 10x $100k at "experiments in democratic process" for determining rules AIs should follow. You don't get to regulate your own business.
11
121
497
So the problem I see with the "FDA for AI" model of regulation is that it posits that AI needs to be regulated *separately* from other things. >>
16
114
501
Really honored to be here alongside @rajiinio @jovialjoy @timnitGebru @mmitchell_ai @Abebab @mer__edith and others like Ted Chiang
@TIME
TIME
8 months
TIME's new cover: The 100 most influential people in AI
Tweet media one
431
1K
3K
23
64
487
I’ve picked up a bunch of new followers in the past few days, and I suspect many of you are here because you’re interested in what I might have to say about Google and Dr. @TimnitGebru. So, here’s what I have to say:
5
96
482
This is a particularly stark example of where the ML/big data approach of just grabbing everything that is accessible ignores various norms, structures, social contracts. Importantly, it's not the only such example.
9
129
472
Don't make me tap the sign (2/2)
Tweet media one
17
83
479
Are there any other fields that simultaneously conceive of themselves as 'solving the world's problems' and also see no need for any due diligence around mitigating potential harmful impacts of what they do/build, or is it just CS?
45
75
474
Also, it's kind of hilarious (lolsob) that OpenAI is burning enormous amounts of energy to take machines designed to perform calculations precisely to make them output text that mimics imprecisely the performance of calculations… & then deciding that *that* is intelligent. 16/
6
87
477
"We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems" w/ @alexhanna in @scientificamer
10
188
468
New @Meta privacy policy just dropped. "We sell the ability to target advertising based on information we gather about you", but somehow that's consistent with "We do not sell and will not sell your information". Specific sense of "sell" or "information" or both?
Tweet media one
Tweet media two
Tweet media three
22
141
467
4. The authors acknowledge some of the ways in which predictive policing has "stirred controversy" but claim to have "demonstrate[d] their unprecedented ability to audit enforcement biases". >>
4
26
446
On professional societies not giving academic awards to harassers, "problematic faves", or bigots, a thread: /1
3
126
462
False. Because even if a person tells you something false, they're still saying it for a reason and you can learn from that. But what's the value in stochastic remixes of unknown training data tuned towards "this makes raters happy"?
@Noahpinion
Noah Smith 🐇🇺🇸🇺🇦
1 year
In general, ChatGPT is more useful in terms of pointing me in the direction of ideas and facts than in giving me the final word on them. It's like asking a human friend to explain stuff -- sometimes they get it wrong, but you get exposed to knowledge.
6
13
167
16
68
457
Those "enforcement biases" have to do with sending more resources to respond to violent crime in affluent neighborhoods. They claim that this would allow us to "hold states accountable in ways inconceivable in the past". >>
3
36
427
Define 'we'. Also, I knew that CS types had short memories, but really?
@fchollet
François Chollet
1 month
We really haven't thought through the long-term negative externalities of LLMs.
67
58
761
7
76
449
Just sayin': We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about "too powerful AI". >>
5
94
438
The tendency of AI researchers to equate the form of an artifact with its meaning is seemingly boundless. A college degree is not comprised of essays and exam papers, even if such elements play a key role in our evaluation of human progress towards one.
@kate_saenko_
Kate Saenko
2 years
Large language models can now write essays and solve exam questions in undergraduate STEM courses. What is next, the first AI to get a college degree?
9
3
26
9
69
442
The editor replied quickly and took out the quote ... and admitted that they'd prompted an LLM to generate the piece. I've got a few takeaways from this experience:
Has anyone heard of the news site Biharprabha? They ran an article today with a fabricated quote attributed to me. I've emailed the editor in the hopes of getting it taken down, but we'll see.
2
6
60
5
139
443
We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers. So, I guess we need a thread of why this is a bad idea: >>
13
129
442
Whose labor is being exploited? How is mass surveillance being extended and normalized? What are the impacts to the natural environment and information ecosystem? 26/
2
57
439
When you're Associate Director of something called "Human-Centered Artificial Intelligence" but the $$ all comes from Silicon Valley so you feel compelled to retweet the clown suggesting that the poors should have LLM-generated medical advice instead of healthcare.
Tweet media one
15
69
437
Do we have a name for this rhetorical move/fallacy?
A: AI Hype! My system can do X!
B: No, it can't. Here's why.
A: So you think no computer could ever do X?
-or-
A: But what about future versions of it that could do X?
It's super common, and it feels like it should be named.
77
60
436
Policymakers: Don't waste your time on the fantasies of the techbros saying "Oh noes, we're building something TOO powerful." Listen instead to those who are studying how corporations (and govt) are using technology (and the narratives of "AI") to concentrate and wield power. >>
3
122
432
Step 1: OpenAI chief scientist says something ridiculous on Twitter (to his 81.9k followers):
@ilyasut
Ilya Sutskever
2 years
it may be that today's large neural networks are slightly conscious
453
560
3K
11
73
433
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
8
144
425