emilymbender
@emilymbender.bsky.social

Followers: 57K · Following: 48K · Media: 1K · Statuses: 30K

Prof, Linguistics, UW // Faculty Director, CLMS // she/her // @[email protected] & bsky // rep by @ianbonaparte

Joined July 2010
@emilymbender
@emilymbender.bsky.social
2 years
Mystery AI Hype Theater is now available in podcast form! @alexhanna and I started this project as a one-off, trying out a new way of responding to and deflating AI hype, and then surprised ourselves by turning it into a series.
buzzsprout.com
Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation....
@emilymbender
@emilymbender.bsky.social
9 months
Sunday's thread on why chatbots & LLMs are a bad solution for information access, with replies to the most common types of counterarguments I encountered in my mentions.
buttondown.com
By Emily This post started off as a thread I wrote and posted across social media on Sunday evening. I'm reproducing the thread (lightly edited) first and...
@emilymbender
@emilymbender.bsky.social
9 months
Sunday's thread as a newsletter post, with replies to the most common types of counterarguments I encountered in my mentions.
buttondown.com
By Emily This post started off as a thread I wrote and posted across social media on Sunday evening. I'm reproducing the thread (lightly edited) first and...
@emilymbender
@emilymbender.bsky.social
9 months
The chatbot interface invites you to just sit back and take the appealing-looking AI slop as if it were "information". Don't be that guy. /fin.
@emilymbender
@emilymbender.bsky.social
9 months
But now more than ever we all need to level up our information access practices and hold high expectations regarding provenance --- i.e., the citing of sources. >>.
@emilymbender
@emilymbender.bsky.social
9 months
Finally, the chatbots-as-search paradigm encourages us to just accept answers as given, especially when they are stated in terms that are both friendly and authoritative. >>.
@emilymbender
@emilymbender.bsky.social
9 months
The case of the discussion forum has a further twist: Any given piece of information there is probably one you'd want to verify from other sources, but the opportunity to connect with people going through similar medical journeys is priceless. >>.
@emilymbender
@emilymbender.bsky.social
9 months
If instead you get an answer from a chatbot, even if it is correct, you lose the opportunity for that growth in information literacy. >>.
@emilymbender
@emilymbender.bsky.social
9 months
If you have the underlying links, you have the opportunity to evaluate the reliability and relevance of the information for your current query --- and also to build up your understanding of those sources over time. >>.
@emilymbender
@emilymbender.bsky.social
9 months
Imagine putting a medical query into a standard search engine and receiving a list of links including one to a local university medical center, one to WebMD, one to Dr. Oz, and one to an active forum for people with similar medical issues. >>.
@emilymbender
@emilymbender.bsky.social
9 months
That sense-making includes refining the question, understanding how different sources speak to the question, and locating each source within the information landscape. >>.
@emilymbender
@emilymbender.bsky.social
9 months
Setting things up so that you get "the answer" to your question cuts off the user's ability to do the sense-making that is critical to information literacy. >>.
@emilymbender
@emilymbender.bsky.social
9 months
But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they'd still be a terrible technology for information access. >>.
@emilymbender
@emilymbender.bsky.social
9 months
Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact-check the 5%. >>.
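The risk argument in this post can be sketched as a toy calculation: wrong answers only cause harm when they go unchecked, and trust (hence not checking) plausibly rises with accuracy. The query volume and trust rates below are illustrative assumptions, not figures from the thread.

```python
# Toy model of the "95% right is more dangerous than 50% right" point.
# A wrong answer only does damage if the user accepts it unverified;
# the trust_rate values here are invented for illustration.

def unchecked_errors(queries, accuracy, trust_rate):
    """Expected number of wrong answers accepted without verification."""
    return queries * (1 - accuracy) * trust_rate

# A system wrong half the time invites constant double-checking...
sloppy = unchecked_errors(queries=1000, accuracy=0.50, trust_rate=0.05)
# ...while one wrong only 1 time in 20 tends to be taken at face value.
slick = unchecked_errors(queries=1000, accuracy=0.95, trust_rate=0.90)

print(sloppy, slick)
# sloppy ≈ 25, slick ≈ 45: under these assumed trust rates, the more
# accurate system exposes the user to more unchecked wrong answers.
```

The specific numbers are arbitrary; the point is only that the comparison flips once you weight errors by how likely they are to slip through.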
@emilymbender
@emilymbender.bsky.social
9 months
If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. >>.
@emilymbender
@emilymbender.bsky.social
9 months
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words. >>.
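The post above characterizes LLMs as statistical models of word-form distributions. A minimal bigram model (a drastically simplified stand-in, with an invented toy corpus) shows the mechanism: it produces fluent-looking sequences purely from co-occurrence frequencies, with no representation of whether any output is true.

```python
# A toy bigram "language model": it counts which word follows which in a
# tiny invented corpus, then samples plausible-looking continuations.
# It models only the distribution of word forms, with no notion of truth.
import random
from collections import defaultdict

corpus = (
    "the treatment is safe . the treatment is experimental . "
    "the study is small . the study is ongoing ."
).split()

# Record every observed successor of each word.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, n=6, seed=0):
    """Sample a plausible-sounding word sequence from the bigram counts."""
    random.seed(seed)
    words = [start]
    for _ in range(n):
        choices = bigrams.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the"))
# The output is always locally fluent, but whether it says the treatment
# is "safe" or "experimental" is decided by frequency, not by facts.
```

Real LLMs replace bigram counts with neural networks over vast corpora, but the objective is the same kind of thing: output sequences that are probable given the training distribution.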
@emilymbender
@emilymbender.bsky.social
9 months
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access. A thread, with links: >>.