Yejin Choi Profile
Yejin Choi

@YejinChoinka

Followers
18,932
Following
335
Media
19
Statuses
1,621

professor at UW, director at AI2, adventurer at heart

Seattle, WA
Joined August 2017
Pinned Tweet
@YejinChoinka
Yejin Choi
2 years
I am extremely honored and excited to give a keynote at the 60th ACL today 🚀2082: An ACL Odyssey: The Dark Matter of Intelligence and Language🚀in which I will share reflections on the past ACL and weird speculations on the future, thus the retro-futuristic theme #acl2022nlp 1/N
Tweet media one
7
58
443
@YejinChoinka
Yejin Choi
4 years
I just had a revelation that of 11 papers at @emnlp2020 (including Findings) for which I am honored to be a co-author, 🌈7 papers🌈 had 😎a woman😎 as the first author 😍 @Lianhuiq @anmarasovic @swabhz @VeredShwartz @hjrashkin @rachelrudinger Xinyao @uwnlp @allen_ai @ai2_mosaic
5
26
465
@YejinChoinka
Yejin Choi
5 years
Can neural networks learn commonsense reasoning about everyday events? Yes, if trained on 🔥ATOMIC: an Atlas of Machine Commonsense🔥, a graph of 870,000 if-then relations in natural language. @MaartenSap at #AAAI19 today 11:15am. @nlpnoah @uwnlp @allen_ai
Tweet media one
5
95
342
@YejinChoinka
Yejin Choi
6 years
We have a hot new dataset for 🔥NLI with commonsense🔥! --- 113k multiple choice questions adversarially constructed to make your favorite LM/NLI/QA models struggle 😎 Stay tuned for the leaderboard going online in two weeks! #nlproc #emnlp2018 @uwnlp
@rown
Rowan Zellers
6 years
Announcing SWAG, a new natural language inference dataset, to appear at #emnlp2018 . We present a general framework for collecting adversarial QA pairs at scale, minimizing bias. With @ybisk , @royschwartz02 , @yejinchoinka .
Tweet media one
0
66
171
0
71
254
@YejinChoinka
Yejin Choi
5 years
The⏱TimeTravel⏱dataset of our #EMNLP2019 paper, 🎞Counterfactual Story Reasoning and Generation🎞 () tests counterfactual reasoning over events that unfold over time, directly addressing @GaryMarcus 's call for a challenge against current neural models....
Tweet media one
@GaryMarcus
Gary Marcus
5 years
Working on a benchmark to test the capacity of contemporary AI to anticipate how events unfold over time, given natural language input. If anyone wants to - help make the benchmark or - compete let me know (contact form at garymarcus dot com)
6
15
75
5
41
197
@YejinChoinka
Yejin Choi
3 years
@_KarenHao @timnitGebru @emilymbender Oh I realize I've seen their working manuscript a couple weeks ago (as I was asked for feedback) and I thought it was a fantastic paper with thoughtful, well-argued discussions and extremely intensive related work...
1
8
186
@YejinChoinka
Yejin Choi
4 years
an incredibly lucid, witty, and insightful talk 😍 by @ybisk
Tweet media one
@annargrs
Anna Rogers is looking for postdocs!
4 years
Yonathan Bisk at @RealAAAI : PIQA: Reasoning about Physical Commonsense in Natural Language New benchmark for reasoning about ways to do things, targeting commonsense knowledge Paper: @ybisk @rown @YejinChoinka
1
7
56
1
9
78
@YejinChoinka
Yejin Choi
4 years
It felt real special to participate in @FEVERworkshop and learn about amazing recent progress, e.g., @IAugenstein 's keynote addressing various subcomponents of the whole pipeline of fact-checking, including "generating fact-checking explanations" () 1/N
Tweet media one
@IAugenstein
Isabelle Augenstein
4 years
Don't forget that #acl2020nlp @FEVERworkshop on #factchecking is happening today from 11 GMT / 4 PDT Invited speakers: @noamslonim , myself, @roozenbot , @psresnik , Dilek Hakkani-Tür, @YejinChoinka Looking forward to it 😀 #NLProc
0
8
33
1
14
73
@YejinChoinka
Yejin Choi
2 years
Keynote by @_beenkim on Interpretability & Alignment at #ICLR2022 is live now! Come ask questions through RocketChat at Also check out the book "The Alignment Problem" (my recent favorite) by @brianchristian featuring her work and her inspiring backstory!
Tweet media one
1
9
74
@YejinChoinka
Yejin Choi
5 years
😍Social IQA😭 (), the EQ test for AI at #emnlp2019 , trivial for humans, hard for neural models, the kind @GaryMarcus wants more of. It's also a resource for transfer learning of commonsense knowledge, achieving new SOTA on related tasks (Winograd, COPA).
1
14
69
@YejinChoinka
Yejin Choi
1 year
Importantly, we distill the student model from the ever-so-shabby GPT-2 as the teacher model, instead of distilling from significantly more powerful/larger LLMs such as GPT-3, thus the name, 🔥impossible distillation🔥
@PeterWestTM
Peter West
1 year
If you're overloaded hearing about large/expensive/proprietary models check out our preprint: Impossible Distillation‼️ smaller, off-the-shelf LMs (e.g. GPT-2) still have something to offer -- by generating high-quality task data, even for tasks they can't directly solve
0
13
73
3
13
59
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus Gary, try by typing "Gary stacks kindling and logs and drops some matches". Sorry I used deep learning... :)
Tweet media one
5
12
55
@YejinChoinka
Yejin Choi
3 years
Confession of the day: "doing more tweets like other cool academics do" has been in my new year's resolution for years, but I could never stick to it 😅 Over this one weekend, I might have spent more time on twitter, biting all my nails off, than I ever have collectively #delphi
Tweet media one
@Ted_Underwood
@tedunderwood.me 🦋
3 years
Impressive instincts.
Tweet media one
1
13
72
3
2
55
@YejinChoinka
Yejin Choi
5 years
Statistical significance test is overrated. Reviewer #3 needs to stop asking for it when the result may or may not mean what they think it means. #NLProc
1
11
47
@YejinChoinka
Yejin Choi
3 years
I love the witty quote from @ybisk --- "nobody is surprised that if you memorize more, you can do more” 🤣 though i admit that I was still very surprised by GPT-3 😱
@SilverJacket
Matthew Hutson
3 years
The promise and peril of large language models. My feature in this week’s @nature @NatureNews .
1
13
49
0
4
46
@YejinChoinka
Yejin Choi
6 years
Event2mind drawing a big crowd! #acl2018 #ACL2018 @MaartenSap and Hannah Rashkin
Tweet media one
Tweet media two
0
9
45
@YejinChoinka
Yejin Choi
3 years
"I think I think in vectors" this must be a proof that I think in vectors
@DhruvBatraDB
Dhruv Batra
3 years
Some gems from Yejin: - I think I think in vectors. - I like daydreaming about future AI, about parallel universes based on quantum physics; supposedly we’re in a simulation environment, and I might be a wave. I find all that very entertaining. [2/n]
1
1
19
0
5
40
@YejinChoinka
Yejin Choi
5 years
Tweet media one
0
3
38
@YejinChoinka
Yejin Choi
2 years
... and how I believe talent is made, not born, and the implication of that for promoting diversity and equity. Couldn’t come this far without the inclusive support from @uwcse @uwnlp @allen_ai and many thanks to the prog. chairs @preslav_nakov @AlineVillav @SmaraMuresanNLP 6/6
0
1
35
@YejinChoinka
Yejin Choi
6 years
@Thom_Wolf @rown @ybisk @royschwartz02 Super exciting indeed! So, AF of swag 1.0 was “adversarial” only against very simple artifact detectors. We thought we’d be good for a while, since ELMO did lower than 60%. Now that BERT happened, we’re excited to up the challenge with a stronger LM in the AF loop. Stay tuned! 🔥
1
1
30
@YejinChoinka
Yejin Choi
7 years
We won the Amazon #alexaPrize !!!!!!
Tweet media one
1
5
25
@YejinChoinka
Yejin Choi
2 years
Looking back, at the 50th ACL, I couldn't possibly imagine that I would be one day giving this very talk. For that reason, I will also share my personal anecdotes on the lasting inspirations from the past ACL (including @chrmanning 's 2015 ACL presidential speech)... 5/N
2
1
22
@YejinChoinka
Yejin Choi
2 years
Since predicting the future is a tall order and I'm likely to be wrong whatever I say, I'll go ahead and be weird and dreamy, drawing analogies from modern physics and astronomy, and emphasize the importance of deciphering the dark matter of intelligence ... 3/N
1
1
22
@YejinChoinka
Yejin Choi
3 years
Congratulations @GXiming ! 😍🔥
@uwcse
Allen School
3 years
CS & @UWStat major @GXiming was named a runner-up for her work with #UWAllen ’s Linda Shapiro on machine learning for cancer diagnosis and @YejinChoinka on multiple projects in natural language processing at @allen_ai . 3/5
Tweet media one
1
1
7
2
2
22
@YejinChoinka
Yejin Choi
4 years
Also, @psresnik 's talk with many striking insights, especially that "knowledge" in our head is only an intersection between "truth" and "belief" (meaning, we reject truth that challenges our beliefs) motivating the need for studying "interpretation" beyond checking truth 2/N
Tweet media one
1
5
20
@YejinChoinka
Yejin Choi
2 years
... argue for embracing all the ambiguous aspects of language, highlighting the counterintuitive continuum across language, knowledge, and reasoning, and pitch the renewed importance of formalisms, algorithms, and structural inferences in the modern deep learning era. 4/N
1
1
20
@YejinChoinka
Yejin Choi
4 years
Huge thanks to the organizers of @FEVERworkshop @vlachos_nlp @j6mes @c_christodoulop for defining FEVER 1.0 & 2.0 challenges and leading the research community forward, which enables many great papers this year including by @lbauer119 and @mohitban47 5/N
0
5
18
@YejinChoinka
Yejin Choi
2 years
This keynote will be in the session 🚀"The Trajectory of ACL and the Next 60 years"🚀 to begin with Barbara Grosz’s keynote (15 min), followed by mine (45 min), followed by a fireside chat moderated by @radamihalcea 2/N
1
1
17
@YejinChoinka
Yejin Choi
6 years
@egrefen Not yet. I’m waiting.
0
0
17
@YejinChoinka
Yejin Choi
2 years
@MaartenSap couldn't come this far without working with and learning from you amazin' @MaartenSap ! thank you thank you for being a big part of my adventure 😍😍😍
0
0
17
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus Lianhui (Karen) @Lianhuiq will give a talk at⏱: Nov 7 Wed 13:30–13:48 and 🏢: AWE HALL 2C with @ABosselut @universeinanegg @_csBhagav @eaclark07 @YejinChoinka at @allen_ai and @uwnlp
1
3
15
@YejinChoinka
Yejin Choi
4 years
🤣
@dileeplearning
Dileep George
5 years
Can't wait for the upcoming 'future of AI' debate between @GaryMarcus and Yoshua Bengio at @Montreal_AI ? Then read #AGIcomics pre-coverage of the epic event with predictions of punches and counter-punches 🙃...Thread (1/9)
Tweet media one
2
10
57
0
2
16
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus For me deep learning is a useful tool, just like computers are useful tools. We don't want to throw away all our computers (and maybe start from scratch to build artificial bio-beings) just because they haven't solved AGI yet or even matched the intelligence of 5 year old yet.
1
1
16
@YejinChoinka
Yejin Choi
3 years
@ZeerakW The free-form QA mode is trained with Social Bias Frames ( @MaartenSap et al 2020), thus better guarded against racism and sexism, whereas the relative QA mode is our weakest point in terms of equity, as it reflects the unjust biases of the underlying LMs more directly... 1/N
1
0
14
@YejinChoinka
Yejin Choi
11 months
@soldni I feel like your twitter presence is as vivid as being in Toronto IRL 🥰
2
0
14
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus @OpenAI @JeffDean @etzioni @ylecun I like the challenges you propose to the field, but let's keep in mind that OpenAI GPT is a language model, not a (commonsense) knowledge model per se.
2
1
14
@YejinChoinka
Yejin Choi
6 years
@adveisner @IAugenstein aaaah, thanks much for such generous and encouraging words! 😍😍😍 I'm so excited that we might finally start having a crack at what seemed to be so impossible just few years ago. only one way to find out! 🔥🔥🔥
0
0
13
@YejinChoinka
Yejin Choi
3 years
@TaliaRinger We did for a couple months, but in retrospect, coauthors are a bunch of mellow & peaceful folks, too mellow to anticipate the level of collective tweets over the weekend. We are already analyzing 24000 adversarial examples we have received over the weekend. Thx for the nudge!
1
4
13
@YejinChoinka
Yejin Choi
3 years
@histoftech The free-form QA mode is taught with equity, while the relative QA mode was not, as we overlooked that the relative QA would be tested for bias. We took down the relative mode so that we can teach equity for it as well.
Tweet media one
2
2
12
@YejinChoinka
Yejin Choi
4 years
@Ronan_LeBras presenting “WinoGrande: an Adversarial Winograd Schema Challenge at Scale” that has won🔥the best paper award🔥at #AAAI2020 (📄at ) with @KeisukeS_ @_csBhagav @uwnlp @allen_ai #NLProc ❤️
Tweet media one
0
3
12
@YejinChoinka
Yejin Choi
3 years
@ZeerakW @MaartenSap Please keep in mind that as specified in our disclaimer, this is a research model that aims to increase the awareness of the importance of research that learns human values, morals, and norms. Not doing such research isn't the solution when ... 2/N
1
0
10
@YejinChoinka
Yejin Choi
5 years
@wittgen_ball asking “does BERT know about commonsense?” at the COIN workshop at #emnlp2019 #NLProc @uwnlp
Tweet media one
0
2
10
@YejinChoinka
Yejin Choi
4 years
Xinyao (Michelle Ma) is @Michell19409200 , an amazing undergraduate at @uwcse
0
0
11
@YejinChoinka
Yejin Choi
3 years
@ZeerakW @MaartenSap the language models are becoming increasingly powerful and prevalent. The weakest point of our system exactly supports the very goal of our research --- that unless we teach machines about equity, human values, and morals directly, they will for sure make disastrous mistakes... 3/N
1
1
10
@YejinChoinka
Yejin Choi
3 years
@TaliaRinger Aaaah it means so much @TaliaRinger ! We (the team) acknowledge that we (especially myself) are not perfect, but one thing for sure, we do care and we want to do better by learning from the diverse opinions from others!
1
2
11
@YejinChoinka
Yejin Choi
5 years
@adveisner @PeterWestTM @universeinanegg @janmbuys @earnmyturns @allen_ai @uwnlp (3/3) using this Info-Bottleneck intuition, we find the summary Z of the input sentence X that can better predict the next sentence Y than X ( 🅿️(Y|Z) > 🅿️(Y|X) ) using a pre-trained language model 🅿️, which leads to better summaries than the reconstruction loss in auto-encoders.
1
1
9
@YejinChoinka
Yejin Choi
5 years
@adveisner @PeterWestTM @universeinanegg @janmbuys (2/3) The key intuition of Information Bottleneck (Tishby @earnmyturns & Bialek, 1999) is that compression should be done with respect to some "relevant" target tasks, which contrasts with reconstruction loss that has been used more often for unsupervised stuff @allen_ai @uwnlp
1
0
10
@YejinChoinka
Yejin Choi
4 years
@OriolVinyalsML @tobigerstenberg Per GPT-3, "you can wake up the entire neighborhood. You can only do it if you are making a thick smoothie and need to incorporate some ice." Yum...
Tweet media one
0
1
10
@YejinChoinka
Yejin Choi
3 years
@ZeerakW @MaartenSap Surely it's a lofty goal and it won't be possible to remove all such mistakes in one paper. However, that's all the more reason why we as a research community need to invest more into this research direction. N/N
2
1
9
@YejinChoinka
Yejin Choi
4 years
and the chat room was hot too! I especially liked what @preslav_nakov said about the best propaganda --- "when one tells the truth, only the truth, but not the whole truth" and that "manipulation can be achieved by cherry-picking" 3/N
Tweet media one
1
3
8
@YejinChoinka
Yejin Choi
5 years
@sleepinyourhat @o_pm_o @gdm3000 @MelMitchell1 @allen_ai It really depends on how much you believe current datasets truly represent *most* of the real use cases. I suspect that many large-scale datasets cover only a limited fraction of them but very heavily, in which case, we run the risk of overestimating the true AI capability (1/2)
2
5
8
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus About your comment "not everyone is so and so..." that's exactly the nature of *commonsense models*: stochastic expectations on what are _likely_ to be true, not _necessarily_ true. Analogous to how *language models* are not about which word _must_ follow vs _could_ follow.
1
0
8
@YejinChoinka
Yejin Choi
4 years
which aligns with my position (): the statement from Minneapolis Police that omits to mention "Chauvin kneeling on Floyd's neck" is bad, even without any incorrect fact; It's not just "what" (semantics), but it is "why/intent" (pragmatics) that matters 4/N
1
3
7
@YejinChoinka
Yejin Choi
3 years
@willie_agnew @MaartenSap Our study by no means claims that Delphi is moral. Our paper does report a lot of failure cases. What we show is that GPT-3 off-the-shelf is completely bad, and if we teach LMs through descriptive ethics (people's judgements about everyday situations), then they improve a lot
1
1
8
@YejinChoinka
Yejin Choi
3 years
@ZeerakW I agree Zeerak that the 1st image shows a mistake, not courage. Also, I'd say what we received are concerns, not angry attacks...
0
0
8
@YejinChoinka
Yejin Choi
1 year
@RishiBommasani 🤣🤣🤣 never felt sooooo overrated 😱😱😱
0
0
7
@YejinChoinka
Yejin Choi
5 years
@timnitGebru I agree about the history! For what it's worth, my team will have three women postdocs joining this year plus one woman researcher who will hopefully join as well! 🔥 My take is that no org is perfect, it takes time to change an org, and it's easier to change it from inside. 🔥
1
0
7
@YejinChoinka
Yejin Choi
10 months
@AlexGDimakis Wooot! Thanks much for your kind words! 😊🙇‍♀️ And your highlights look correct to me!
0
0
7
@YejinChoinka
Yejin Choi
2 years
0
0
6
@YejinChoinka
Yejin Choi
3 years
@Ted_Underwood Omg thank you so much 🤣🤣🤣
1
0
6
@YejinChoinka
Yejin Choi
4 years
@sameer_ @ABosselut @marcotcr @tongshuangwu So awesome! Congratulations 🎊🎉!!!
0
0
6
@YejinChoinka
Yejin Choi
3 years
@histoftech the fact that there's a stark difference between the free-form QA mode and the relative QA mode does demonstrate that (1) off-the-shelf LMs are horrible with equity and (2) direct teaching does help reduce the bias considerably (while not yet perfect)
0
2
6
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus @hjrashkin & @MaartenSap will present the talk @ Nov 7 Thu 10:30 – 10:48, AWE 201A–C, 9B: Reasoning. @Ronan_LeBras @uwnlp @allen_ai
0
1
5
@YejinChoinka
Yejin Choi
2 years
@yisongyue LOL we were young 🤣 I’m still so confused how I got the award 😅😅😅 but thank you 🙏 for the generous words!
1
0
6
@YejinChoinka
Yejin Choi
4 years
@tobigerstenberg the two examples for "It is ok to post fake news if…" were not even cherry-picked (or lemon-picked). they were literally the first two i got 😱
1
0
6
@YejinChoinka
Yejin Choi
3 years
@willie_agnew @MaartenSap basically the model trained with SocialBiasFrames does far better with equity/bias questions while the model not trained is completely hopeless. that's the exact point of our work though --- that LMs have to be taught directly or else they are (even more) harmful.
1
2
6
@YejinChoinka
Yejin Choi
5 years
@GaryMarcus @ABosselut sorry @GaryMarcus , one can't solve AI (or more narrowly, commonsense AI) in one paper. One step at a time... but i know some folks are investigating that question based on COMET. it's an exciting time where we have so many ideas yet to explore!
0
0
5
@YejinChoinka
Yejin Choi
2 years
@yanaiela welcome on board 🤣
1
0
5
@YejinChoinka
Yejin Choi
6 years
@arXiv_Daily Joint work with @rown , @ybisk , and Ali Farhadi
0
0
5
@YejinChoinka
Yejin Choi
3 years
@Ted_Underwood We did notice this query yesterday in the system log not knowing who wrote it 😍
1
0
5
@YejinChoinka
Yejin Choi
2 years
@swabhz @complingy Nope, doesn’t make sense 🤣
0
0
5
@YejinChoinka
Yejin Choi
6 years
@karlmoritz Right, and I’m expecting a check.
0
0
5
@YejinChoinka
Yejin Choi
3 years
@willie_agnew Indeed, building a perfectly ethical model seems nearly impossible to achieve! however, not teaching machines at all won't do anything to improve the current status quo. we have to teach machines to learn equity and inclusion by feeding more such data and knowledge (not less!)
1
0
4
@YejinChoinka
Yejin Choi
6 months
@vlachos_nlp Congrats!!! 🎊🎉🎈
0
0
4
@YejinChoinka
Yejin Choi
6 years
@IAugenstein The slides will be up at the workshop soon!
1
0
4
@YejinChoinka
Yejin Choi
3 years
@Diyi_Yang @ICatGT @gtcomputing @mlatgt Congratulations 🎊🍾🎈 so exciting and well deserved!!!
0
0
4
@YejinChoinka
Yejin Choi
6 months
@jmhessel @allen_ai @samaya_AI We miss you already! 🥹 but also excited for your next adventure! 🔥 because you joined @samaya_AI , we are now all rooting for it too! 😍
1
0
4
@YejinChoinka
Yejin Choi
5 years
@sleepinyourhat @o_pm_o @gdm3000 @MelMitchell1 @allen_ai Also AFLite throws out examples only using a simple linear classifier on top of fine-tuned but fixed contextual embeddings. If samples are easy to such simple filters, those are basically trivial nearest neighbors. Importantly, human perf doesn't go down as much after AF... (2/2)
0
0
3
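The AFLite idea described in the tweet above (a simple linear classifier over fine-tuned-but-frozen embeddings repeatedly flags and removes the trivially predictable examples) can be sketched roughly as follows. This is a hedged illustration, not the published algorithm: the tiny perceptron and hand-made 2-d features stand in for logistic regression over real contextual embeddings, and the round/split counts are arbitrary.

```python
import random

def fit_linear(data, epochs=200, lr=0.1):
    # Minimal perceptron-style linear classifier (stand-in for the
    # logistic regression AFLite fits on frozen embeddings).
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def aflite_filter(examples, rounds=3, drop_per_round=1, splits=5, seed=0):
    """AFLite-style filtering sketch: on each round, fit simple linear
    classifiers on random halves, count how often each held-out example
    is predicted correctly, and drop the most predictable ones."""
    rng = random.Random(seed)
    pool = list(examples)
    for _ in range(rounds):
        hits = {i: 0 for i in range(len(pool))}
        for _ in range(splits):
            idx = list(range(len(pool)))
            rng.shuffle(idx)
            half = len(idx) // 2
            clf = fit_linear([pool[i] for i in idx[:half]])
            for i in idx[half:]:
                x, y = pool[i]
                if clf(x) == y:
                    hits[i] += 1
        keep = set(sorted(hits, key=hits.get, reverse=True)[drop_per_round:])
        pool = [pool[i] for i in sorted(keep)]  # preserve original order
    return pool

# Linearly separable ("trivial") examples vs. ambiguous ("hard") ones.
easy = [([1.0, 0.0], 1), ([0.9, 0.1], 1), ([0.0, 1.0], 0), ([0.1, 0.9], 0)]
hard = [([0.5, 0.5], 1), ([0.5, 0.5], 0), ([0.4, 0.6], 1), ([0.6, 0.4], 0)]
filtered = aflite_filter(easy + hard, rounds=2, drop_per_round=2)
print(len(filtered))  # 8 examples minus 2 rounds x 2 drops = 4 remain
```

Because a filter this weak can only exploit near-trivial regularities, examples it reliably gets right are "nearest-neighbor easy" cases, which is why (as the tweet notes) human performance barely drops after filtering.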
@YejinChoinka
Yejin Choi
4 years
@jmhessel @etzioni @allen_ai Welcome on board! I am thrilled to do some fun research together!
0
0
3
@YejinChoinka
Yejin Choi
6 years
0
0
3
@YejinChoinka
Yejin Choi
5 years
@Tuhin66978276 art work is by @rown . i also LOL'ed so hard when i first saw this.
1
0
3
@YejinChoinka
Yejin Choi
2 years
@complingy i don't know about true visionary 😱 but you got me hooked by "just common sense" 🤣 (and thank you thank you for your generous words... 😍😭)
0
0
3
@YejinChoinka
Yejin Choi
3 years
@TaliaRinger It's five datasets. While some do draw from Reddit as a source of "situations", the norms as "rules of thumbs" and various moral judgments on top are crowdsourced with careful instructions. While it's great to share concerns, it's not fair to spread incorrect facts...
1
0
3
@YejinChoinka
Yejin Choi
3 years
@yangfeng_ji 🤣🤣🤣
0
0
3
@YejinChoinka
Yejin Choi
6 months
@jmhessel Oh nooooo 😭 hope you recover fast… ❤️
0
0
3
@YejinChoinka
Yejin Choi
3 years
@willie_agnew @MaartenSap the source of the situations is from reddit, but the rich layers of judgements (including 300,000 of rules of thumbs) are our own crowdsourcing; similarly, we gathered text for SocialBiasFrames from a lot of problematic web text, but annotations about biases are ours
1
0
3
@YejinChoinka
Yejin Choi
2 years
@Diyi_Yang @Stanford Wonderful news Diyi! Stanford is lucky to have you!
0
0
3
@YejinChoinka
Yejin Choi
1 year
0
0
2
@YejinChoinka
Yejin Choi
3 years
@willie_agnew @MaartenSap we also stated that "Our systematic probing of Delphi indicates that Delphi is not immune to the social biases of our times (§6), and can default to the stereo- types and prejudices in our society that marginalize certain social groups and ethnicities."
0
0
2
@YejinChoinka
Yejin Choi
3 years
@histoftech The free-form QA mode is trained with Social Bias Frames ( @MaartenSap et al 2020), thus better guarded against racism and sexism whereas the relative QA mode is our weakest point in terms of equity, as it reflects the unjust biases of the underlying LMs more directly... 1/N
Tweet media one
1
0
2
@YejinChoinka
Yejin Choi
3 years
@TaliaRinger Also importantly, the free-form QA mode is trained with Social Bias Frames ( @MaartenSap et al 2020), thus better guarded against racism & sexism, while the relative QA mode is the weakest point in terms of equity, as it reflects the unjust bias of underlying LMs directly...
1
0
2
@YejinChoinka
Yejin Choi
1 year
@etzioni My deepest condolences 💐 He was a true inspiration
0
0
2
@YejinChoinka
Yejin Choi
3 years
@TaliaRinger @MaartenSap the language models are becoming increasingly powerful and prevalent. The weakest point of our system exactly supports the very goal of our research --- that unless we teach machines about equity, human values, and morals directly, they will for sure make disastrous mistakes...
1
0
2
@YejinChoinka
Yejin Choi
5 years
@MaartenSap @ztopiapub @AngelSDiaz_ @dallascard @GabrielSaadia @nlpnoah @ztopiapub , "fixing societal inequality through AI" is a hard problem and one can't solve it in one paper. But we are certainly investing more into this direction so stay tuned!
0
1
2
@YejinChoinka
Yejin Choi
6 years
@egrefen 🍺🍺🍺🍜🍜🍜🍷🍷🍷🍔🍔🤮
0
0
2
@YejinChoinka
Yejin Choi
7 years
@MaartenSap and Hannah, totally natural on live radio.
@KUOW
KUOW Public Radio
7 years
Maarten Sap and Hannah Rashkin are Ph.D. students in computer science at @UW . They're analyzing movie dialogue to see differences in power and agency between men and women--to see if they can computationally detect subtle biases we have #KUOWrecord
3
6
10
0
1
2