Vera Tobin
@vrtbn
Followers 403 · Following 13K · Media 68 · Statuses 1K
Cognition, language, literature, occasional possums. Elements of Surprise: Our Mental Limits and the Satisfactions of Plot. https://t.co/7odHdeNphA
she/her/hers
Joined April 2017
Want to read Elements of Surprise, but worried that it will blindside you with a spoiler for the book you were about to read or film you were about to see? I've got you covered.
veratobin.org
Find out whether, and to what extent, Elements of Surprise is going to wreck your enjoyment of a twisty little passage.
1 · 7 · 32
THIS IS UNFATHOMABLY SICK. Wait til Chao Tian comes in, it goes so hard. New favorite genre: Appalachinese!!!
160 · 3K · 19K
My lovely Cognitive Science department is hiring for a Visiting Assistant Professor position, starting January 2024 and ending December 2024. 2/2 teaching load (our undergrads are WONDERFUL), renewable for a second year.
0 · 1 · 1
This makes me so sad. Also, do follow Alariko for his gorgeous, charming art.
Amazing thread on Reddit about the AI model that was trained on my art without my permission and even named after me. Apparently I am the bad person for wanting to get it taken down, and for wanting to "brigade" (??) it. https://t.co/6rzXBosVrv
0 · 0 · 3
Ever wondered how nondeterministic GPT-4 is even with greedy decoding (T=0)? I built a website that asks GPT-4 to draw a unicorn every hour and tracks if the results stay consistent over time (spoiler alert: they don't! 🦄). Explore the findings: https://t.co/4VOT8Ko91m
10 · 41 · 288
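The probe above is easy to replicate. A minimal sketch of the kind of consistency check the site runs, assuming the OpenAI Python client with an OPENAI_API_KEY in the environment; the model name and the TikZ prompt are placeholders, not the site's actual setup:

```python
# Minimal sketch of a greedy-decoding consistency probe, in the spirit of the
# site above (not its actual code). Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; model and prompt are placeholders.
import hashlib
from openai import OpenAI

client = OpenAI()

def sample_unicorn() -> str:
    """Ask the model for a unicorn drawing with temperature 0 (greedy decoding)."""
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # nominally deterministic
        messages=[{"role": "user", "content": "Draw a unicorn in TikZ."}],
    )
    return resp.choices[0].message.content

def fingerprint(text: str) -> str:
    """Hash a completion so repeated runs can be compared cheaply."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

# Two back-to-back calls at T=0 "should" match; in practice they often don't.
a, b = fingerprint(sample_unicorn()), fingerprint(sample_unicorn())
print("consistent" if a == b else f"divergent: {a} vs {b}")
```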
Frankly delighted when @AlisonGopnik told everyone at this #acl2023nlp plenary talk to go read @henryfarrell
0 · 0 · 2
Please do some crosswords for science! https://t.co/hOlwx52NAy We're studying how people solve xwords, and we are looking for solvers of all experience levels: from novices to ACPTers. High-quality puzzles generously provided by the wonderful https://t.co/OSzFSiG8oU team.
boswords.org
3 · 61 · 118
We disentangle such overfitting from LMs' transferable reasoning skills by introducing "counterfactual" variants of familiar tasks. Some of these, e.g. the drawing & music tasks, are not standard LM evaluations, but recent work has qualitatively observed these abilities in LMs.
1 · 1 · 22
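As a concrete illustration of the idea (not the paper's code or tasks), here is a base-swap arithmetic probe: the base-10 items are the kind of thing a model may have memorized, while the base-9 counterfactual variants test whether the procedure itself transfers:

```python
# Illustrative sketch of a "counterfactual variant" of a familiar task
# (not the paper's code): the same addition problems posed in base 10,
# which a model may have memorized, versus base 9, which tests whether
# the underlying procedure transfers.
import random

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base."""
    digits = []
    while True:
        n, r = divmod(n, base)
        digits.append(str(r))
        if n == 0:
            return "".join(reversed(digits))

def addition_item(base: int) -> tuple[str, str]:
    """One prompt/answer pair, with operands and answer written in `base`."""
    x, y = random.randrange(100, 1000), random.randrange(100, 1000)
    prompt = f"In base {base}, {to_base(x, base)} + {to_base(y, base)} = "
    return prompt, to_base(x + y, base)

random.seed(0)
for base in (10, 9):  # default task vs. counterfactual variant
    prompt, answer = addition_item(base)
    print(prompt, "->", answer)
```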
Oops, PS: these tweets about https://t.co/P8y6dPo91Y should have included #ACL2023NLP
arxiv.org
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations. Given a collection of usage examples for a...
0 · 0 · 0
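A minimal sketch of the core idea, definition generation as a sense representation, assuming the OpenAI Python client; the prompt wording and model choice are placeholders, not the authors' implementation:

```python
# Minimal sketch of definition generation as a word sense representation
# (the idea from the abstract above, not the authors' implementation).
# Assumes the OpenAI Python client; the prompt wording is a placeholder.
from openai import OpenAI

client = OpenAI()

def define_in_context(word: str, usage: str) -> str:
    """Ask an LM for a short definition of `word` as used in `usage`."""
    prompt = (
        f'Give a one-sentence definition of "{word}" as it is used in: '
        f'"{usage}"'
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Two usages of "bank" should yield two different, human-readable senses.
print(define_in_context("bank", "We picnicked on the bank of the river."))
print(define_in_context("bank", "The bank approved her mortgage."))
```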
Someone asks a good question about how this introduces "subjectivity" (or fuzziness, variability, a black-box factor), detracting from the attractive "objectivity" of word embeddings. Would love a tool that exposes underlying clusters together with such qualitative representations.
1 · 0 · 2
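A sketch of the kind of tool being wished for here (hypothetical, not from the paper): cluster contextual embeddings of a word's usages, then surface an exemplar per cluster that a definition generator could label. Assumes the sentence-transformers and scikit-learn packages:

```python
# Hypothetical sketch, not the paper's tool: cluster embeddings of a word's
# usages and pick one representative usage per cluster, which a definition
# generator (as in the previous sketch) could then label qualitatively.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

usages = [
    "We picnicked on the bank of the river.",
    "The bank approved her mortgage.",
    "Fog rolled in along the bank of the stream.",
    "The bank raised its interest rates again.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(usages)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)
# The usage closest to each centroid serves as the cluster's exemplar.
medoids, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, embeddings)

for k, idx in enumerate(medoids):
    members = [u for u, lab in zip(usages, kmeans.labels_) if lab == k]
    print(f"cluster {k} exemplar: {usages[idx]!r} ({len(members)} usages)")
```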
An enjoyable talk about a new approach to some of the same kinds of analysis we often do with Word2Vec and the like, with a focus on cases of semantic change (something of special interest to me right now). Interested to see where things go with this work!
Exciting collaboration with @northernkender, Iris Luden (MSc Logic student in Amsterdam), and @raquel_dmg. Looking forward to discussing the paper both before and during #ACL2023NLP :)
1 · 0 · 3
Intriguing prospects! Looking forward to reading these.
Are you a big fan of structure? Have you ever wanted to apply the latest and greatest large language model out-of-the-box to parsing? Are you a secret connoisseur of linear-time dynamic programs? If you answered yes, our outstanding #ACL2023NLP paper may be just right for you!
0 · 0 · 1
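For readers unfamiliar with the phrase, here is a generic example of a linear-time dynamic program of the sort used in tagging-style decoders: Viterbi decoding over per-token tag scores, O(n·|tags|²), i.e. linear in sentence length. This is not the paper's algorithm, just an illustration of the technique named in the teaser:

```python
# Not the paper's algorithm -- a generic linear-time dynamic program:
# Viterbi decoding over per-token tag scores, linear in sentence length.
import math

def viterbi(scores: list[dict[str, float]],
            trans: dict[tuple[str, str], float]) -> list[str]:
    """Best tag sequence given per-token scores and transition scores."""
    tags = list(scores[0])
    best = {t: scores[0][t] for t in tags}  # best score for paths ending in t
    back = []                               # backpointers, one dict per step
    for step in scores[1:]:
        prev, best, ptr = best, {}, {}
        for t in tags:
            cand = {p: prev[p] + trans.get((p, t), -math.inf) for p in tags}
            ptr[t] = max(cand, key=cand.get)
            best[t] = cand[ptr[t]] + step[t]
        back.append(ptr)
    # Trace the best path backwards from the highest-scoring final tag.
    path = [max(best, key=best.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Toy run: two tags, three tokens.
scores = [{"B": 1.0, "I": 0.2}, {"B": 0.1, "I": 0.9}, {"B": 0.3, "I": 0.8}]
trans = {("B", "B"): 0.0, ("B", "I"): 0.5, ("I", "I"): 0.4, ("I", "B"): 0.1}
print(viterbi(scores, trans))  # -> ['B', 'I', 'I']
```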
A very enjoyable and crunchy dive into the gaps, imbalances, and other issues in comparing human and machine performance on various tasks, with an engaging presentation too, at #ACL2023NLP
NEW #ACL2023 PAPER: "What's the meaning of superhuman performance in today's NLU?" w/ @JohanBos @ThierryDeclerck @HajicJan @daniel_hers @ehovy @alkoller @SimonKrek @StevenSchockae2 @RicoSennrich @KatiaShutova @RNavigli @Babelscape #NLProc https://t.co/rUW7wiRmMQ
0 · 2 · 6
@yoavgo Comparing, even equating, LLMs' language understanding with the understanding of a language-impaired person is not something I would like to see more of.
1 · 3 · 9