Vera Tobin

@vrtbn

Followers 403 · Following 13K · Media 68 · Statuses 1K

Cognition, language, literature, occasional possums. Elements of Surprise: Our Mental Limits and the Satisfactions of Plot. https://t.co/7odHdeNphA

she/her/hers
Joined April 2017
@vrtbn
Vera Tobin
8 years
Want to read Elements of Surprise, but worried that it will blindside you with a spoiler for the book you were about to read or film you were about to see? I've got you covered.
veratobin.org
Find out whether, and to what extent, Elements of Surprise is going to wreck your enjoyment of a twisty little passage.
1
7
32
@EmmaTolkin
π•―π–Žπ–‘π–‰π–” π•­π–†π–Œπ–Œπ–Žπ–“π–˜
2 years
THIS IS UNFATHOMABLY SICK. Wait til Chao Tian comes in, it goes so hard. New favorite genre: Appalachinese!!! πŸͺ•πŸ‡¨πŸ‡³
160
3K
19K
@vrtbn
Vera Tobin
2 years
My lovely Cognitive Science department is hiring for a Visiting Assistant Professor position, starting January 2024 and ending December 2024. 2/2 teaching load (our undergrads are WONDERFUL), renewable for a second year.
0
1
1
@vrtbn
Vera Tobin
2 years
someone heard "little kitty, big city" but got confused @LittleKittyGame
0
0
2
@vrtbn
Vera Tobin
2 years
This makes me so sad. Also, do follow Alariko for his gorgeous, charming art.
@Alariko_
Alariko
2 years
Amazing thread on Reddit about the AI model whose creators used my art without my permission to train it, and even named it after me. Apparently I am the bad person for wanting to get it taken down, and for wanting to "brigade" (??) it. https://t.co/6rzXBosVrv
0
0
3
@vrtbn
Vera Tobin
2 years
don't need to see Oppenheimer, just need the water and the well
1
1
3
@yuntiandeng
Yuntian Deng
3 years
Ever wondered how nondeterministic GPT-4 is even with greedy decoding (T=0)? I built a website that asks GPT-4 to draw a unicorn every hour and tracks if the results stay consistent over time (spoiler alert: they don't! πŸ¦„). Explore the findings: https://t.co/4VOT8Ko91m
10
41
288
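A minimal sketch of how one could reproduce this check, assuming the OpenAI v1 Python client and an OPENAI_API_KEY in the environment; the prompt string and loop below are illustrative, not the tracking site's actual code.

# Repeat an identical greedy-decoding (temperature=0) request and count
# how many distinct completions come back. Nominally this should be 1.
from openai import OpenAI

client = OpenAI()
PROMPT = "Draw a unicorn in TikZ."  # stand-in for the site's unicorn task

def sample_once() -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # greedy decoding
    )
    return resp.choices[0].message.content

outputs = [sample_once() for _ in range(5)]
print(f"{len(set(outputs))} distinct completions out of {len(outputs)} runs")

With true determinism the count would be 1; the site's point is that, in practice, it often is not.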
@vrtbn
Vera Tobin
2 years
Frankly delighted when @AlisonGopnik told everyone at this #acl2023nlp plenary talk to go read @henryfarrell
0
0
2
@kmahowald
Kyle Mahowald
3 years
Please do some crosswords for science! https://t.co/hOlwx52NAy We're studying how people solve xwords, and we are looking for solvers of all experience levels: from novices to ACPTers. High-quality puzzles generously provided by the wonderful https://t.co/OSzFSiG8oU team.
boswords.org
3
61
118
@zhaofeng_wu
Zhaofeng Wu
2 years
We disentangle such overfitting from LMs' transferrable reasoning skills by introducing "counterfactual" variants of familiar tasks. Some of these, e.g. the drawing & music tasks, are not standard LM evaluations, but recent work has qualitatively observed these abilities in LMs.
1
1
22
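To make the counterfactual-variant idea concrete: one common construction in this line of work is base-shifted arithmetic, sketched minimally below. The paper's own tasks (e.g. the drawing and music ones) differ, and the helper names here are hypothetical, not from any released code.

# Pair each familiar base-10 problem with a base-9 variant: the surface
# form matches, but memorized base-10 answers no longer apply, so only
# transferable reasoning solves both.
import random

def to_base(n: int, base: int) -> str:
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

def make_pair(base: int = 9):
    a, b = random.randint(10, 80), random.randint(10, 80)
    default = (f"In base 10, what is {a}+{b}?", str(a + b))
    counterfactual = (
        f"In base {base}, what is {to_base(a, base)}+{to_base(b, base)}?",
        to_base(a + b, base),
    )
    return default, counterfactual

random.seed(0)
for default, cf in (make_pair() for _ in range(3)):
    print("default:       ", *default)
    print("counterfactual:", *cf)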
@vrtbn
Vera Tobin
2 years
Someone asks a good question about how this introduces "subjectivity" (or fuzziness, variability, a black-box factor), detracting from the attractive "objectivity" of word embeddings. Would love a tool that exposes the underlying clusters together with such qualitative representations.
1
0
2
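A minimal sketch of what such a tool might look like, assuming gensim's downloadable GloVe vectors and scikit-learn; each cluster is labeled qualitatively by the words nearest its centroid, so the grouping stays inspectable rather than a black box. All names below are illustrative.

# Cluster a slice of pretrained word vectors and print a human-readable
# summary (nearest words) for each cluster.
import gensim.downloader as api
from sklearn.cluster import KMeans

kv = api.load("glove-wiki-gigaword-50")   # small pretrained embeddings
words = kv.index_to_key[:2000]            # a manageable vocabulary slice
vectors = kv[words]

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(vectors)

for cid in range(km.n_clusters):
    nearest = kv.similar_by_vector(km.cluster_centers_[cid], topn=8)
    print(f"cluster {cid}:", ", ".join(w for w, _ in nearest))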
@vrtbn
Vera Tobin
2 years
An enjoyable talk about a new approach to some of the same kinds of analysis we often do with Word2Vec and the like, with a focus on cases of semantic change (something of special interest to me right now). Interested to see where things go with this work!
@glnmario
Mario Giulianelli
3 years
Exciting collaboration with @northernkender, Iris Luden (MSc Logic student in Amsterdam🚨), and @raquel_dmg. Looking forward to discussing the paper both before and during #ACL2023NLP :)
1
0
3
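For contrast with that new approach, a minimal sketch of a standard semantic-change baseline in the Word2Vec family (Procrustes alignment of two time slices, in the style of Hamilton et al. 2016); the corpora here are hypothetical placeholders, and this is not the method from the talk.

# Train Word2Vec on two time slices, align the old space onto the new one
# with orthogonal Procrustes over the shared vocabulary, then rank words
# by how far their vectors moved (cosine distance).
import numpy as np
from scipy.linalg import orthogonal_procrustes
from gensim.models import Word2Vec

def train(corpus):  # corpus: iterable of tokenized sentences
    return Word2Vec(corpus, vector_size=100, min_count=5, seed=0).wv

def semantic_change(kv_old, kv_new, topn=20):
    shared = [w for w in kv_old.index_to_key if w in kv_new.key_to_index]
    A = np.stack([kv_old[w] for w in shared])
    B = np.stack([kv_new[w] for w in shared])
    R, _ = orthogonal_procrustes(A, B)  # rotation minimizing ||A R - B||
    A = A @ R
    cos = (A * B).sum(1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1))
    return sorted(zip(shared, 1 - cos), key=lambda x: -x[1])[:topn]

# Usage with hypothetical corpora:
# kv_1990, kv_2020 = train(corpus_1990), train(corpus_2020)
# print(semantic_change(kv_1990, kv_2020))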
@vrtbn
Vera Tobin
2 years
Intriguing prospects! Looking forward to reading these.
@afra_amini
Afra Amini
2 years
Are you a big fan of structure? Have you ever wanted to apply the latest and greatest large language model out-of-the-box to parsing? Are you a secret connoisseur of linear-time dynamic programs? If you answered yes, our outstanding #ACL2023NLP paper may be just right for you!
0
0
1
@vrtbn
Vera Tobin
2 years
Very enjoyable and crunchy dive into the gaps, imbalances, and other issues in comparing human and machine performance on various tasks, with a very engaging presentation too, at #ACL2023NLP
@SimoneTedeschi_
Simone Tedeschi
3 years
πŸ“’ NEW #ACL2023 PAPER "What's the meaning of superhuman performance in today's NLU?" πŸ“Š w/ @JohanBos @ThierryDeclerck @HajicJan @daniel_hers @ehovy @alkoller @SimonKrek @StevenSchockae2 @RicoSennrich @KatiaShutova @RNavigli @Babelscape #NLProc πŸ“‘ https://t.co/rUW7wiRmMQ
0
2
6
@ArlieColes
Arlie Coles
2 years
@yoavgo Comparing, even equating, LLMs' language understanding with the understanding of a language-impaired person is not something I would like to see more of.
1
3
9