jacob lou hoo vigly ⍼
@postylem
Followers
112
Following
514
Media
67
Statuses
409
computational linguistics / cogsci. phd @mcgillu linguistics / @Mila_Quebec. my full name has four o's. @[email protected] @postylem.bsky.social
Montréal, Québec
Joined January 2008
🚨ATTN Natural Stories users!🚨 We found a misalignment in the self-paced reading times. Everything is off by one position. In the released dataset, the SPR RTs for the word at index t are actually for index t+1. If you are using the dataset, please use the realigned data.
2
13
23
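For anyone realigning by hand, here is a minimal sketch of the one-position shift described above (the column names and toy values are hypothetical, not the actual Natural Stories schema; the realigned data mentioned above is the authoritative fix):

```python
import pandas as pd

# Toy stand-in for the released SPR data (hypothetical schema).
spr = pd.DataFrame({
    "story":      [1, 1, 1, 1],
    "word_index": [1, 2, 3, 4],
    "RT":         [310, 295, 402, 350],  # RT listed at index t is really for word t+1
})

# Realign: the correct RT for the word at index t is the released value at t-1,
# so shift RTs down by one position within each story.
spr = spr.sort_values(["story", "word_index"])
spr["RT_realigned"] = spr.groupby("story")["RT"].shift(1)

# The first word of each story now has no RT (NaN); handle as appropriate.
print(spr)
```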
Once, we ran a study on Prolific and a participant wrote on Reddit that the study “Felt like I was losing the will to live.” I went on the Prolific Subreddit (24k members!) and asked what matters. Here is what they told me. A thread on happier participants and better studies 1/9
12
140
474
Now that the eclipse is within the short-range forecast window, global ensembles add less value and more high-resolution models become available. I switched my eclipse page to use the High Resolution Ensemble Forecast (HREF), including a cloud layer breakdown: https://t.co/ygXzAsTEhf
11
46
246
It was an amazing experience to film our music video at the beautiful historic @est8ofmind Oakhurst Manor back in November. Very sadly, I learned that they suffered a devastating fire there on March 1st. Thankfully and most importantly no one was injured. But they are - (1/2)
1
1
1
It was so much fun to be a part of this production.
The video for Dancing With A Shadow is out now. Starring: Katherine Bickford & Jacob Louis Hoover @postylem. It was an incredible experience working with a great cast and crew. Stephanie Houten (Director) and Dara Nicole Capley (Choreographer), thank you. https://t.co/4HxodATsaj
1
0
3
Need a reminder about how to report effect sizes or confidence intervals for your frequentist stats model? This looks like a very promising resource:
matthewbjane.quarto.pub
0
0
0
🔔🌟 New Preprint Alert 🔔🌟 “An Information-Theoretic Analysis of Targeted Regressions during Reading” with @tpimentelms , @clara__meister , @ryandcotterell - Psycholinguistics 🧠 Computational Modeling 🤖 Crosslinguistic Studies 🌍 Information Theory 📡
1
9
43
Now out in @PNASNews! Large-scale reading evidence that next-word predictability effects in humans are driven by *inference* (logarithmic in predictability) rather than preactivation (linear in predictability). https://t.co/qZLNXp7FQh 1/10
pnas.org
During real-time language comprehension, our minds rapidly decode complex meanings from sequences of words. The difficulty of doing so is known to ...
2
44
144
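One way to write the contrast (a standard formulation of the two linking hypotheses, not notation taken from the paper itself): under the inference account, reading time scales with the negative log of a word's conditional probability; under the preactivation account, it scales linearly with that probability.

```latex
% Inference (surprisal) account: processing cost is logarithmic in predictability
\mathrm{RT}(w_t) \;\propto\; -\log p(w_t \mid w_{<t})

% Preactivation account: processing cost is linear in predictability
\mathrm{RT}(w_t) \;\propto\; 1 - p(w_t \mid w_{<t})
```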
Have you ever done a dense grid search over neural network hyperparameters? Like a *really dense* grid search? It looks like this (!!). Bluish colors correspond to hyperparameters for which training converges, reddish colors to hyperparameters for which training diverges.
272
2K
10K
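A minimal sketch of what such a sweep can look like in code. The two hyperparameters (learning rate and initialization scale), the grid bounds, and the convergence check are all assumptions for illustration; the original figure's model and hyperparameters are not specified here.

```python
import itertools
import numpy as np

# Hypothetical dense grid over two hyperparameters: learning rate x init scale.
learning_rates = np.logspace(-4, 0, 100)
init_scales = np.logspace(-2, 1, 100)

def train_and_check_convergence(lr, scale):
    """Placeholder for a real training run; returns True if training converged.

    A real implementation would train a small network and inspect the final
    loss (e.g. check for NaNs or divergence). Here we use a toy criterion.
    """
    return bool(np.isfinite(lr * scale) and lr * scale < 0.5)

# Record convergence (True/False) for every cell of the grid.
results = np.zeros((len(learning_rates), len(init_scales)), dtype=bool)
for (i, lr), (j, scale) in itertools.product(
    enumerate(learning_rates), enumerate(init_scales)
):
    results[i, j] = train_and_check_convergence(lr, scale)

# Plotting `results` (e.g. with matplotlib's imshow) gives a converge/diverge
# map like the one in the figure: one color per outcome.
```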
no, that’s not the kind of latex support i want
👟 💯 Experience the cushioned luxury of Latex Sport Insoles, enhancing your comfort and performance in every sport. Get it👉 https://t.co/ZeFV95kumF
0
0
0
👋I boost a lot of job opportunities on here and now it's time to boost my own! I'll be arriving at #Stanford in fall of 2024 and I'm looking for awesome people to help me figure out language. 🧵👇
4
72
206
so yeah, I think there is no correct answer. (what I like about this is it feels like the last option is for folks who think there’s no correct answer, but of course it is still paradoxical to choose that one)
0
0
0
What is next in this sequence: 1. What's happening? 2. What is happening?! 3. ...
0
0
0
If you answer this poll randomly (weighting all choices equally), what is the probability that you are correct?
1
0
0
🚨🚨New Paper Announcement (to appear in TACL) 📜 from me, @tpimentelms, @clara__meister, @ryandcotterell and @roger_p_levy: Testing the Predictions of Surprisal Theory in 11 Languages https://t.co/vA2q3WC2o7 🌎
arxiv.org
A fundamental result in psycholinguistics is that less predictable words take a longer time to process. One theoretical explanation for this finding is Surprisal Theory (Hale, 2001; Levy, 2008),...
1
16
52
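For context, the surprisal of a word (Hale, 2001; Levy, 2008) is its negative log conditional probability given the preceding context; Surprisal Theory's core prediction is that processing time increases with this quantity.

```latex
s(w_t) \;=\; -\log p(w_t \mid w_{<t})
```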
How can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? New paper with Lio Wong, @alexanderklew, @noahdgoodman, @vmansinghka, @jacobandreas, and Josh Tenenbaum. https://t.co/0aFsD2Ihr9
arxiv.org
How does language inform our downstream thinking? In particular, how do humans make meaning from language--and how can we leverage a theory of linguistic meaning to build machines that think in...
3
54
199
Thanks to several hundred of you for doing some crosswords for science! We're excited to start some analysis soon, but if you haven't yet, please enjoy some puzzles: https://t.co/hOlwx52NAy And thanks again to https://t.co/UxRENWlnLL!
boswords.org
Please do some crosswords for science! https://t.co/hOlwx52NAy We're studying how people solve xwords, and we are looking for solvers of all experience levels: from novices to ACPTers. High-quality puzzles generously provided by the wonderful https://t.co/OSzFSiG8oU team.
0
2
8