Pablo Contreras Kallens

@pcontrerask

Followers
127
Following
85
Media
5
Statuses
89

Ph.D. candidate, Cornell Psychology in the Cognitive Science of Language lab. Serious account.

Ithaca, NY
Joined September 2020
@JeffYoshimi
Jeff Yoshimi
1 year
I've been having a nice time talking to @denizcemonduygu about maps and graphs of philosophical discourse, and thought I'd repost my bibliometric map of the phenomenology literature. Dots are authors, links are citations, and colors are clusters.
3
8
83
@MH_Christiansen
Morten H. Christiansen
1 year
@CSL_Lab alum @pcontrerask talks about how feedback is crucial for getting large language models to produce more human-like language output, such as making similar agreement errors and being sensitive to subtle semantic distinctions
1
1
5
@TrendsCognSci
Trends in Cognitive Sciences
2 years
Information density as a predictor of communication dynamics Spotlight by Gary Lupyan (@glupyan), Pablo Contreras Kallens (@pcontrerask), & Rick Dale on recent @NatureHumanBehav work by Pete Aceves (@peteaceves) & James Evans (@profjamesevans) https://t.co/1Ol3agjOR5
0
12
54
@ElmlingerSteven
Steven Elmlinger
2 years
How do infants learn to produce the consonant sounds of their ambient language? To find out, check out our CogSci proceedings paper “Statistical learning or phonological universals? Ambient language statistics guide consonant acquisition in four languages” A 🧵: /1
2
6
28
@pcontrerask
Pablo Contreras Kallens
2 years
it's me!
@MH_Christiansen
Morten H. Christiansen
2 years
Huge congratulations 🥳 to 👉Dr.👈 @pcontrerask who just passed his PhD defense with flying colors!👏 He defended his dissertation in @CornellPsychDpt THE COMPUTATIONAL BRIDGE: INTERFACING THEORY AND DATA IN COGNITIVE SCIENCE Follow @pcontrerask to see the papers
1
0
15
@JeffYoshimi
Jeff Yoshimi
3 years
I'm delighted to announce the publication of our free, open access book, "Horizons of Phenomenology", a collection of essays on the state of the field. A brief thread about the book, and the long and ultimately victorious struggle to publish it open access. 1/
10
39
139
@JeffYoshimi
Jeff Yoshimi
3 years
Just realized I didn't include the link. Here it is!
1
4
18
@Benambridge
Ben Ambridge
3 years
In sum, the ideas that children (a) have innate syntactic categories and (b) NEED them because they can't construct them via distributional analyses alone are NOT straw-men but real and influential proposals in the child language literature 7/n
1
1
16
@Benambridge
Ben Ambridge
3 years
I've seen lots of threads about large language models (LLMs) and their implications for language acquisition BUT not many threads by language-acquisition specialists. So here's my two cents on how LLMs undermine SOME SPECIFIC PROPOSALS for acquisition of syntactic categories 1/n
4
13
83
@ross_dkm
Ross
3 years
For the Friday evening crowd still mulling over Chomsky’s op-ed, here’s a key passage from our recent letter in @cogsci_soc. There are absolutely some things LLMs cannot do. But what they can do, they do very well - and that demands attention. https://t.co/TFWExGgHVj
@pcontrerask
Pablo Contreras Kallens
3 years
If you are unsatisfied by Chomsky et al.'s painfully petrified op-ed, boy do I have a read for you! It wasn't planned, but CogSci just published a letter by @MH_Christiansen, @ross_dkm, and me arguing quite literally the opposite, at least re: language. https://t.co/fOW788uCyL 🧵
0
5
11
@raphaelmilliere
Raphaël Millière
3 years
Another day, another opinion essay about ChatGPT in the @nytimes. This time, Noam Chomsky and colleagues weigh in on the shortcomings of language models. Unfortunately, this is not the nuanced discussion one could have hoped for. 🧵 1/ https://t.co/nEUNoxUcbY
nytimes.com
The most prominent strain of A.I. encodes a flawed conception of language and knowledge.
30
224
1K
@pcontrerask
Pablo Contreras Kallens
3 years
So. Let's grant that GPT and its ilk are unreliable, ethically problematic parrots. My hunch is that 20th Century theories of language would have been very different if parrots were half as good as GPT at summarizing the plot of Dragon Ball in the style of a Shakespearean sonnet.
0
0
3
@pcontrerask
Pablo Contreras Kallens
3 years
Moreover, language has culturally evolved to fit the cognitive niche of humans, not LLMs. It is immediately assumed that the situation of a child is more precarious than the LLM's. But this is not as obvious as a first hunch might tell you. It is, again, an empirical question.
1
2
1
@pcontrerask
Pablo Contreras Kallens
3 years
But in another sense, this is a misleading characterization. Sure, LLMs get much more (although not INFINITELY more) words than a child. But a child is embedded in a natural and social world, interacts with it, and has a much richer experience and cognitive machinery than LLMs.
1
0
2
@pcontrerask
Pablo Contreras Kallens
3 years
There's also the question of the amount of training and the size of LLMs. More work needs to be done on how brittle this performance is, how much their similarities with humans depend on their size and gigantic input, or whether they can simulate developmental trajectories [...]
1
0
1
@pcontrerask
Pablo Contreras Kallens
3 years
There are a couple of nuances. How good a model of human language learning LLMs are is still to be fully determined. However, this is an empirical question, and a lot of work keeps showing that the similarities are there. They can't be just waved away based on a hunch.
1
0
1
@pcontrerask
Pablo Contreras Kallens
3 years
Replies that appeal to e.g. competence vs. performance, language being more than tracking statistics, etc., are assuming what they are supposed to show. This whole theoretical and conceptual apparatus was postulated because what these models can do was supposed to be impossible.
1
0
1
@pcontrerask
Pablo Contreras Kallens
3 years
A historical example. In what is taken as a "seminal" takedown of connectionism, Pinker and Prince (1988) contemplate the possibility that NNs could "master the past tense", but that was merely a "vague hope" (183). Well, here we are, with a model that can do much more than that.
1
0
2
@pcontrerask
Pablo Contreras Kallens
3 years
This contradicts the core of the poverty-of-the-stimulus (PoS) argument. It is a little bit jarring to read people so casually dismissing something they claimed was literally impossible, i.e., a full-fledged, statistical learning-based model of grammar.
1
0
4