Pablo Contreras Kallens

@pcontrerask

Followers: 128
Following: 85
Media: 5
Statuses: 89

Ph.D. candidate, Cornell Psychology, in the Cognitive Science of Language lab. Serious account.

Ithaca, NY
Joined September 2020
@pcontrerask
Pablo Contreras Kallens
1 year
RT @JeffYoshimi: I've been having a nice time talking to @denizcemonduygu about maps and graphs of philosophical discourse, and thought I…
0
8
0
@pcontrerask
Pablo Contreras Kallens
1 year
RT @MH_Christiansen: @CSL_Lab alumn @pcontrerask talks about how feedback is crucial for getting large language models to produce more huma…
0
1
0
@pcontrerask
Pablo Contreras Kallens
1 year
RT @TrendsCognSci: Information density as a predictor of communication dynamics. Spotlight by Gary Lupyan (@glupyan), Pablo Contreras Kalle…
0
12
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @ElmlingerSteven: How do infants learn to produce the consonant sounds of their ambient language? To find out, check out our CogSci proc…
0
6
0
@pcontrerask
Pablo Contreras Kallens
2 years
it's me!
@MH_Christiansen
Morten H. Christiansen
2 years
Huge congratulations 🥳 to 👉Dr.👈 @pcontrerask, who just passed his PhD defense with flying colors! 👏 He defended his dissertation in @CornellPsychDpt: THE COMPUTATIONAL BRIDGE: INTERFACING THEORY AND DATA IN COGNITIVE SCIENCE. Follow @pcontrerask to see the papers
1
0
16
@pcontrerask
Pablo Contreras Kallens
2 years
RT @JeffYoshimi: I'm delighted to announce the publication of our free, open access book, "Horizons of Phenomenology", a collection of essa…
0
39
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @JeffYoshimi: Just realized I didn't include the link. Here it is!
0
4
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @Benambridge: In sum, the ideas that children (a) have innate syntactic categories and (b) NEED them because they can't construct them v…
0
1
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @Benambridge: I've seen lots of threads about large language models (LLMs) and their implications for language acquisition BUT not many…
0
13
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @ross_dkm: For the Friday evening crowd still mulling over Chomsky’s op-ed, here’s a key passage from our recent letter in @cogsci_soc…
0
5
0
@pcontrerask
Pablo Contreras Kallens
2 years
RT @raphaelmilliere: Another day, another opinion essay about ChatGPT in the @nytimes. This time, Noam Chomsky and colleagues weigh in on t…
nytimes.com
The most prominent strain of A.I. encodes a flawed conception of language and knowledge.
0
225
0
@pcontrerask
Pablo Contreras Kallens
2 years
So. Let's grant that GPT and its ilk are unreliable, ethically problematic parrots. My hunch is that 20th-century theories of language would have been very different if parrots were half as good as GPT at summarizing the plot of Dragon Ball in the style of a Shakespearean sonnet.
0
0
3
@pcontrerask
Pablo Contreras Kallens
2 years
Moreover, language has culturally evolved to fit the cognitive niche of humans, not LLMs. It is immediately assumed that the situation of a child is more precarious than the LLM's. But this is not as obvious as a first hunch might tell you. It is, again, an empirical question.
1
2
1
@pcontrerask
Pablo Contreras Kallens
2 years
But in another sense, this is a misleading characterization. Sure, LLMs get many more (although not INFINITELY more) words than a child. But a child is embedded in a natural and social world, interacts with it, and has a much richer experience and cognitive machinery than LLMs.
1
0
2
@pcontrerask
Pablo Contreras Kallens
2 years
There's also the point of the amount of training data and the size of LLMs. More work needs to be done on how brittle this performance is, how much their similarities with humans depend on their size and gigantic input, or whether they can simulate developmental trajectories […]
1
0
1
@pcontrerask
Pablo Contreras Kallens
2 years
There are a couple of nuances. How good a model of human language learning LLMs are is still to be fully determined. However, this is an empirical question, and a lot of work keeps showing that the similarities are there. They can't just be waved away based on a hunch.
1
0
1
@pcontrerask
Pablo Contreras Kallens
2 years
Replies that appeal to e.g. competence vs. performance, language being more than tracking statistics, etc., are assuming what they are supposed to show. This whole theoretical and conceptual apparatus was postulated because what these models can do was supposed to be impossible.
1
0
1
@pcontrerask
Pablo Contreras Kallens
2 years
A historical example. In what is taken as a "seminal" takedown of connectionism, Pinker and Prince (1988) contemplate the possibility that NNs could "master the past tense", but that was merely a "vague hope" (183). Well, here we are, with a model that can do much more than that.
1
0
2
@pcontrerask
Pablo Contreras Kallens
2 years
This contradicts the core of the PoS (poverty of the stimulus) argument. It is a little bit jarring to read people so casually dismissing something they claimed was literally impossible, i.e. a full-fledged, statistical-learning-based model of grammar.
1
0
4