Eric Elmoznino Profile
Eric Elmoznino

@EricElmoznino

Followers
783
Following
103
Media
6
Statuses
104

PhD student at Mila interested in AI, cognitive neuroscience, and consciousness

Montreal, Quebec
Joined January 2017
@EricElmoznino
Eric Elmoznino
3 months
Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out!
ericelmoznino.github.io
What is compositionality? For those of us working in AI or cognitive neuroscience this question can appear easy at first, but becomes increasingly perplexing the more we think about it. We aren’t...
0
10
33
@AtlasKazemian
Atlas Kazemian
6 days
Super excited to share that my Master’s project, “Convolutional architectures are cortex-aligned de novo,” has been published in Nature Machine Intelligence! https://t.co/Zmy1XwymFB w/ @EricElmoznino @michaelfbonner
3
14
33
@patrickbutlin
Patrick Butlin
10 days
New paper on AI consciousness! Here we present the theory-derived indicator method for assessing AI systems for consciousness. Link below.
20
70
331
@EricElmoznino
Eric Elmoznino
2 months
New perspective piece out with @Yoshua_Bengio in Science on AI & consciousness. Current trajectory is that the scientific consensus will increasingly be that AI is conscious (even if it isn’t), with potentially dangerous consequences for humanity. https://t.co/urQhXQrpUO
science.org
The belief that AI is conscious is not without risk
2
4
22
@EricElmoznino
Eric Elmoznino
4 months
Recently went on a podcast to talk about consciousness and AI. Very timely given how common interactions with AI systems are becoming, and how convincingly "human" LLMs can appear to be in our conversations with them. Was a fun conversation - check it out!
0
2
14
@EricElmoznino
Eric Elmoznino
1 year
5/5 By virtue of being mathematically precise, our definition can inspire novel inductive biases for compositional representations in AI that are theoretically principled, rather than ad hoc and purely intuitive.
0
0
11
@EricElmoznino
Eric Elmoznino
1 year
4/5 It’s possible to estimate compositionality as we define it using tools from deep learning, and we validate our definition on synthetic as well as real-world data to make sure that it’s consistent with intuition.
1
0
8
@EricElmoznino
Eric Elmoznino
1 year
3/5 Our definition assigns a number to a representation’s compositionality. It says that compositional representations are (1) expressive and (2) best describable as a simple function of constituent parts. We formalize this with Kolmogorov complexity and optimal compression.
1
0
9
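The thread above defines compositionality via Kolmogorov complexity and optimal compression. As a purely illustrative toy (not the paper's estimator), one can approximate Kolmogorov complexity from above with an off-the-shelf compressor like zlib and see that data built by simple recombination of parts admits a much shorter description than unstructured data of the same length. The strings and the `k_approx` helper here are hypothetical examples, not from the paper.

```python
import random
import zlib

def k_approx(s: str) -> int:
    # Crude upper bound on Kolmogorov complexity: the length of the
    # zlib-compressed encoding of the string.
    return len(zlib.compress(s.encode("utf-8")))

# A "compositional" string: generated by simple recombination of two parts.
parts = ["blue ", "circle "]
compositional = (parts[0] + parts[1]) * 50

# An unstructured string of the same length: pseudorandom characters.
random.seed(0)
noncompositional = "".join(
    random.choice("abcdefghij ") for _ in range(len(compositional))
)

# The compositional string is far more compressible, i.e. it is
# well described as a simple function of constituent parts.
print(k_approx(compositional) < k_approx(noncompositional))  # True
```

This only gestures at the idea; the paper's actual estimate uses deep-learning tools rather than a generic compressor.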
@EricElmoznino
Eric Elmoznino
1 year
2/5 Compositionality is thought to underlie intelligence, but it actually has no formal, mathematical definition in either CogSci or AI. We give one, grounded in algorithmic information theory, and argue why it accounts for & extends intuitions about compositionality.
2
0
10
@EricElmoznino
Eric Elmoznino
1 year
1/5 Very excited to announce our new paper that formally defines compositionality, giving us theoretical insight into what it actually is and helping us build it into AI models. This was work with @tomjiralerspong, @Yoshua_Bengio, and @g_lajoie_.
arxiv.org
Compositionality is believed to be fundamental to intelligence. In humans, it underlies the structure of thought, language, and higher-level reasoning. In AI, compositional representations can...
5
57
284
@EricElmoznino
Eric Elmoznino
1 year
5/5 We back up these theoretical insights with empirical experiments, which together give a normative account of why ICL is so effective, but also explain the shortcomings of current ICL methods and suggest ways forward. Enjoy the paper!
0
0
3
@EricElmoznino
Eric Elmoznino
1 year
4/5 The meta-learning problem looks tough on paper, but it turns out that it’s efficiently solved by models like Transformers trained to predict the next token in a sequence, which are capable of learning at inference-time from context.
1
1
5
@EricElmoznino
Eric Elmoznino
1 year
3/5 We show that finding a learning algorithm that minimizes both training error and model complexity is directly equivalent to meta-learning a learner that optimally compresses datasets through an algorithm called prequential coding.
1
0
4
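The thread's central object, prequential coding, measures how many bits a learner needs to encode a dataset by predicting each observation from the prefix seen so far and paying -log2 p for the outcome. Below is a minimal toy sketch with a Laplace-smoothed frequency predictor on a binary sequence; the function name and the example sequences are illustrative assumptions, not the paper's setup.

```python
import math

def prequential_code_length(data):
    # Code length (in bits) of a binary sequence under a learner that,
    # at each step, predicts the next symbol from the prefix observed
    # so far (Laplace-smoothed frequency estimate), then observes it.
    ones, total, bits = 0, 0, 0.0
    for x in data:
        p_one = (ones + 1) / (total + 2)  # Laplace rule of succession
        p = p_one if x == 1 else 1 - p_one
        bits += -math.log2(p)
        ones += x
        total += 1
    return bits

# A highly regular sequence: the learner quickly becomes confident,
# so the total code length is far below 1 bit per symbol.
biased = [1] * 90 + [0] * 10
print(prequential_code_length(biased) < len(biased))  # True
```

The connection to the tweet: a learner that generalizes well from short prefixes makes each subsequent symbol cheap to encode, so good compressors of datasets and good learners coincide.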
@EricElmoznino
Eric Elmoznino
1 year
2/5 The goal of ML is generalization. From Occam’s razor, this means that we want simple models that explain the training data. However, most learning algorithms in ML only explicitly try to minimize training error, not model complexity.
1
0
3
@EricElmoznino
Eric Elmoznino
1 year
Introducing our new paper explaining in-context learning through the lens of Occam’s razor, giving a normative account of next-token prediction objectives. This was with @Tom__Marty @tejaskasetty @le0gagn0n @sarthmit @MahanFathi @dhanya_sridhar @g_lajoie_
arxiv.org
A central goal of machine learning is generalization. While the No Free Lunch Theorem states that we cannot obtain theoretical guarantees for generalization without further assumptions, in...
3
24
103
@g_lajoie_
Guillaume Lajoie
1 year
How continuous neural activity learns and supports discrete, symbolic & compositional processes remains an important question for Cog Sci and AI. Here we explore ways to achieve both symbolic and sub-symbolic processing using attractor dynamics.
arxiv.org
Symbolic systems are powerful frameworks for modeling cognitive processes as they encapsulate the rules and relationships fundamental to many aspects of human reasoning and behavior. Central to...
3
33
157
@g_lajoie_
Guillaume Lajoie
1 year
New preprint where we ask whether psychedelic-induced hallucinations can be explained by the role of dendrites in learning mechanisms in the brain. In short: classical psychedelics might hijack physiological gating mechanisms in generative learning.
biorxiv.org
Classical psychedelics induce complex visual hallucinations in humans, generating percepts that are coherent at a low level, but which have surreal, dream-like qualities at a high level. While there...
0
8
32
@sarthmit
Sarthak Mittal
1 year
Excited about new work on ICL comparing current implicit approaches with explicit inference of task latent variables using transformers. Joint work with @EricElmoznino, Leo Gagnon, @sangnie, @dhanya_sridhar and @g_lajoie_ Preprint out now at:
1
9
28
@michaelfbonner
Mick Bonner
2 years
Untrained CNNs can predict visual cortex representations way better than you might expect! New preprint led by a brilliant student, @AtlasKazemian
@AtlasKazemian
Atlas Kazemian
2 years
What underlies the emergence of cortex-aligned representations in DNNs? Large-scale pre-training has been a major focus, but we show that certain architectural manipulations can yield high brain similarity even in untrained CNNs. w/@michaelfbonner
0
8
40
@Mila_Quebec
Mila - Institut québécois d'IA
2 years
Congratulations to Mila researchers for their paper "Amortizing intractable inference in large language models", which received an Honorable Mention at #ICLR2024! @edwardjhu @JainMoksh @EricElmoznino @you_kad @g_lajoie_, Yoshua Bengio, Nikolay Malkin @iclr_conf
@iclr_conf
ICLR 2026
2 years
Announcing the #ICLR2024 Outstanding Paper Awards: https://t.co/0k9LJwNE6o Shoutout to the awards committee: @eunsolc, @katjahofmann, @liu_mingyu, @nanjiang_cs, @guennemann, @optiML, @tkipf, @CevherLIONS
0
8
41