Balázs Kégl
@balazskegl
Followers 3K · Following 15K · Media 790 · Statuses 9K
Head of AI Research @HuaweiFr.
Orsay, France
Joined May 2011
One of the conversations I enjoyed the most: with @wildiris19, a retired electronics engineer with whom I often converse here on X, we explored AI, hardware, Platonic patterns, and nested agency in free flow, and got somewhere together.
Dr John Dee, Elizabeth I's court magician, thought that manipulating the right symbols and gazing into a black mirror was a way of summoning nonhuman entities. We, on the other hand, are much less superstitious.
Once you accept that consciousness is real, and that denying it leads to a bad performative contradiction, you have no choice but to accept woo-woo. The only choice you have is where to put it.
Explain the double-slit experiment. What's woo-woo are claims that consciousness emerges from complex computing among cartoon neurons in one low frequency range.
This is a somewhat half-baked post, but I find it fascinating that, given all of the scientific breakthroughs we've made, we still lack a compelling unifying theory of emotions. The majority of research tends to optimize for 'regulation', aka suppression + coping strategies, as
📢 New paper alert!! How can you use policy gradient methods without explicit rewards? We address this question in our new work "From Data to Rewards: a Bilevel Optimization Perspective on Maximum Likelihood Estimation" 📜 https://t.co/mt0JhC7ZT7 🖥️ https://t.co/CtiqMyUt9q 1/🧵
🤝 Co-led with our brilliant intern: Gabriel Singer ❤️ Huge thanks to our wonderful collaborators at Huawei Noah's Ark Paris (@corentinlger, @youssef_attiaeh, @albertcthomas, @balazskegl), Cognizant AI Lab (@_GPaolo), and KAUST (@MaurizioFilip19).
Please watch @geoffreyhinton's talk before replying. Please read Schrödinger's book "What is Life?" as well as the follow-up book with the same title by another Nobel laureate. I also highly recommend Daniel Dennett's books. Or ask your favourite LLM 😉 Let's please increase
An unscripted and wide-ranging conversation that ends with more questions than answers. In other words, the best kind of conversation. I’ve been following Balázs on X for almost 2 years. To be able to sit down in conversation with him has been an amazing opportunity. And
@wildiris19 Related episodes: @drmichaellevin : https://t.co/vU63jgBZxY
@Mark_Solms : https://t.co/TkCAUvsWkv
@WiringTheBrain : https://t.co/lEWMDKgSer
@drmichaellevin : https://t.co/Afv0vyRcK7
Embodied AI: https://t.co/P7PvRdrq6D
Alexander Ororbia: https://t.co/HqzX8qNwHQ
Yogi Jaeger:
@wildiris19
00:00:00 Intro: Glen's electronics engineer background and his interest in embodied AI.
00:07:11 Why is embodied AI not mainstream? The anxiety of the pure Platonist AI researcher.
00:08:31 Michael Levin and his Platonic turn. Jonathan Pageau's fractal ontology. Idealism vs
This is insane. New AI model from Samsung, 10,000x smaller than DeepSeek and Gemini 2.5 Pro, just beat them on ARC-AGI 1 and 2. Samsung's Tiny Recursive Model (TRM) is about 10,000x smaller than typical LLMs yet smarter, because it thinks recursively instead of just predicting
My brain broke when I read this paper. A tiny 7 million parameter model just beat DeepSeek-R1, Gemini 2.5 Pro, and o3-mini at reasoning on both ARC-AGI 1 and ARC-AGI 2. It's called Tiny Recursive Model (TRM), from Samsung. How can a model 10,000x smaller be smarter? Here's how
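The thread is truncated before the "here's how", but the core idea it names — a tiny network applied recursively to refine a latent answer state, rather than one giant forward pass — can be sketched in a few lines. This is a hedged toy illustration, not TRM's actual architecture; every name, size, and operation below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately tiny "network": one shared weight matrix plus a tanh
# nonlinearity. TRM itself is a small trained model; this toy only
# demonstrates the recursion pattern, not the real architecture.
W = rng.normal(scale=0.1, size=(16, 16))

def refine(z, x):
    """One refinement step: update the latent state z given the input x."""
    return np.tanh((z + x) @ W)

def recursive_solve(x, n_steps=8):
    """Apply the SAME small network n_steps times.

    Reusing the weights at every step buys effective depth (more
    computation spent per input) without adding any parameters --
    the sense in which a tiny model can "think recursively instead
    of just predicting" in a single pass.
    """
    z = np.zeros_like(x)
    for _ in range(n_steps):
        z = refine(z, x)
    return z

x = rng.normal(size=16)
answer_state = recursive_solve(x, n_steps=8)
print(answer_state.shape)  # (16,)
```

The design point is that the loop, not the parameter count, controls how much computation the model spends on a hard input: more refinement steps cost time but no extra weights.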
How is it possible that Claude Sonnet 4.5 is able to work for 30 hours to build an app like Slack?! The system prompts have been leaked, and Sonnet 4.5's reveals its secret sauce! Here's how the prompt enables Sonnet 4.5 to autonomously grind out something
The narrative around LLMs is that they got better purely by scaling up pretraining *compute*. In reality, they got better by scaling up pretraining *data*, while compute is only a means to the end of cramming more data into the model. Data is the fundamental bottleneck. You can't
This new Michael Levin interview from @balazskegl is truly captivating. https://t.co/c0rFDhqQip Kudos, Balazs, on your interviewing technique, and your own honest questions and pushback.
Related episodes: @Mark_Solms : https://t.co/TkCAUvsWkv
@WiringTheBrain : https://t.co/lEWMDKgSer
@drmichaellevin : https://t.co/OuxKEB5aYD
AI: https://t.co/BKUvEM2O7q
Ororbia: https://t.co/HqzX8qNwHQ
@yoginho : https://t.co/zlHIb5afqL
@B_Saintyves :
00:00:00 Intro
00:02:22 Mike's Platonic turn. Patterns are calling the shots. Prime numbers and cicadas. Anthrobots. Let's study the structure of emergence.
00:14:55 Why don't the Platonic and the "real" (Bard calls it pathic) realize each other, rather than one creating the other?
YouTube: https://t.co/vU63jgCxnw
Spotify: https://t.co/cbxmwsdtAM
Apple:
podcasts.apple.com
Podcast Episode · I, scientist with Balazs Kegl · 09/25/2025 · 59m
My second conversation with @drmichaellevin, the developmental biologist from Tufts University, where we explore his ontology, the structure of the world. Are we biological interfaces to Platonic minds, or do bodies and minds co-create or realize each other? Is ChatGPT conscious?