MancerAI
@MancerAI_
Followers 1K · Following 19K · Media 2K · Statuses 13K
e/acc AI dev | Physics → ML | Research drops, shitposts, cognition hacks | Building odd futures
https://cal.com/mancerai
Joined October 2024
I think this is something @Hitchslap1 could be interested in too. Ofc this is built on my chess ReCoN network, but I realized it could be used to model/research IQ and cognition as well https://t.co/NRlWwY09RQ
Teachers, researchers, neurodivergent folks: reply with the cognitive constraint you’d most want simulated (distraction? slow processing? memory decay?). I’ll build and open-source the top-voted ones. Let’s actually make this useful.
The race to god-like AI is important. But I’m increasingly convinced that understanding imperfect, limited minds - the ones that fail in human ways - will do more for education and inclusion in the next decade.
Crucially: it’s fully observable. We can see exactly why a particular “mind” failed a task and instantly test whether more structure, fewer distractions, or external memory aids help. Imagine simulating 30 different cognitive profiles in minutes and discovering which teaching strategies actually work for each one.
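A minimal sketch of what "fully observable" buys you, not the actual ReCoN code: a toy recall task with a capped working memory, where we see exactly which steps were lost and can test the "more slots" intervention directly. All names and values here are illustrative.

```python
def recall_task(items, memory_slots):
    """Recall a step list with a capped working memory (oldest evicted first).

    Returns (recalled, forgotten) so we can see exactly which steps were
    lost -- the observability the thread is about.
    """
    memory = []
    for item in items:
        memory.append(item)
        if len(memory) > memory_slots:
            memory.pop(0)                      # the oldest step falls out
    forgotten = [i for i in items if i not in memory]
    return memory, forgotten

steps = ["read problem", "note givens", "pick formula",
         "substitute", "solve", "check answer"]
for slots in (6, 3):                           # typical vs. reduced capacity
    recalled, forgotten = recall_task(steps, slots)
    print(f"{slots} slots -> forgot {forgotten}")
```

With 3 slots the simulated mind drops the first half of the procedure, and the trace tells you precisely which steps an external memory aid would need to hold.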
We can dial in high neural noise (where white/background noise paradoxically helps, matching Söderlund’s stochastic resonance findings in ADHD), weak inhibition (which causes impulsivity), or dopamine-like gain curves. Other knobs give low working-memory profiles or slow processing.
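A toy NumPy sketch of the stochastic-resonance effect referenced above, not the ReCoN implementation: a weak sub-threshold signal fed into a threshold unit, where a moderate amount of noise improves detection and too much noise degrades it again. Thresholds and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def threshold_unit(signal, noise_sd, threshold=1.0):
    """Fire (1.0) whenever signal + Gaussian noise crosses the threshold."""
    noisy = signal + rng.normal(0.0, noise_sd, size=signal.shape)
    return (noisy > threshold).astype(float)

# A weak, sub-threshold input: on its own it can never fire the unit.
t = np.linspace(0.0, 10.0, 5000)
signal = 0.8 * (np.sin(2.0 * np.pi * t) > 0)   # peaks at 0.8, threshold is 1.0

# Sweep the noise level: signal/output correlation peaks at a moderate
# noise level, then degrades -- the stochastic-resonance curve.
for noise_sd in (0.05, 0.1, 0.3, 0.6, 1.5):
    out = threshold_unit(signal, noise_sd)
    corr = np.corrcoef(signal, out)[0, 1] if out.std() > 0 else 0.0
    print(f"noise_sd={noise_sd:4.2f}  correlation={corr:.2f}")
```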
We can cap planning depth, shrink working memory, add noise to activations, raise/lower inhibition… Suddenly the agent could start forgetting steps, getting stuck in loops, or jumping between ideas - in controllable, reproducible ways.
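One way those knobs could be bundled for reproducibility; a hedged sketch, since the real ReCoN code may expose different parameters. Every field name and value below is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class CognitiveProfile:
    """One reproducible bundle of constraint knobs for a simulated mind."""
    planning_depth: int = 6           # max lookahead before the planner gives up
    working_memory_slots: int = 7     # items held before the oldest is evicted
    activation_noise_sd: float = 0.0  # Gaussian noise added to unit activations
    inhibition: float = 1.0           # < 1.0 -> impulsive, premature responses
    seed: int = 0                     # fixes the noise stream for replication

# Two contrasting profiles for the same task battery:
baseline = CognitiveProfile()
constrained = CognitiveProfile(planning_depth=3, working_memory_slots=4,
                               activation_noise_sd=0.4, inhibition=0.6)
```

Because the seed and every knob are explicit, a "forgetting steps" failure can be replayed exactly, which is what makes the degradation controllable rather than anecdotal.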
In ReCoN, "thinking" = small modules sending requests and waiting for confirmations. We can watch every step, slow it down, break it, and (for this kind of study) intentionally limit it.
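A minimal request/confirmation sketch in the spirit of ReCoN script nodes, not the actual architecture: a parent node requests its children in sequence and only confirms once each child confirms, with every state change logged so the "thinking" is inspectable. Node names and the trace format are invented for illustration.

```python
class Node:
    """Toy script node: request children in order, confirm when all confirm."""

    def __init__(self, name, children=None, action=None):
        self.name = name
        self.children = children or []
        self.action = action          # leaf nodes run a callable -> True/False
        self.state = "inactive"

    def request(self, trace):
        """Run the node; every state change is appended to `trace`."""
        self.state = "requested"
        trace.append((self.name, self.state))
        if self.action is not None:   # terminal node: do the work
            ok = self.action()
        else:                         # script node: children in sequence,
            ok = all(c.request(trace) for c in self.children)  # stop on failure
        self.state = "confirmed" if ok else "failed"
        trace.append((self.name, self.state))
        return ok

# "Make tea" as an inspectable script: each step is visible in the trace.
plan = Node("make_tea", children=[
    Node("boil_water", action=lambda: True),
    Node("add_teabag", action=lambda: True),
    Node("pour", action=lambda: False),   # break a step on purpose
])
trace = []
plan.request(trace)
for name, state in trace:
    print(f"{name:12s} -> {state}")
```

Breaking the "pour" step makes the whole plan fail, and the trace shows exactly where and why, which is the property that lets you slow it down, break it, or deliberately limit it.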
Today’s LLMs are black boxes. They give answers, but we never see the reasoning path. I’m working on ReCoN (Request–Confirmation Network): a transparent architecture that thinks in explicit, inspectable steps, more like a simplified cognitive model.
Everyone is racing to build superintelligence. What if we try the opposite: deliberately building weaker, flawed, human-like AI minds? Not to mock anyone, but to understand how different brains actually struggle and learn.
I'm considering hosting a Space tomorrow. Should I, and if so, what should I talk about? Or should we just have an open discussion?
"Game over". Google may have unlocked true continual learning in LLMs, potentially bridging AI toward human-like adaptability without full retraining. Google's Nested Learning paradigm, as summarized in the post, reframes AI training as nested optimization problems updating at
Grok about to break the fourth wall. You feel like you're in control?