echo.hive
@hive_echo
Followers 12K · Following 22K · Media 3K · Statuses 15K
Learning advanced math & physics just because 🟣 Consulting/teaching: https://t.co/30NKgBMNCn 🔴 Open Source: https://t.co/Ejak3wEYJ0
Joined July 2022
It learned !!! 🥳 The MoE mouse, powered by its Hebbian learning controller, has learned from homeostatic pressure to let whichever drive is most urgent take the lead, so the right network guides it toward the need that matters most in that moment. It also adapts on the fly to
Can the MoE mouse, with 3 simple networks regulated by 3 homeostatic pressures, manage to look after itself? (open source – link below) The mouse has just three rule-based ring attractor networks: 1st -> cheese, 2nd -> water, 3rd -> rest, and 3 pressures that rise over time:
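The project's source is patron-only, so here is only a minimal sketch of the setup described above, assuming three drives that rise at fixed rates, an argmax gate, and a simple Hebbian reinforcement of pressure-to-expert links; all names and constants are made up for illustration:

```python
import numpy as np

# Toy model of the gating idea: three homeostatic pressures rise over
# time, and whichever drive is most urgent selects the expert network
# (a stand-in for the rule-based ring attractors) that steers the mouse.
PRESSURE_RATES = np.array([0.010, 0.015, 0.005])   # hunger, thirst, fatigue (assumed)
EXPERTS = ["seek_cheese", "seek_water", "rest"]

class HebbianGate:
    def __init__(self, n=3, lr=0.05):
        rng = np.random.default_rng(0)
        # gating weights map pressures to expert scores
        self.W = np.eye(n) + 0.01 * rng.standard_normal((n, n))
        self.lr = lr

    def select(self, pressures):
        return int(np.argmax(self.W @ pressures))

    def update(self, pressures, chosen, relief):
        # Hebbian-style: strengthen pressure->expert links in proportion
        # to how much acting on the chosen expert relieved its drive.
        self.W[chosen] += self.lr * relief * pressures
        self.W[chosen] /= np.linalg.norm(self.W[chosen])   # keep weights bounded

pressures = np.zeros(3)
gate = HebbianGate()
for step in range(2000):
    pressures += PRESSURE_RATES          # all three drives rise over time
    pre = pressures.copy()
    expert = gate.select(pre)
    relief = min(pre[expert], 0.5)       # acting on the urgent drive lowers it
    pressures[expert] -= relief
    gate.update(pre, expert, relief)
```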
Consider becoming a patron to access the source code for this and 500+ other projects, plus exclusive videos and weekly meetings: https://t.co/yTj6inS6qB
MoE mouse survived for 10 hours in a single run! 36800 seconds! I let it run overnight and learn, and it ran for a total of 65 episodes; still running too. Pretty amazing that a simple network like this learned this well, and entirely online. The network is 3 ring attractors (rule based)
Embodied Neural Cellular Automata, bringing together the inspirations from 3 papers into an embodied agent: a 3×3 CA body embedded in a 9×9 sense field that updates via local convolutions, inside a world with food gradients. (work in progress) The creature's "brain" is a small
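A minimal sketch of the local-convolution update described above, assuming a 4-channel 9×9 field, a random stand-in for the learned kernel, and a residual tanh update; the channel layout and food gradient are assumptions:

```python
import numpy as np

H = W = 9
CHANNELS = 4                        # e.g. body, food scent, two hidden channels
state = np.zeros((CHANNELS, H, W))
state[0, 3:6, 3:6] = 1.0            # 3x3 CA "body" in the middle of the sense field
state[1] = np.tile(np.linspace(0, 1, W), (H, 1))   # food gradient rising to the right

rng = np.random.default_rng(0)
kernels = 0.1 * rng.standard_normal((CHANNELS, CHANNELS, 3, 3))

def conv3x3(grid, k):
    """3x3 convolution with zero-padded borders (same output size)."""
    padded = np.pad(grid, 1)
    out = np.zeros_like(grid)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

def ca_step(state, kernels):
    # every channel is updated from a 3x3 neighborhood of every channel
    new = np.zeros_like(state)
    for o in range(CHANNELS):
        for i in range(CHANNELS):
            new[o] += conv3x3(state[i], kernels[o, i])
    return np.tanh(state + new)      # residual update with squashing

for _ in range(10):
    state = ca_step(state, kernels)
```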
Studying another paper: Self-Organizing Neural Networks in Novel Moving Bodies. Link to the paper in the comments.
Today's studied paper: Why Can't Transformers Learn Multiplication? https://t.co/LZp8VYREAD
Joy of life-long learning, day 87... Today's paper was so inspiring, didn't expect that; almost started it as a chore.
7/20 papers studied: 35% - Today: 5%
Manifold learning: 85% - Today: 15%
--------
Probability: 42% - Today: 0%
LLM training - Stanford Lecture: 16% - Today: 0%
Joy of life-long learning, day 86... Today I was tired (5 hrs of sleep) but still built a project, studied 1 paper, and read 3 chapters of probability.
7/20 papers studied: 35% - Today: 10%
Probability: 42% - Today: 6%
--------
LLM training - Stanford Lecture: 16% - Today: 0%
Manifold
I thought this paper might be boring. I was very much wrong! Studying it led to all kinds of interesting places.
Surprising learning from the paper: Heavy regularization forces simpler models. The true algorithm is the simplest explanation of the data, so regularization should push toward the true algorithm. Problems with this assumption (sometimes) - The algorithmic solution may not be
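A toy illustration of that claim, assuming the true rule is linear and the model is an overparameterized degree-7 polynomial: heavy L2 (ridge) regularization shrinks the unnecessary higher-order coefficients toward zero, i.e. toward the simpler explanation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y = 2.0 * x + 0.05 * rng.standard_normal(50)     # true algorithm: y = 2x

X = np.vander(x, N=8, increasing=True)           # features x^0 .. x^7
for lam in (0.0, 10.0):
    # ridge closed form: w = (X^T X + lam I)^(-1) X^T y
    w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)
    print(f"lambda={lam:5.1f}  sum |higher-order coeffs| = {np.abs(w[2:]).sum():.3f}")
```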
Studying a paper: Why Can't Transformers Learn Multiplication? Super interesting paper. Studying with gpt-5.1 (no-think), and here is its commentary, which is so good! • I find this a strong demonstration of a clear phenomenon: even for a "simple" arithmetic task, transformers
Tetris in Cartesian and polar coordinates, and even on a Möbius loop. Adjustable grid dimensions, speed of all tetromino motions, combo moves; adjustable everything. Made with Gemini 2.5 Pro, finished with GPT-5; zero human-written code, 100% vibe coded.
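A minimal sketch of the coordinate idea, assuming the playfield stays an ordinary (row, col) grid and only the rendering changes: columns become angles and rows become radii when drawn in polar form. The screen constants are made up:

```python
import math

ROWS, COLS = 20, 12
CX, CY, R_INNER, R_STEP = 400, 300, 60, 10   # hypothetical screen layout

def cell_center(row, col, polar=False):
    if not polar:
        return 20 * col, 20 * row            # plain Cartesian grid
    theta = 2 * math.pi * col / COLS         # column -> angle (wraps around)
    r = R_INNER + R_STEP * row               # row -> radius
    return CX + r * math.cos(theta), CY + r * math.sin(theta)
```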
ICoT (the successful method) explained: At the start of training the model sees every step written out. As training goes on, those intermediate steps are gradually removed. The idea is that the model internalises the step-by-step reasoning even after the written steps are no
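A toy sketch of that curriculum, assuming a simple front-to-back removal schedule; the trace format and schedule are assumptions, not the paper's exact recipe:

```python
# Early in training the target contains every intermediate step; as
# training progresses, steps are dropped until only the answer remains.
def make_target(steps, answer, frac_removed):
    keep_from = int(len(steps) * frac_removed)
    return " ".join(steps[keep_from:] + [f"answer: {answer}"])

steps = ["2*6=12 -> write 2, carry 1", "1*6+1=7 -> write 7"]   # toy trace for 12*6
for epoch, frac in enumerate([0.0, 0.5, 1.0]):
    print(f"epoch {epoch}: {make_target(steps, 72, frac)}")
```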
Studying a paper: Conformal Transformations for Symmetric Power Transformers. This paper is about improving how far self-attention can effectively "reach" in long contexts. An interesting thing: I decided to search "what conformal is", and that led me to "complex analysis".
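For reference, the standard complex-analysis fact behind the term:

```latex
% A holomorphic map with nonvanishing derivative is conformal,
% i.e. it preserves angles locally:
f \colon U \subseteq \mathbb{C} \to \mathbb{C} \ \text{holomorphic},\quad
f'(z_0) \neq 0 \;\Longrightarrow\; f \ \text{is conformal at } z_0.
```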
I lost 8 hours of progress due to Colab stopping 😕 I really need to learn to use this cloud stuff better. Still continuing from the prior checkpoint.
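A minimal checkpointing sketch for this situation, assuming a PyTorch training loop and a mounted Google Drive; the path and interval are assumptions:

```python
import os
import torch

CKPT = "/content/drive/MyDrive/run/ckpt.pt"   # assumes Drive is already mounted

def save_ckpt(step, model, opt):
    tmp = CKPT + ".tmp"
    torch.save({"step": step,
                "model": model.state_dict(),
                "opt": opt.state_dict()}, tmp)
    os.replace(tmp, CKPT)        # atomic swap: a disconnect can't corrupt the file

def load_ckpt(model, opt):
    if not os.path.exists(CKPT):
        return 0                 # fresh start
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    opt.load_state_dict(state["opt"])
    return state["step"]         # call save_ckpt every few minutes of training
```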
Here is the working version (the "It learned !!!" MoE mouse post quoted above).