echo.hive

@hive_echo

Followers: 12K · Following: 22K · Media: 3K · Statuses: 15K

Learning advanced math & physics just because 🟣 Consulting/teaching: https://t.co/30NKgBMNCn 🔴 Open Source: https://t.co/Ejak3wEYJ0

Joined July 2022
@hive_echo
echo.hive
23 hours
Can the MoE mouse with 3 simple networks regulated by 3 homeostatic pressures manage to look after itself? (open source – link below) The mouse just has three rule-based ring attractor networks: 1st -> cheese, 2nd -> water, 3rd -> rest, and 3 pressures that rise over time:
4
2
43
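(For anyone curious how a setup like this might be wired, here is a minimal sketch of the idea, not the project's actual code. The ring size, rise rates, and goal encoding are all my assumptions.)

```python
import numpy as np

N = 32  # neurons around each heading ring (assumed size)

def ring_bump(goal_angle, kappa=4.0):
    # Rule-based ring attractor: a von Mises bump of activity
    # centered on the direction of the current goal.
    angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
    act = np.exp(kappa * np.cos(angles - goal_angle))
    return act / act.sum()

class Mouse:
    def __init__(self):
        # three homeostatic pressures that rise over time
        self.pressure = {"cheese": 0.0, "water": 0.0, "rest": 0.0}
        self.rise = {"cheese": 0.003, "water": 0.005, "rest": 0.002}

    def step(self, goal_angles):
        # goal_angles: drive -> direction of the cheese / water / rest spot
        for k in self.pressure:
            self.pressure[k] = min(1.0, self.pressure[k] + self.rise[k])
        urgent = max(self.pressure, key=self.pressure.get)  # most urgent drive
        bump = ring_bump(goal_angles[urgent])               # its ring takes over
        angles = np.linspace(0, 2 * np.pi, N, endpoint=False)
        heading = np.angle(np.sum(bump * np.exp(1j * angles)))
        return urgent, heading
```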
@hive_echo
echo.hive
57 minutes
consider becoming a patron to access the source code for this, 500+ other projects, exclusive videos and weekly meetings: https://t.co/yTj6inS6qB
0
0
1
@hive_echo
echo.hive
57 minutes
MoE mouse survived for 10 hours in a single run! 36800 seconds! I let it run overnight and learn, and it ran for a total of 65 episodes (still running, too). Pretty amazing that a simple network like this learned this well, and entirely online. The network is 3 ring attractors (rule based)
@hive_echo
echo.hive
15 hours
It learned !!! 🥳 The MoE mouse, powered by its Hebbian learning controller, has learned from homeostatic pressure to let whichever drive is most urgent take the lead, so the right network guides it toward the need that matters most in that moment. It also adapts on the fly to
2
1
5
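(A guess at what a Hebbian controller on top of those pressures could look like: a reward-modulated, three-factor Hebbian rule that strengthens pressure-to-gate connections whenever gating a network coincides with a pressure dropping. This is my sketch of the idea, not the actual controller; the relief signal, learning rate, and decay are assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(3, 3))  # pressures -> gate logits
lr, decay = 0.05, 0.01                  # assumed learning / decay rates

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def controller_step(pressures, relief):
    # pressures: current hunger/thirst/fatigue levels (length-3 vector)
    # relief: how much the pressures just dropped (the third factor)
    global W
    gate = softmax(W @ pressures)  # mixture weights over the 3 ring networks
    # three-factor Hebb: pre (pressure) x post (gate) x relief, plus decay
    W += lr * relief * np.outer(gate, pressures) - decay * W
    return gate
```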
@hive_echo
echo.hive
2 days
Embodied Neural Cellular Automata, bringing together inspirations from 3 papers into an embodied agent: a 3×3 CA body embedded in a 9×9 sense field that updates via local convolutions, inside a world with food gradients. (work in progress) The creature’s “brain” is a small
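(A rough sketch of that update loop as I read it; the grid sizes come from the tweet, while the kernel, rates, and food-gradient shape are made up:)

```python
import numpy as np

# 9x9 sense field with a food gradient peaking in one corner (assumed shape)
yy, xx = np.mgrid[0:9, 0:9]
food = np.exp(-((yy - 1.0) ** 2 + (xx - 7.0) ** 2) / 8.0)

state = np.zeros((9, 9))
state[3:6, 3:6] = 1.0  # the 3x3 CA body embedded in the middle

kernel = np.array([[0.05, 0.20, 0.05],
                   [0.20, -1.0, 0.20],
                   [0.05, 0.20, 0.05]])  # made-up local update rule

def local_conv(a, k):
    # 'same'-size 3x3 neighborhood convolution (ML-style cross-correlation),
    # kept numpy-only so the sketch stays self-contained
    out = np.zeros_like(a)
    p = np.pad(a, 1)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

for _ in range(20):
    # every cell updates from its 3x3 neighborhood plus the sensed food level
    state = np.clip(state + 0.1 * local_conv(state, kernel) + 0.05 * food, 0.0, 1.0)
```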
@hive_echo
echo.hive
2 days
Studying another paper: Self-Organizing Neural Networks in Novel Moving Bodies. Link to paper in comment
1
4
31
@hive_echo
echo.hive
11 hours
today's studied paper: Why Can't Transformers Learn Multiplication? https://t.co/LZp8VYREAD
0
0
1
@hive_echo
echo.hive
11 hours
Joy of life-long learning day 87... today's paper was so inspiring, didn't expect that. Almost started it as a chore
7/20 papers studied: 35% - Today: 5%
Manifold learning: 85% - Today: 15%
--------
Probability: 42% - Today: 0%
LLM training-Stanford Lecture: 16% - Today: 0%
@hive_echo
echo.hive
2 days
Joy of life-long learning day 86... today I was tired (5 hrs of sleep) but still built a project, studied 1 paper and 3 chapters of probability
7/20 papers studied: 35% - Today: 10%
Probability: 42% - Today: 6%
--------
LLM training-Stanford Lecture: 16% - Today: 0%
Manifold
1
0
8
@hive_echo
echo.hive
12 hours
Thought this paper might be boring. I was very much wrong! Studying it led to all kinds of interesting places
0
0
1
@hive_echo
echo.hive
12 hours
Surprising learning from the paper: heavy regularization forces simpler models. The true algorithm is the simplest explanation of the data, so regularization should push toward the true algorithm. Problems with this assumption (sometimes): - The algorithmic solution may not be
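(To make the premise concrete: "heavy regularization" here means something like a large explicit L2 penalty or weight decay, biasing training toward low-norm, lower-complexity weight configurations. A toy PyTorch illustration; the λ value is arbitrary:)

```python
import torch

model = torch.nn.Linear(8, 1)
x, y = torch.randn(32, 8), torch.randn(32, 1)
lam = 1.0  # "heavy" regularization strength (arbitrary)

task_loss = torch.nn.functional.mse_loss(model(x), y)
l2 = sum((p ** 2).sum() for p in model.parameters())
loss = task_loss + lam * l2  # large lam pushes toward lower-complexity fits
loss.backward()
```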
@hive_echo
echo.hive
14 hours
Studying a paper: Why can’t transformers learn multiplication
Super interesting paper. Studying with gpt-5.1 (no-think) and here is its commentary, which is so good!
• I find this a strong demonstration of a clear phenomenon: even for a “simple” arithmetic task, transformers
2
0
10
@ZajcGal
Gal Zajc
3 months
Tetris in Cartesian and polar coordinates, and even on a Möbius loop. Adjustable grid dimensions, speed of all tetromino motions, combo moves, adjustable everything. Made with Gemini 2.5 Pro, finished with GPT-5, zero human-written code, 100% vibe coded.
1
1
16
@hive_echo
echo.hive
13 hours
Emergent Fourier-like structure in a transformer:
0
0
2
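(The usual way this kind of structure gets spotted, for anyone wanting to reproduce the probe: FFT the learned embeddings along the token-value axis and look for a few dominant frequencies. The sketch below uses a random matrix as a stand-in for real trained weights.)

```python
import numpy as np

# Stand-in for a learned embedding matrix over digit tokens 0..9
E = np.random.randn(10, 64)                # vocab x d_model (random placeholder)
spectrum = np.abs(np.fft.rfft(E, axis=0))  # FFT over the token-value axis
dominant = spectrum.argmax(axis=0)         # peak frequency per embedding dim
# Fourier-like structure shows up as a few frequencies dominating many dims
print(np.bincount(dominant, minlength=spectrum.shape[0]))
```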
@hive_echo
echo.hive
14 hours
ICoT (the successful method) explained: at the start of training the model sees every step written out. As training goes on, those intermediate steps are gradually removed. The idea is that the model internalises the step-by-step reasoning even after the written steps are no
0
0
1
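(The schedule is easy to picture in code. My sketch of the idea, not the paper's implementation: drop intermediate steps from the front of the chain as training progresses, until only question -> answer remains.)

```python
def icot_target(question, steps, answer, epoch, total_epochs):
    # Fraction of the chain-of-thought removed grows with training progress:
    # epoch 0 keeps every step, the final epochs keep none.
    frac = epoch / total_epochs
    drop = int(round(len(steps) * frac))
    kept = steps[drop:]  # steps are removed from the beginning of the chain
    return " ".join([question] + kept + ["->", answer])

steps = ["4*2=8", "3*2=6"]
for epoch in (0, 2, 4):
    print(icot_target("34*2=?", steps, "68", epoch, total_epochs=4))
```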
@hive_echo
echo.hive
2 days
Studying a paper: Conformal Transformations for Symmetric Power Transformers. This paper is about improving how far self-attention can effectively “reach” in long contexts. An interesting thing: I decided to search “what conformal is” and that led me to “complex analysis”
5
1
19
@hive_echo
echo.hive
14 hours
I lost 8 hours of progress due to Colab stopping 😕 I really need to learn to use this cloud stuff better. Still continuing from the prior checkpoint
0
0
1
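(For what it's worth, the usual guard against Colab disconnects is checkpointing to Drive every few minutes. A minimal sketch; the path and interval are placeholders:)

```python
import os
import torch

# In Colab, mount Drive first (assumed environment):
#   from google.colab import drive; drive.mount("/content/drive")
CKPT = "/content/drive/MyDrive/moe_mouse.pt"  # placeholder path

def save_ckpt(step, model, opt):
    # write-then-rename so a disconnect mid-save can't corrupt
    # the previous checkpoint
    tmp = CKPT + ".tmp"
    torch.save({"step": step,
                "model": model.state_dict(),
                "opt": opt.state_dict()}, tmp)
    os.replace(tmp, CKPT)

# in the training loop, e.g.: if step % 500 == 0: save_ckpt(step, model, opt)
```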
@hive_echo
echo.hive
15 hours
Here is the working version:
0
0
1
@hive_echo
echo.hive
15 hours
consider becoming a patron to access the source code for this, 500+ other projects, exclusive videos and weekly meetings: https://t.co/yTj6inS6qB
0
0
1
@hive_echo
echo.hive
15 hours
it worked!!!! 🥳 I can hardly believe it
0
0
2
@hive_echo
echo.hive
15 hours
it would be so amazing if this works!! currently not :(
2
0
1
@hive_echo
echo.hive
16 hours
Looks beautiful and feels like it will learn 🤞
0
0
0