
Dzmitry Hramyka
@grom_dimon
Followers
248
Following
2K
Media
21
Statuses
164
Co-Founder of @axioma_ai & @0xNeurobro. Building AGI Agents.
Berlin, Germany
Joined August 2018
- u can copy design
- u can copy ideas
- u can clone data
But u can never copy real users! Mass adoption has already begun
Some technical insights from the last few months behind Neurodex, powered by @0xNeurobro. Yesterday, we officially entered a new era of how people act, operate, and follow smart money (aka whales). Every single week our team solves some of the most complex engineering challenges out
0
0
8
Really great meeting @jessepollak & @XenBH. The passion around @baseapp is just based. Can’t wait to bring our Neurobros into this journey together
🚨Daily Update Neurobros! This week we met @jessepollak, @XenBH & many of the Base team members here at Token2049. The energy at every Base event we attended was massive: we connected with builders, projects & potential partners all across the ecosystem! Big things are planned
1
0
10
It’s just so easy to know the market nowadays. Before @0xNeurobro, people spent hours on research and now all that brain power is in your pocket. Just incredible
It's so easy to make 20x in this cycle. All you need is an edge like the one I cited in this video. 👇 Chads like @DeSci_Guy and @LeaDrops are already making bags using this edge. I break it down in this
0
2
10
🚨Daily Update Neurobros! The core Neurobro team is right in the middle of #TOKEN2049 & it's been successful so far. Today we met the legend @ethermage along with some other Virtuals builders. Over the coming days, we'll be attending many more events, meeting with projects &
47
87
158
What’s more valuable: intelligence or consciousness? When non-conscious but highly intelligent algorithms know us better than we know ourselves, what will happen to politics, society and daily life?
3
0
12
gg
0
0
4
Everyone’s building agents. Few share what breaks when you try to scale. This is one of the rare examples that actually shares that real experience
Awesome AI Agent Frameworks update! The initial post got way more love than we expected, so I went deeper and shared all the lessons we’ve learned at @axioma_ai: what scales, what breaks, and how we pick the right stack for each project! Link below (as usual) 👇
0
1
7
Stats from today:
• Avg HR: 179 (initial plan failed early 😅, and failed quite drastically)
• Max HR: 198
• Moving time: 3:41:28 (watch showed marathon distance at 3:39:58)
• Avg pace: 5:11 / km
1
0
5
Today I ran the 51st @berlinmarathon and realized one important thing: business taught me to be patient, pragmatic & tactical. It’s useful, but it also makes you less open to simple life feelings: nothing can simply amaze you. At km 39 I was completely exhausted, in pain, ready
5
0
10
The democratization aspect of crypto caused fragmentation hell. Even a simple swap can have 50+ variations, all wrapping the same basic contract. Now your cookbook for full onchain pain:
1. Build a parser
2. Wait
3. In a few days a new "yet another genius" project ships a new
2
1
9
The current algo loves to push you into a bubble: the feed turns biased & way too political. If the team actually nails a niche-driven, AI-powered timeline, that’d be a massive shift. Excited to see this in action
The algorithm will be purely AI by November, with significant progress along the way. We will open source the algorithm every two weeks or so. By November or certainly December, you will be able to adjust your feed dynamically just by asking Grok.
0
0
6
AI will make thinking cheap. Creating stays expensive. The new moat is write-permission surfaces: blockspace, medicine, liquidity, logistics, law, ... If your product ends at a screen, you’re late. Wire it to execution
0
0
11
Evolution has no goal. Not perfection. Not complexity. Not intelligence. The only rule: what survives, survives. The 'purpose' or 'goal' we see in evolution is just our projection. No gene worship, no 'most adapted' creatures. There's just one law: don’t end & keep moving
0
0
8
Bottom line: leading labs aren’t chasing one universal model anymore. They’re building fleets of specialists, combined with routing layers. This modular approach unlocks more efficiency, better performance per unit of compute, and higher adaptability. All this matters with the
2
0
6
The latest @OpenAI model (GPT-5) was quite surprising. At launch: no standout benchmarks, little love on LMArena. But after iterations, it’s become the model people want to use. Why? The architecture. GPT-5 isn’t a single monolith: it’s a router orchestrating multiple
1
0
6
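The router idea above can be caricatured in a few lines. A real routing layer is a learned model, and GPT-5's actual logic isn't public; this is only a sketch of the control flow, with every model name made up:

```python
# All model names below are hypothetical, for illustration only.
# A production router would be a trained classifier over the prompt,
# not hand-written keyword rules.
def route(prompt: str) -> str:
    """Dispatch a prompt to a specialist via cheap heuristics."""
    p = prompt.lower()
    if any(tok in p for tok in ("prove", "integral", "derivative")):
        return "math-specialist"       # heavy reasoning model
    if "def " in p or "```" in p:
        return "code-specialist"       # code-tuned model
    if len(p.split()) > 300:
        return "long-context-model"    # big context window
    return "fast-general-model"        # cheap default

print(route("Compute the integral of x^2"))   # math-specialist
```

The point is the shape, not the rules: one cheap decision up front, then only one specialist pays the compute bill.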
Both @deepseek_ai and @Kimi_Moonshot used MoE [https://t.co/LOXo0bzalt] for their top models. A basic idea, but not basic performance. So what’s the magic in MoE? The main trick is to turn bias into specialization: train experts to focus on certain domains, then use only a handful per
arxiv.org
The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been...
1
0
7
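The "only a handful of experts per token" trick boils down to a gate that scores all experts but activates just the top-k. A toy numpy sketch (not any lab's actual implementation; the linear "experts" stand in for real FFN blocks):

```python
import numpy as np

def topk_gate(x, W_gate, k=2):
    """Score every expert, keep the k best, softmax over just those."""
    logits = x @ W_gate                       # one score per expert
    idx = np.argsort(logits)[-k:]             # indices of the top-k experts
    w = np.exp(logits[idx] - logits[idx].max())
    return idx, w / w.sum()                   # renormalized gate weights

def moe_forward(x, experts, W_gate, k=2):
    """Run only the selected experts and mix their outputs by gate weight."""
    idx, w = topk_gate(x, W_gate, k)
    return sum(wi * experts[i](x) for i, wi in zip(idx, w))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
W_gate = rng.normal(size=(d, n_experts))
# Each "expert" here is a tiny linear map standing in for a real FFN.
expert_weights = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda v, W=W: v @ W for W in expert_weights]

x = rng.normal(size=d)
y = moe_forward(x, experts, W_gate, k=2)      # only 2 of the 4 experts run
```

With k=2 of 4 experts, each input pays for half the parameters while the model as a whole keeps all four specializations, which is exactly the bias-into-specialization payoff the tweet describes.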
After the LLM boom starting with GPT-3, many assumed scaling up (more parameters, more data) would lead directly to AGI. But releases of larger models (e.g. GPT-4.5) didn’t solve core issues. Meanwhile, small open-source models dramatically improved performance (remember
1
0
7
Bookmark 📌 this for future reference
In this Thread:
• The shift from general to specialized models
• MoE / routing architectures
• GPT-5 tweak
Let’s begin ↯
1
0
7