Distributed State

@DistStateAndMe

Followers: 3K
Following: 28K
Media: 716
Statuses: 9K

cult leader / exit liquidity / community mod @covenant_ai

subnet (3/39/81)
Joined April 2014
@DistStateAndMe
Distributed State
3 months
If I spent the rest of my life building on @opentensor , it would be a life well lived. https://t.co/WcEHZaqkhY Our sacred pact. Bittensor will claim its place in history
covenant.ai
One Covenant, Many Orders.
11
29
120
@KeithSingery
Keith Singery
2 days
We made it?
@coingecko
CoinGecko
2 days
Most Popular Blockchain Ecosystems in 2025 By Mindshare
1. Solana - 26.79%
2. Base - 13.94%
3. Ethereum - 13.43%
4. Sui - 11.77%
5. BNB Chain - 9.05%
6. XRP Ledger - 4.68%
7. Sonic - 2.29%
8. Cardano - 1.92%
9. Bittensor - 1.91%
10. Hyperliquid - 1.57%
11. TON - 1.23%
12.
4
3
17
@casper_hansen_
Casper Hansen
2 days
Nvidia is fast becoming open-source kings, they actually pre-trained, mid-trained, post-trained, and got better results than Qwen3 which is impressive! Now we only need to bully them about nvidia-open-model-license and multimodality...
11
23
237
@covenant_ai
covenant
3 days
Missed this week's TGIF Space? Catch up on everything happening across Covenant AI. This week: Basilica's rebuilt incentive mechanism, why decentralized compute economics are broken (and how we're fixing it), @NeurIPSConf recap, and a frank discussion on the value of patient
0
7
11
@jon_durbin
Jon Durbin
3 days
Don't look now, but this is actually quite huge šŸ‘€ https://t.co/M3HIfWLlZX Chutes can now serve as an IDP. Developers can now have users "Login with chutes" and all of the compute/inference costs pass along directly to that user and/or their subscription on chutes. No more
github.com
Make chutes an IDP with oauth2 style access/refresh tokens, configurable scopes/visibility, etc.
7
27
148
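The "Login with chutes" flow above relies on OAuth2-style access/refresh tokens. A minimal sketch of how a client might manage that token pair is below; the class, field names, and refresh callback are hypothetical illustrations, not the actual Chutes API.

```python
import time

# Hypothetical sketch of OAuth2-style access/refresh token handling.
# Nothing here is taken from the real Chutes API; refresh_fn stands in
# for whatever call exchanges a refresh token for a new token pair.

class TokenStore:
    def __init__(self, access_token, refresh_token, expires_in, now=time.time):
        self._now = now
        self.refresh_token = refresh_token
        self._set(access_token, expires_in)

    def _set(self, access_token, expires_in):
        self.access_token = access_token
        # Refresh 60s early so in-flight requests don't race expiry.
        self.expires_at = self._now() + expires_in - 60

    def get(self, refresh_fn):
        """Return a valid access token, refreshing via refresh_fn if stale."""
        if self._now() >= self.expires_at:
            new_access, new_refresh, expires_in = refresh_fn(self.refresh_token)
            self.refresh_token = new_refresh
            self._set(new_access, expires_in)
        return self.access_token
```

The point of rotating the refresh token on every exchange is that a leaked old refresh token becomes useless after the next legitimate refresh.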
@taoapp_
Tao.App
3 days
Why is our halving countdown the most accurate & always in sync? tldr: we pull each piece directly from the chain and recalculate every 12 sec. (more details šŸ‘‡šŸ§µ) Accuracy matters when every block counts. See for yourself: https://t.co/rFQqw6XnjD
5
23
99
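The recalculation Tao.App describes, pulling chain state each ~12-second block rather than extrapolating once, can be sketched as below. The 21M cap, the first-halving threshold at half of it, and the per-block emission are illustrative assumptions, not values read from the live chain.

```python
import math

# Sketch of a halving countdown recomputed from fresh chain state.
# BLOCK_TIME_S and HALVING_ISSUANCE are assumptions for illustration:
# 12s blocks, first halving at half of an assumed 21M TAO cap.

BLOCK_TIME_S = 12
HALVING_ISSUANCE = 10_500_000.0

def halving_countdown(total_issuance: float, emission_per_block: float):
    """Blocks and seconds until issuance crosses the halving threshold."""
    remaining_tao = max(HALVING_ISSUANCE - total_issuance, 0.0)
    blocks_left = math.ceil(remaining_tao / emission_per_block)
    return blocks_left, blocks_left * BLOCK_TIME_S
```

Re-running this every block keeps the countdown in sync even when emission per block changes, which a one-time extrapolation would silently get wrong.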
@DistStateAndMe
Distributed State
3 days
@Grayscale
Grayscale
3 days
The $TAO halving approaches.
1
1
22
@DistStateAndMe
Distributed State
3 days
Tools as Types - The future of agentic workflows on @basilic_ai
2
3
12
@DreadBong0
DR$4D BONGO
3 days
Just 1 day to go until the first ever #Bittensor halvening Only 7,558 $TAO left to be emitted before we enter a new phase of scarcity $TAO
23
58
389
@joellidin
Joel Lidin
3 days
@Jsevillamol @tplr_ai Hi Jamie, our 72B run has so far processed 750B tokens and we are targeting ~1T for completion. Plans for the start of the next large scale run are still in development, as the current run wraps up we expect to be doing a series of smaller scale tests in public runs to evaluate
0
2
2
@s3nhxx
s3nh
3 days
Im just diggin the hole on Twitch. which is equivalent of minimizing the loss function somehow. saddle point speedrun. https://t.co/CAZXBLn8j1
twitch.tv
Tekken
0
1
9
@nathanbarrydev
Nathan Barry
4 days
Language diffusion models are faster at writing code (and other structured text). The other day, I had the thought that language diffusion models sort-of have something analogous to speculative decoding built-in by default. In speculative decoding for autoregressive models, we
@nathanbarrydev
Nathan Barry
1 month
Added confidence-aware parallel decoding to my tiny text diffusion model! Before, we had ā€œscheduled iterative refinementā€ for decoding, where it would go through a masking schedule and resample all positions at each iteration. This didn’t take into account token probabilities
5
13
140
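The confidence-aware parallel decoding Nathan describes, committing only the positions the model is most sure about instead of resampling everything on a fixed schedule, can be sketched in one step as below. The `model` callable here is a stand-in returning a (token, probability) proposal per position, not his actual implementation.

```python
MASK = "<mask>"

# One confidence-aware decoding step for a text diffusion model (sketch).
# Instead of resampling every masked position each iteration, commit only
# the k positions with the highest model confidence and leave the rest
# masked for the next pass. `model(tokens, i)` is a hypothetical stand-in
# that returns a (proposed_token, probability) pair for position i.

def decode_step(tokens, model, k):
    proposals = {
        i: model(tokens, i) for i, t in enumerate(tokens) if t == MASK
    }
    # Keep the k masked positions where the proposal probability is highest.
    best = sorted(proposals, key=lambda i: proposals[i][1], reverse=True)[:k]
    out = list(tokens)
    for i in best:
        out[i] = proposals[i][0]
    return out
```

Iterating this until no masks remain gives the parallel, speculative-decoding-like behavior the earlier tweet alludes to: many positions are finalized per forward pass, but low-confidence ones get re-proposed with more context next round.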
@huybery
Binyuan Hui
3 days
Coding agents are moving beyond vibe coding toward real-world, enterprise software engineering. Attention is shifting from frontend demos and isolated bug fixes to refactoring large codebases, improving the efficiency of complex systems, and handling long-horizon tasks. What
60
40
625
@DistStateAndMe
Distributed State
3 days
Banger
@NoahChrein
āˆž-modal
3 days
See you at thesis draft https://t.co/UGhaZSy7IE
0
0
16
@NoahChrein
āˆž-modal
4 days
The only meaningful debate is for the sake of driving towards principled formalizations, never ego. Debate with kindness and grace, as emotions and deep struggles are real, though unseen. Debate knowing the social levers narcissists can pull to gain power over you.
1
1
14
@ridges_ai
Ridges AI | SN62
4 days
This week šŸ‘€
15
18
152
@jeffreyhuber
Jeff Huber
4 days
doing well on AI benchmarks is like having a great one rep max - impressive but wholly unrelated to daily utility
0
2
24
@DistStateAndMe
Distributed State
4 days
This is your scheduled reminder that @tplr_ai's Covenant 72B is the largest, permissionless pre-training run by a wide margin. Thank you for your attention to this matter. https://t.co/aA25GOcEVt
5
7
48
@DistStateAndMe
Distributed State
4 days
Would love to try this on SpareLoco:
arxiv.org
Communication-efficient distributed training algorithms have received considerable interest recently due to their benefits for training Large Language Models (LLMs) in bandwidth-constrained...
@TheTuringPost
Ksenia_TuringPost
4 days
DeepCode – a multi-agent framework that turns research papers into full codebases It manages information flow so large, detailed papers can be converted into production-quality code despite LLM context limits. It does this through: - Blueprint distillation – compressing papers
0
0
4
@NandoDF
Nando de Freitas
5 days
Reflection on @dwarkesh_sp's reflections on his interview with Rich Sutton, and why LLMs are exciting because they are Skinnerian, Popperian and Gregorian creatures. Minds with finite capacity cannot adapt forever without having to forget previous knowledge. This is true of
dwarkesh.com
Watch now (66 mins) | LLMs aren’t Bitter-Lesson-pilled
11
26
175