Gradient

@Gradient_HQ

Followers
824K
Following
245
Media
55
Statuses
202

Building the world’s first fully distributed AI runtime on @solana. Try Parallax: https://t.co/V0Ft85cndH

Joined May 2024
@Gradient_HQ
Gradient
2 days
Intelligence has been locked in walled gardens. Today, we’re opening the gates. Parallax now runs in Hybrid mode, with Macs and GPUs serving large models together in a truly distributed framework.
154
189
951
@Gradient_HQ
Gradient
15 hours
RT @EricY_me: Had a blast speaking at @BerkeleyRDI and @StanfordSBA about our work on decentralized AI. Grateful for the brilliant minds….
0
13
0
@Gradient_HQ
Gradient
2 days
RT @EricY_me: People always ask me, what’s the point of having consumer-level GPUs host those models, or even doing decentralized AI genera….
0
20
0
@Gradient_HQ
Gradient
2 days
RT @yuangaoyg: One model. Every device. Zero silos.
0
19
0
@Gradient_HQ
Gradient
2 days
Parallax shows distributed inference doesn’t need more centralized data centers. It works with the everyday devices we already have. We're building a heterogeneous, peer-to-peer foundation for collective intelligence that anyone can access, contribute to, and build on. Read the…
13
37
146
@Gradient_HQ
Gradient
2 days
Distributed Inference, Now in Hybrid. Try it now:
18
27
131
@Gradient_HQ
Gradient
2 days
Fast. Even in Hybrid.
– 3.1× faster E2E and 5.3× lower latency on 72B vs SOTA.
– 235B performance matches 72B, proving clean scalability.
– Both models served on a hybrid mesh of RTX 5090s + Mac M4 Pros, a first for heterogeneous, distributed inference.
1
4
48
@Gradient_HQ
Gradient
2 days
Heterogeneous Hardware, Unified Power. Parallax integrates everyday Macs and GPUs with:
– Scheduler: hardware-agnostic; allocates model layers and routes requests.
– Executor: runtime for per-device batching & P2P comms, and model runners for GPU (CUDA) and Mac (MLX + Metal).
2
4
51
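The Scheduler/Executor split above can be sketched in a few lines. This is a hypothetical illustration of the hardware-agnostic layer-allocation idea, not Parallax's actual API: the `Device` fields, the proportional-to-memory split policy, and all names are assumptions made for the example.

```python
# Hypothetical sketch of a hardware-agnostic scheduler: split a model's
# layers into contiguous ranges sized proportionally to each device's
# free memory, so CUDA boxes and Macs can serve one model together.
from dataclasses import dataclass

@dataclass
class Device:
    name: str        # e.g. "rtx5090-0" or "mac-m4pro-1" (invented names)
    backend: str     # "cuda" or "mlx"
    free_gb: float   # memory available for model shards

def allocate_layers(devices: list[Device], n_layers: int) -> dict[str, range]:
    """Assign each device a contiguous block of layers, proportional to
    its free memory; the last device absorbs rounding so every layer is
    covered exactly once."""
    total = sum(d.free_gb for d in devices)
    plan, start = {}, 0
    for i, d in enumerate(devices):
        if i == len(devices) - 1:
            count = n_layers - start          # remainder goes to the last device
        else:
            count = round(n_layers * d.free_gb / total)
        plan[d.name] = range(start, start + count)
        start += count
    return plan

devices = [
    Device("rtx5090-0", "cuda", 32.0),
    Device("mac-m4pro-0", "mlx", 24.0),
    Device("mac-m4pro-1", "mlx", 24.0),
]
plan = allocate_layers(devices, 80)  # e.g. an 80-layer 72B-class model
```

Under this toy policy the GPU with more free memory gets a larger contiguous slice, and the per-device Executor would then only ever load and run its own range.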
@Gradient_HQ
Gradient
2 days
The Key to Distributed Systems: Parallelism. Parallax shards large models into pipeline stages, directly mapped to nodes across a p2p network. Hidden states flow between Macs and GPUs without a central coordinator, forming a globally distributed, self-healing inference fabric.
1
5
55
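The pipeline-parallel flow described above, shards mapped to peers, hidden states handed node to node with no central coordinator, can be shown with a toy chain. Everything here is illustrative: each "peer" is a local object and each "layer" a plain function standing in for a transformer block; real Parallax moves tensors over a p2p network.

```python
# Toy pipeline-parallel inference: the model is sharded into stages,
# each stage lives on one peer, and the hidden state is passed
# peer-to-peer until the final stage returns the result.
class Peer:
    """One node in the pipeline: owns a contiguous slice of layers and a
    reference to the next peer (None for the last stage)."""
    def __init__(self, layers, next_peer=None):
        self.layers = layers
        self.next_peer = next_peer

    def run(self, hidden):
        for layer in self.layers:          # execute only this node's shard
            hidden = layer(hidden)
        if self.next_peer is None:         # last stage: emit the output
            return hidden
        return self.next_peer.run(hidden)  # hand the hidden state onward

# A 6-"layer" toy model sharded across three peers, two layers each.
layers = [lambda h, i=i: h * 2 + i for i in range(6)]
last = Peer(layers[4:])
mid = Peer(layers[2:4], last)
first = Peer(layers[:2], mid)

out = first.run(1)  # same answer as running all six layers in sequence
```

The point of the sketch: no stage ever sees the whole model, and the output of the chained shards is identical to running the unsharded model, which is what makes the decomposition transparent to the caller.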
@Gradient_HQ
Gradient
4 days
We’ll close out SBC week with a mixer alongside @StanfordCrypto & @openmind_agi. An evening of good vibes with fellow builders, researchers, and investors shaping the decentralized future. Join here:
lu.ma
Come around and hang with us for a fun evening along with fellow builders and investors attending SBC25. About the Hosts Stanford Blockchain Club is…
14
38
161
@Gradient_HQ
Gradient
4 days
Earlier on 3 Aug, Eric will also join a morning panel on Blockchain × AI at BASS SBC 2025 by @StanfordSBA, diving into how decentralized infra reshapes intelligent applications. Register here:
lu.ma
We are excited to share the upcoming Blockchain Application Stanford Summit on August 3rd at Berkeley during SBC. This event is made possible by the generous…
10
25
160
@Gradient_HQ
Gradient
4 days
Big week ahead at SBC 25. Our cofounder @EricY_me will deliver a keynote at the Summit on Decentralization and AI 2025, hosted by @BerkeleyRDI, sharing how Gradient is scaling inference beyond the cloud.
108
148
751
@Gradient_HQ
Gradient
5 days
Can't wait to see you at SBC 2025! More updates coming.
@StanfordCrypto
Stanford Blockchain Club
5 days
Rolling out @StanfordCrypto X @openmind_agi X @Gradient_HQ SBC Mixer. Time: Wednesday, Aug 6, 3:00 PM - 6:00 PM PDT. Location: Edge & Node House of Web3, Building 103, 103 Montgomery St, San Francisco, CA 94129. Come hang with us, RSVP with luma below!
126
146
965
@Gradient_HQ
Gradient
10 days
We love your excitement for our products, don't let scammers exploit it. Always:
✅ Check URLs character by character.
✅ Verify the full address of email senders.
✅ Follow our official X for trusted updates.
Your passion for distributed intelligence matters. Protect it.
231
277
2K
@Gradient_HQ
Gradient
14 days
The secret behind Parallax’s performance lies in key server-grade optimizations:
– Continuous batching: dynamically groups requests to maximize hardware utilization and throughput.
– Paged KV-Cache: block-based design prevents memory fragmentation, handles thousands of…
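The block-based KV-cache idea named in that tweet can be sketched as a tiny allocator. This is a minimal illustration of the general "paged" cache technique (as popularized by vLLM-style serving), not Parallax internals; the block size, free-list policy, and class name are all assumptions.

```python
# Minimal paged KV-cache sketch: cache memory is carved into fixed-size
# blocks, sequences grab blocks on demand as they grow, and finished
# sequences return whole blocks to the pool, so free memory never
# fragments into unusable gaps between variable-length sequences.
class PagedKVCache:
    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # free-list of physical block ids
        self.tables = {}                      # seq_id -> list of block ids
        self.lengths = {}                     # seq_id -> tokens stored

    def append_token(self, seq_id: str):
        """Reserve room for one more token; a new block is taken only
        when the sequence's last block is full (or it has none yet)."""
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:
            if not self.free:
                raise MemoryError("KV cache exhausted")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id: str):
        """Sequence finished: its blocks go straight back to the pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=8, block_size=4)
for _ in range(6):
    cache.append_token("a")   # 6 tokens -> 2 blocks of 4
cache.append_token("b")       # 1 token  -> 1 block
cache.release("a")            # both of "a"'s blocks are reusable at once
```

Because allocation happens in whole blocks rather than contiguous per-sequence buffers, admitting a new request never requires compacting memory, which is what lets continuous batching keep thousands of sequences in flight.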
@Gradient_HQ
Gradient
2 months
Compared to Petals (BitTorrent-style serving), Parallax running Qwen2.5-72B on 2× RTX 5090s achieved:
– 3.1× lower end-to-end latency, 5.3× faster inter-token latency.
– 2.9× faster time-to-first-token, 3.1× higher I/O throughput.
Results were consistent and showed great…
199
192
1K
@Gradient_HQ
Gradient
17 days
RT @Gradient_HQ: Ever wondered how our chatbot replies in seconds without a central server? It runs on Parallax’s Swarm: a fully decentral….
0
355
0
@Gradient_HQ
Gradient
17 days
RT @CipherResearchx: Gradient recently launched 2 game-changing technologies for decentralized AI: • Parallax - distributed inference engi….
0
48
0
@Gradient_HQ
Gradient
20 days
RT @Gradient_HQ: If you haven’t tried it.
0
252
0
@Gradient_HQ
Gradient
21 days
Seeding.
343
346
2K
@Gradient_HQ
Gradient
25 days
Ever wondered how our chatbot replies in seconds without a central server? It runs on Parallax’s Swarm: a fully decentralized mesh where your prompt is tokenized, segmented, and routed across nodes holding model shards. Each node executes its assigned layers of the LLM, passing…
272
355
2K
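The Swarm flow in that last tweet, tokenize, route across shard-holding nodes, decode, can be traced end to end with a toy. Everything here is invented for illustration: the two-word vocabulary, the stand-in shard functions, and the `run_swarm` name have nothing to do with Parallax's real implementation.

```python
# Toy walk-through of the described request lifecycle: a prompt is
# tokenized, the token sequence hops through the nodes that hold each
# model segment in order, and the last node's output is decoded back
# to text, with no central server in the middle.
VOCAB = {"hello": 0, "swarm": 1, "<next>": 2}
INV = {v: k for k, v in VOCAB.items()}

def tokenize(text: str) -> list[int]:
    return [VOCAB[w] for w in text.split()]

# Each swarm node holds one segment of the model; here a segment is a
# plain function on token ids standing in for real transformer layers.
swarm = [
    lambda toks: [t + 1 for t in toks],     # node A: first block of layers
    lambda toks: toks + [VOCAB["<next>"]],  # node B: last block, appends a "prediction"
]

def run_swarm(prompt: str) -> str:
    toks = tokenize(prompt)
    for node in swarm:      # state hops node to node, no coordinator
        toks = node(toks)
    return " ".join(INV[t] for t in toks)

result = run_swarm("hello swarm")
```

The structure mirrors the tweet: the only global knowledge is the ordered routing plan; each node sees nothing but its input tokens and its own segment.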