
Jacob Zhao (@0xjacobzhao)
Crypto x AI | ex-@ArweaveSCP @Mirana @OKX_Ventures @Indodax | ENTJ/INTJ
decentraland · Joined September 2020
Followers: 2K · Following: 283 · Media: 61 · Statuses: 473
Touched down in Cannes for #ETHCC 🇫🇷. Sunshine, sea breeze, and smart contracts. Here to explore what's next in AI × Crypto; let's connect 🤝
9️⃣ Catch up on the full series:
🧠 Day 1: Paradigms → 🧩 Day 2: Task Fit → 🧪 Day 3: Prime Intellect → 🧬 Day 4: Pluralis → ⚙️ Day 5: Gensyn → 🌌 Day 6: …
🧵 The Holy Grail of Crypto AI (Day 5/10)
@gensynai: Verifiable Execution for Decentralized AI Training
Forget just open models: Gensyn is building the execution layer for training-as-mining, where compute is permissionless, results are verifiable, and incentives are …
8️⃣ Philosophy: It's Not Just Infrastructure
Decentralized training isn't just technical. It's a value statement:
• On open access
• On verifiability
• On economic inclusion
• On censorship resistance
It's the base layer for globally coordinated AI. One day we'll look back and …
7️⃣ Incentive Mechanism & Value Mapping
💸 @PrimeIntellect
• TOPLOC = verifiable behavior
• Slashing + reward pool structure
🪙 @gensynai
• Submitter / Solver / Verifier / Whistleblower roles
• PoL verification game
🧩 @PluralisHQ
• Partial ownership of model weights → …
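The Submitter / Solver / Verifier / Whistleblower split above is essentially an optimistic verification game: a Solver stakes collateral on a claimed result, and anyone who re-executes the task can trigger slashing on a mismatch. A toy Python sketch of that dispute logic (role names from the tweet; the task, stake, and reward values are illustrative, not Gensyn's actual protocol):

```python
# Toy optimistic-verification flow: a Solver stakes on a claimed result,
# a Verifier re-executes, and a wrong claim forfeits the stake.

def run_task(x):
    # stands in for the real training workload
    return x * x

def settle(task_input, claimed_result, stake, reward=1):
    recomputed = run_task(task_input)      # Verifier re-executes the task
    if claimed_result == recomputed:
        return stake + reward              # honest Solver: stake back + reward
    return 0                               # mismatch reported: stake slashed

honest = settle(7, 49, stake=10)   # correct claim
cheat = settle(7, 50, stake=10)    # wrong claim, caught by re-execution
```

The economic point is that re-execution only has to happen when someone disputes, so honest work settles cheaply while cheating risks the whole stake.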
6️⃣ Personalized Inference & Aggregation
Who's building APIs & interfaces to interact with the models?
✅ @NousResearch:
• Forge: multi-agent reasoning via MCTS + MoA
• TEE_HEE: agents with cryptographic selfhood
✅ @bagelopenai:
• zkLoRA deployment layer
✅ @JoinPond:
• …
5️⃣ Fine-Tuning & Adaptation: Downstream Efficiency
This is where newer projects shine 👇
⚙️ @flock_io
• zkFL for private, verifiable LoRA-based fine-tuning
• Built-in incentive loop
🔒 @bagelopenai
• zkLoRA: ZK proof of model origin post-LoRA fine-tuning
• Doesn't do …
4️⃣ Communication & Collaboration Optimization
Where things get complex.
🔌 @PrimeIntellect
• SHARDCAST for async weight merging
• PCCL replaces NCCL for sparse topologies
🌐 @NousResearch
• DisTrO: gradient compression (DCT + 1-bit), async + overlapped training
• Works …
1
0
0
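The "1-bit" part of DisTrO-style gradient compression can be sketched in a few lines: send only the sign of each gradient element plus one scale, and keep the quantization error locally as a residual for the next step (error feedback). This is a minimal numpy sketch of that one idea; DisTrO's actual scheme also applies a DCT and overlaps communication with compute:

```python
import numpy as np

def one_bit_compress(grad, residual):
    # error feedback: fold in the quantization error from the previous step
    corrected = grad + residual
    scale = np.mean(np.abs(corrected))     # one float per tensor
    compressed = scale * np.sign(corrected)  # what actually crosses the network
    new_residual = corrected - compressed    # error kept locally for next step
    return compressed, new_residual

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
residual = np.zeros_like(g)
c, residual = one_bit_compress(g, residual)
# each worker now transmits 1 bit per element plus a single scale value
```

Because the residual carries the rounding error forward, the compressed updates stay unbiased over many steps even though each individual step is very lossy.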
3️⃣ Model Pretraining: Foundation Models in Open Networks
✅ @PrimeIntellect:
• Built INTELLECT-2, an RL-trained model via PRIME-RL on 100+ nodes
• Supports asynchronous scheduling + trustless training loops
✅ @PluralisHQ:
• SWARM pipeline-parallel training over the internet
• …
2️⃣ Data Discovery & Collection
The journey begins with data, yet most projects skip this stage. The exception?
🧠 @NousResearch
• Builds behavioral simulators like WorldSim, Gods & S8n
• Curates multi-modal data for cognitive AI
• Treats data as value formation, not just …
10/ 📚 Catch up on the full series:
🧠 Day 1: Paradigms of AI Training 🧩 Day 2: What Tasks Fit Decentralized Training? 🧪 Day 3: Prime Intellect 🧬 Day 4: Pluralis ⚙️ Day 5: Gensyn
🧵 The Holy Grail of Crypto AI (Day 4/10)
@PluralisHQ: Asynchronous Model Parallelism Meets Structured Compression
A pioneering research-driven team tackling the hardest problems in decentralized training. Let's dive in 👇
1️⃣ What is Pluralis?
A Web3-native AI protocol …
9/ In short:
• LoRA + DPO = efficient, composable fine-tuning
• RL = adaptive intelligence for on-chain agents
• @bagelopenai / @PondGNN / @RPS_AI = real-world use cases for modular AI fine-tuning
Fine-tuning is the bridge from model to deployment.
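The "LoRA + DPO" pairing above is attractive partly because DPO needs no reward model: it reduces to a logistic loss on log-probability margins between the policy and a frozen reference model. A small numpy sketch of that loss (the log-probability values below are made up for illustration):

```python
import numpy as np

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    # margin: how much more the policy prefers the chosen answer
    # than the frozen reference model does, scaled by beta
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))  # -log sigmoid(margin)

# policy already leans toward the chosen answer more than the reference does,
# so the loss is below the chance level of log(2)
loss = dpo_loss(pi_chosen=-3.0, pi_rejected=-9.0,
                ref_chosen=-5.0, ref_rejected=-5.0)
```

Since only the LoRA adapter is trained against this loss, the preference signal stays composable with the frozen base model, which is the "modular fine-tuning" framing the thread uses.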
8/ 🔁 @PrimeIntellectAI and @gensynai are pushing RL upstream, using it for pretraining, not just post-training. Their async RL frameworks (PRIME-RL & RL Swarm) blur the line between model evolution and decentralized consensus. Training-as-governance, anyone?
7/ 🌱 Looking ahead: Reinforcement Learning will become the next-gen fine-tuning standard. Instead of static datasets, RL adapts via continuous feedback, making it a natural fit for dynamic agents and on-chain coordination. But scaling RL is hard, and few teams are tackling it head-on 👇
6/ 💧 @RPS_AI Labs: AI Liquidity Engine on Solana
RPS applies decentralized AI directly to DeFi liquidity. Its AI models fine-tune market strategies in real time, powering:
• UltraLiquid: an AI market maker
• UltraLP: an LP optimization tool
AI meets capital efficiency.
5/ 🌐 @PondGNN: Fine-Tuning for Graph Neural Networks (GNNs)
Pond is GNN-native, built for structured data like knowledge graphs and on-chain activity. Users can upload graph data, run LoRA-like fine-tuning, and spawn agents that evolve over time. 💡 GNN + modular agents = new …
4/ 🧩 @bagelopenai: Verifiable Fine-Tuning with zkLoRA
Bagel integrates ZK proofs with LoRA to verify fine-tuning integrity on-chain without exposing weights or data. Think: "Prove this LoRA was applied to LLaMA-3, without showing the model." 🔐 zkLoRA = outcome verifiability + …
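The LoRA trick that zkLoRA makes proofs about is small enough to show directly: freeze the pretrained weight W and train only a low-rank delta A·B, so the adapter is a compact object that can be shared, composed, or committed to separately from the base model. A minimal numpy sketch (shapes, init scale, and the forward function are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 512, 8                        # hidden size and LoRA rank, r << d

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                 # trainable up-projection, zero-initialized

def forward(x):
    # LoRA forward pass: W is never updated, only the A @ B delta is trained
    return x @ (W + A @ B)

x = rng.normal(size=(1, d))
# with B = 0 the delta is zero, so the adapted model matches the base model
assert np.allclose(forward(x), x @ W)
```

Because the trainable state is just A and B (2·d·r numbers instead of d²), a proof system only has to bind a claim to this small delta rather than to the full weight matrix.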