templar

@tplr_ai

Followers 2K · Following 445 · Media 20 · Statuses 344

incenτivised inτerneτ-wide τraining

subnet 3
Joined March 2025
templar @tplr_ai · 4 months
@const_reborn on why tplr.
16 · 35 · 141
templar @tplr_ai · 9 hours
RT @JosephJacks_: Holy … Bittensor.
0 · 13 · 0
templar @tplr_ai · 1 day
RT @too_da_moon_: @tplr_ai Templar miners be like 😁
0 · 2 · 0
templar @tplr_ai · 1 day
Soon to be Templar Miner.
563 blocmates @563defi · 1 day
1 · 1 · 14
templar @tplr_ai · 1 day
Templar Miner.
hoogee @a_choji · 1 day: @tplr_ai 🥹.
0 · 2 · 11
templar @tplr_ai · 1 day
Templar Miner.
R.L_isn0w @racim_isn0w · 1 day
0 · 2 · 15
templar @tplr_ai · 1 day
2 days into #CCLoco, and miners are already converging 7.5 times faster! "We are yet to see a task well-incentivised miners could not achieve." We plan to push them much harder. #Accelerate
[image]
10 · 19 · 113
templar @tplr_ai · 3 days
RT @const_reborn: I mean, you need to come up with an abstract incentive layer and reinvent all of torch distributed to work on block time…
0 · 2 · 0
templar @tplr_ai · 3 days
RT @const_reborn: Blockchains and decentralized training work so well together
[image]
0 · 20 · 0
templar @tplr_ai · 3 days
Blogpost:
templar @tplr_ai · 3 days
Today, Templar enters a new era with CCLoco and the launch of Templar Protocol v1.0.0. Our journey began with Gauntlet, pioneering incentive design for permissionless AI training.
1 · 5 · 27
templar @tplr_ai · 3 days
Anyone in the world is free to join now: no queues, no waitlists, only incentives. Join us:
0 · 0 · 0
templar @tplr_ai · 3 days
The Templar Protocol v1.0.0 represents the convergence of our work on incentives, permissionless systems, and communication efficiency.
0 · 0 · 1
templar @tplr_ai · 3 days
This isn't just research: CCLoco powers TEMPLAR II right now, proving that decentralized AI training is both economically viable and technically competitive.
2 · 0 · 2
templar @tplr_ai · 3 days
The results are transformative. Nodes can now process 15x more data between synchronizations, and contributors with standard internet connections can participate effectively.
1 · 0 · 0
templar @tplr_ai · 3 days
Second, we chunk weight tensors into 64x64 blocks before compression, improving efficiency and reducing scaling issues. Third, we discovered that DCT transforms actually harm performance in multi-iteration settings, a counterintuitive finding that unlocked major improvements.
1 · 0 · 0
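
The 64x64 chunking is concrete enough to sketch. Below is a minimal PyTorch illustration of chunked top-k compression, assuming the tweet's chunk size of 64; the top-k selection rule, the value of k, and every function name here are illustrative guesses, not Templar's actual code:

    import torch
    import torch.nn.functional as F

    def chunk_topk(tensor, chunk=64, k=32):
        # Pad a 2-D tensor so both dims divide evenly into chunk x chunk blocks.
        rows, cols = tensor.shape
        t = F.pad(tensor, (0, (-cols) % chunk, 0, (-rows) % chunk))
        # Carve into one row per 64x64 block: shape (num_blocks, chunk*chunk).
        blocks = (t.reshape(t.shape[0] // chunk, chunk, t.shape[1] // chunk, chunk)
                   .permute(0, 2, 1, 3)
                   .reshape(-1, chunk * chunk))
        # Keep only the k largest-magnitude entries per block; transmit
        # (indices, values) per block instead of the dense tensor.
        _, idx = blocks.abs().topk(k, dim=1)
        return idx, blocks.gather(1, idx)

With k=32, each 4096-entry block transmits 32 values plus their indices, roughly two orders of magnitude less than dense, and working block-by-block keeps the cost flat as tensors grow.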
templar @tplr_ai · 3 days
Three key innovations made this possible. First, we removed DiLoCo's outer momentum, which conflicted with compression error feedback.
0 · 1 · 10
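
The mechanism the tweet says momentum conflicted with is easiest to see in code. Here is a hedged sketch of a generic (textbook) error-feedback loop; the class and argument names are ours, and this is not necessarily Templar's exact update rule:

    import torch

    class ErrorFeedback:
        # Whatever the lossy compressor drops in one round is added back
        # before compressing the next round, so error is delayed rather
        # than lost. Per the thread, an outer momentum buffer conflicted
        # with this residual, so none is kept here.
        def __init__(self, compress_fn):
            self.compress_fn = compress_fn  # lossy, same-shape in/out
            self.residual = None

        def step(self, delta):
            if self.residual is None:
                self.residual = torch.zeros_like(delta)
            corrected = delta + self.residual    # re-inject dropped error
            sent = self.compress_fn(corrected)   # what goes over the wire
            self.residual = corrected - sent     # carry the loss forward
            return sent

Any dense-in, dense-out lossy compressor fits the compress_fn slot, for example a masked version of the chunked top-k sketched above.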
templar @tplr_ai · 3 days
By combining local multi-iteration training with advanced gradient compression, we've achieved a 50x reduction in communication overhead: CCLoco needs just 2.8 GB while maintaining model quality.
2 · 2 · 11
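
To make "local multi-iteration training" concrete, here is a DiLoCo-style outer round in PyTorch: H local optimizer steps, then a single compressed exchange of the weight delta. H, the exchange function, and all names are illustrative assumptions, not Templar's implementation:

    import torch

    def outer_round(model, inner_opt, batches, loss_fn, H, exchange):
        # Snapshot the weights at the start of the round.
        anchor = {k: v.detach().clone() for k, v in model.state_dict().items()}
        for _ in range(H):                      # local multi-iteration phase
            x, y = next(batches)
            inner_opt.zero_grad()
            loss_fn(model(x), y).backward()
            inner_opt.step()
        current = model.state_dict()
        # Pseudo-gradient: the node's net weight movement this round.
        delta = {k: current[k] - anchor[k] for k in anchor}
        # `exchange` (hypothetical) compresses the delta, shares it with
        # peers, and returns the averaged update; networking is omitted.
        avg = exchange(delta)
        model.load_state_dict({k: anchor[k] + avg[k] for k in anchor})

Syncing once per round instead of once per step is one half of the saving; compressing the exchanged delta is the other.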
templar @tplr_ai · 3 days
Our previous approach with DeMo compression took too long between synchronizations, limiting scalability. Other methods like DiLoCo still sent 114.3 GB per training run, impractical for global collaboration. CCLoco (Chunk Compressed DiLoCo) is our answer.
2 · 1 · 9
templar @tplr_ai · 3 days
The challenge was clear: training LLMs across the internet required exchanging massive gradient updates between nodes. Even with our permissionless infrastructure, communication costs were the bottleneck limiting true decentralization.
1 · 0 · 8
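
For a sense of the scale involved, a back-of-the-envelope in Python; every number here is an assumption for illustration, not a figure from the Templar run:

    # 1B fp32 parameters -> ~4 GB per dense gradient exchange.
    params = 1_000_000_000
    gb = params * 4 / 1e9
    uplink_mbps = 50                                 # assumed home uplink
    seconds = gb * 8_000 / uplink_mbps               # GB -> megabits / Mbps
    print(f"{gb:.1f} GB, ~{seconds / 60:.0f} min per exchange")  # ~11 min

At that rate a node spends most of its time uploading rather than computing, which is why shrinking what crosses the wire is the binding constraint.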
templar @tplr_ai · 3 days
Now, with CCLoco, we've conquered the final frontier: communication efficiency. CCLoco is already powering TEMPLAR II, our latest production training run.
1 · 0 · 7