TensorBlock (@tensorblock_aoi)

Followers: 2K · Following: 305 · Media: 23 · Statuses: 39

Making AI accessible and democratic for all. https://t.co/5CODh8MUDk

Cupertino, CA · Joined July 2024

TensorBlock (@tensorblock_aoi) · 16 hours
[image]
0 replies · 0 reposts · 0 likes

TensorBlock (@tensorblock_aoi) · 16 hours
[image]
1 reply · 0 reposts · 0 likes

TensorBlock (@tensorblock_aoi) · 16 hours
@ProductHunt We received tons of valuable feedback and suggestions from our users. Huge thanks to everyone who shared their thoughts and helped us improve!
[image]
1 reply · 0 reposts · 0 likes

TensorBlock (@tensorblock_aoi) · 16 hours
🚀 Huge milestone! Forge by TensorBlock just launched on @ProductHunt and ranked as the #1 Product of the Day, also featured as Best Product of the Day. 10K+ users, 1B+ daily tokens served, 500K+ daily requests. Thanks to everyone who supported us. We're just getting started.
[image]
1 reply · 2 reposts · 10 likes

TensorBlock (@tensorblock_aoi) · 2 days
🚨 TensorBlock Forge is now live on Product Hunt! One API, all your AI models. OpenAI-compatible, privacy-first, and prod-ready. Help us hit the front page by leaving a comment and an upvote!
[4 images]
4 replies · 7 reposts · 33 likes
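
Since Forge advertises an OpenAI-compatible API, existing OpenAI SDK code should only need its base URL swapped. A minimal sketch, assuming a hypothetical Forge endpoint, API-key environment variable, and model id (none of these values appear in the post):

```python
# Minimal sketch: calling TensorBlock Forge through the OpenAI Python SDK.
# The base_url, environment variable name, and model id are assumptions for
# illustration; check Forge's documentation for the real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://forge.tensorblock.co/v1",  # hypothetical endpoint
    api_key=os.environ["FORGE_API_KEY"],         # hypothetical env var
)

resp = client.chat.completions.create(
    model="deepseek-r1",                         # hypothetical model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```

Because the request shape matches the OpenAI Chat Completions API, switching providers is just a matter of changing base_url and model.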

TensorBlock (@tensorblock_aoi) · 27 days
Thank you @AMD for an incredible event and all the exciting efforts behind it. It was a pleasure to share our insights on open-source AI and the emerging era of AI agents during the workshop. We've always believed in AMD's strengths, especially in the inference space.
[3 images]
1 reply · 0 reposts · 19 likes

TensorBlock (@tensorblock_aoi) · 1 month
🚀 Excited to be a launch partner for Nautilus, powering verifiable AI inference with tamper-proof TEEs. Through our Proof of Cache protocol, every AI response on TensorBlock carries cryptographic provenance: trust built into the compute. More to come. 🔗 @tensorblock_aoi
Quoting @SuiNetwork (Sui) · 1 month:
🧠 TensorBlock (@tensorblock_aoi) is integrating Nautilus to power verifiable AI computing tasks in its AI agent infrastructure. By running queries inside tamper-proof TEEs, and using them to secure their Proof of Cache protocol, they will ensure each AI response has…
2 replies · 2 reposts · 14 likes

TensorBlock (@tensorblock_aoi) · 2 months
[image]
1 reply · 1 repost · 10 likes

TensorBlock (@tensorblock_aoi) · 3 months
We're proud to announce that TensorBlock is now officially part of the #NVIDIA Inception Program! This partnership strengthens our mission to build the future of democratic and composable intelligence. #NVIDIAInception #AI #Startup #LLM #TensorBlock
[image]
2 replies · 2 reposts · 24 likes

TensorBlock (@tensorblock_aoi) · 3 months
We're tackling a key challenge in decentralized AI: efficient inference verification. Current methods are costly and redundant. Our solution: Proof of Cache - lightweight, deterministic, and scalable. Check out how TensorBlock is making decentralized AI practical.
[3 images]
0 replies · 4 reposts · 18 likes
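
The thread doesn't describe the protocol's internals, so the following is only an illustrative sketch of the general idea behind deterministic, cache-based verification: commit to a hash over the request, its deterministic sampling parameters, and the cached output, so a verifier can spot-check a response without redundantly re-running inference. Field names and the hashing scheme are assumptions, not TensorBlock's actual design.

```python
# Illustrative sketch only: a deterministic commitment over a cached inference
# result, so a verifier can check a provider's claimed output without re-running
# the model. This is NOT TensorBlock's actual Proof of Cache protocol; the field
# names and hashing scheme are assumptions for illustration.
import hashlib
import json


def cache_commitment(prompt: str, params: dict, output: str) -> str:
    """Hash the request, deterministic sampling params, and cached output."""
    payload = json.dumps(
        {"prompt": prompt, "params": params, "output": output},
        sort_keys=True,              # canonical ordering keeps the hash deterministic
        separators=(",", ":"),
    ).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


# Provider publishes the commitment alongside its response...
params = {"temperature": 0.0, "seed": 42, "model": "example-7b"}
commit = cache_commitment("What is 2+2?", params, "4")

# ...and a verifier holding the same cached tuple can check it cheaply.
assert commit == cache_commitment("What is 2+2?", params, "4")
print("commitment verified:", commit[:16], "...")
```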

TensorBlock (@tensorblock_aoi) · 3 months
Hello April. 🚧 Something exciting is brewing. We're unlocking the next chapter. Not just one drop, a wave. Stay tuned. It's almost time to mine thyself.
5 replies · 2 reposts · 15 likes

TensorBlock (@tensorblock_aoi) · 4 months
QwQ-32B (@Alibaba_Qwen) model sharding between an M1 MacBook Pro (16GB) and an RTX 4060 Ti, enabling efficient inference through model quantization and cross-device parallel computing. Demonstrating production-ready performance on consumer hardware. Technical benchmarks coming soon.
13 replies · 19 reposts · 167 likes
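
A rough sense of why sharding is needed at all: the weights alone of a 32B model at common GGUF quantization levels exceed a single 16 GB device but fit across the pair. The sketch below uses approximate bits-per-weight figures and assumes the 16 GB variant of the RTX 4060 Ti; none of these numbers come from the post.

```python
# Back-of-the-envelope sketch: why a quantized 32B model gets sharded across
# the two devices mentioned in the post. Bits-per-weight figures are rough
# assumptions for common GGUF quant levels, not measurements from TensorBlock.
PARAMS = 32.8e9                                  # QwQ-32B parameter count (approx.)
DEVICE_MEM_GB = {"M1 MacBook Pro": 16,           # unified memory
                 "RTX 4060 Ti": 16}              # assumed 16 GB variant

for name, bits in [("FP16", 16), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
    weights_gb = PARAMS * bits / 8 / 1e9
    fits_single = weights_gb < min(DEVICE_MEM_GB.values())   # ignores KV cache, OS overhead
    fits_sharded = weights_gb < sum(DEVICE_MEM_GB.values())
    print(f"{name:7s} ~{weights_gb:5.1f} GB  single device: {fits_single}  "
          f"sharded across both: {fits_sharded}")
```

At roughly 4.8 bits per weight the model is about 20 GB, too large for either device alone but small enough once split across both, which is where cross-device parallelism comes in.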

TensorBlock (@tensorblock_aoi) · 5 months
Successfully tested running the full-size Deepseek 70B model on three 8x @nvidia RTX 3080 rigs, achieving 25 tokens/s through 3-way pipeline and 8-way tensor parallelism optimization. Each rig is equipped with 8x 10GB consumer GPUs (typical crypto mining rig configuration)…
2 replies · 4 reposts · 75 likes
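
The post doesn't name the serving stack. One common way to combine 8-way tensor parallelism inside each rig with 3-way pipeline parallelism across rigs is vLLM on a Ray cluster; the sketch below assumes that setup, and the model id and settings are illustrative rather than TensorBlock's actual configuration.

```python
# Sketch: 8-way tensor parallel x 3-way pipeline parallel serving with vLLM.
# Assumes the three rigs are already joined into one Ray cluster; the serving
# stack and model id are assumptions, not details from the post.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # illustrative 70B model id
    tensor_parallel_size=8,        # split each layer across the 8 GPUs in a rig
    pipeline_parallel_size=3,      # split the layer stack across the 3 rigs
    distributed_executor_backend="ray",
    gpu_memory_utilization=0.90,
)

outputs = llm.generate(
    ["Explain tensor vs. pipeline parallelism in one sentence."],
    SamplingParams(max_tokens=64, temperature=0.7),
)
print(outputs[0].outputs[0].text)
```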

TensorBlock (@tensorblock_aoi) · 5 months
RT @deanwang_: Just completed generating GGUF quantized versions of R1 1776 - an uncensored variant of DeepSeek R1 from @perplexity_ai @Ara…
0 replies · 3 reposts · 0 likes
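
GGUF quantizations like these are typically run with llama.cpp or its Python bindings. A minimal sketch assuming llama-cpp-python and a locally downloaded file; the filename is a placeholder, not an actual release artifact.

```python
# Sketch: loading a GGUF quantization with llama-cpp-python.
# The model_path is a placeholder; point it at whichever GGUF file you
# downloaded (e.g. a Q4_K_M quant of R1 1776).
from llama_cpp import Llama

llm = Llama(
    model_path="./r1-1776-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,           # context window
    n_gpu_layers=-1,      # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```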

TensorBlock (@tensorblock_aoi) · 5 months
For more detailed discussion and setup configurations, check out our Reddit post here:
0 replies · 3 reposts · 8 likes

TensorBlock (@tensorblock_aoi) · 5 months
Successfully deployed Deepseek R1 Distilled 70B (AWQ) across 8x @nvidia RTX 3080 10G GPUs, achieving 60 tokens/s with full tensor parallelism via PCIe. Total hardware cost: $6,400. This demonstrates that consumer GPUs can deliver substantial ML inference capabilities at a…
8 replies · 16 reposts · 148 likes
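
Again assuming vLLM as the (unnamed) inference engine, an AWQ checkpoint split across the eight cards in one node would look roughly like this; the model repo id is a placeholder and the settings are illustrative.

```python
# Sketch: serving an AWQ-quantized 70B distill with 8-way tensor parallelism
# on a single node. The framework choice and repo id are assumptions; the post
# does not say which inference engine was used.
from vllm import LLM, SamplingParams

llm = LLM(
    model="your-org/DeepSeek-R1-Distill-Llama-70B-AWQ",  # placeholder AWQ repo
    quantization="awq",
    tensor_parallel_size=8,          # one shard per RTX 3080
    max_model_len=8192,              # keep KV cache within the 10 GB cards
)

print(llm.generate(["Hello!"], SamplingParams(max_tokens=32))[0].outputs[0].text)
```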

TensorBlock (@tensorblock_aoi) · 5 months
Running Deepseek-R1 671B locally on a $6000 CPU server: FP8 achieves 1.91 tokens/s, with potential 5.01 tokens/s on DDR5. Inspired by @carrigmat's work, we explored running the Deepseek-R1 671B model on a CPU server:
- CPU: AMD EPYC 7543
- RAM: 16 × 64GB Hynix PC4-25600 3200MHz
4 replies · 27 reposts · 144 likes
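
CPU decoding of a large MoE model like DeepSeek-R1 is largely memory-bandwidth bound, which is why DDR5 is the lever the post points to. A back-of-the-envelope sketch of the bandwidth ceiling, using approximate channel counts and an assumed ~37B active parameters per token; the post's measured 1.91 tok/s sits well under these upper bounds.

```python
# Back-of-the-envelope: CPU decode speed for DeepSeek-R1 is roughly bounded by
# memory bandwidth divided by the bytes of weights touched per token. Figures
# below are rough assumptions (channel counts, ~37B active MoE parameters),
# not measurements; real throughput lands well under these ceilings.
ACTIVE_PARAMS = 37e9          # DeepSeek-R1 active parameters per token (MoE)
BYTES_PER_PARAM = 1.0         # FP8 weights

configs = {
    "DDR4-3200, 8 channels (EPYC 7543)": 8 * 25.6,     # GB/s
    "DDR5-4800, 12 channels (newer EPYC)": 12 * 38.4,  # GB/s
}

for name, bw_gbs in configs.items():
    ceiling = bw_gbs / (ACTIVE_PARAMS * BYTES_PER_PARAM / 1e9)
    print(f"{name}: ~{bw_gbs:.0f} GB/s -> at most ~{ceiling:.1f} tokens/s")
```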