
TensorBlock
@tensorblock_aoi
Followers: 2K · Following: 305 · Media: 23 · Statuses: 39
Making AI accessible and democratic for all. https://t.co/5CODh8MUDk
Cupertino, CA
Joined July 2024
@ProductHunt We received tons of valuable feedback and suggestions from our users. Huge thanks to everyone who shared their thoughts and helped us improve!
1
0
0
Huge milestone! Forge by TensorBlock just launched on @ProductHunt and ranked as the #1 Product of the Day, also featured as Best Product of the Day. 10K+ users. 1B+ daily tokens served. 500K+ daily requests. Thanks to everyone who supported us. We're just getting started.
1
2
10
Thank you @AMD for an incredible event and all the exciting efforts behind it. It was a pleasure to share our insights on open-source AI and the emerging era of AI agents during the workshop. We've always believed in AMD's strengths, especially in the inference space.
1
0
19
Excited to be a launch partner for Nautilus, powering verifiable AI inference with tamper-proof TEEs. Through our Proof of Cache protocol, every AI response on TensorBlock carries cryptographic provenance: trust built into the compute. More to come.
TensorBlock @tensorblock_aoi is integrating Nautilus to power verifiable AI computing tasks in its AI agent infrastructure. By running queries inside tamper-proof TEEs, and using them to secure their Proof of Cache protocol, they will ensure each AI response has…
2
2
14
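Neither tweet spells out how Proof of Cache works, but the general pattern of TEE-backed provenance is easy to sketch: a key held inside the enclave signs a digest of each (prompt, response) pair, and clients verify that signature against the enclave's attested public key. The Python below is a generic illustration of that pattern only; the function names are made up for this sketch, it relies on the third-party cryptography package, and it is not TensorBlock's or Nautilus's actual protocol.

```python
# Generic illustration of TEE-backed response provenance (NOT the actual
# Proof of Cache protocol, which is not public): a key held inside the
# enclave signs a digest of (model, prompt, response) so a client can check
# the response came from the attested enclave. Uses the "cryptography" package.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real deployment this key would be generated and sealed inside the TEE
# and bound to a remote-attestation report; here it is just an in-process key.
enclave_key = Ed25519PrivateKey.generate()
enclave_pubkey = enclave_key.public_key()


def sign_response(model: str, prompt: str, response: str) -> dict:
    """Return the response together with a signed digest (the 'provenance')."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "response": response}, sort_keys=True
    ).encode()
    digest = hashlib.sha256(payload).digest()
    return {
        "response": response,
        "digest": digest.hex(),
        "signature": enclave_key.sign(digest).hex(),
    }


def verify_response(model: str, prompt: str, record: dict) -> bool:
    """Client-side check that the record matches the prompt and the enclave key."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "response": record["response"]}, sort_keys=True
    ).encode()
    digest = hashlib.sha256(payload).digest()
    if digest.hex() != record["digest"]:
        return False
    try:
        enclave_pubkey.verify(bytes.fromhex(record["signature"]), digest)
        return True
    except Exception:
        return False


record = sign_response("qwq-32b", "What is a TEE?", "A trusted execution environment ...")
assert verify_response("qwq-32b", "What is a TEE?", record)
```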
We're proud to announce that TensorBlock is now officially part of the #NVIDIA Inception Program! This partnership strengthens our mission to build the future of democratic and composable intelligence. #NVIDIAInception #AI #Startup #LLM #TensorBlock
2
2
24
QwQ-32B (@Alibaba_Qwen) model sharding between an M1 MacBook Pro (16GB) and an RTX 4060 Ti, enabling efficient inference through model quantization and cross-device parallel computing. Demonstrating production-ready performance on consumer hardware. Technical benchmarks coming soon.
13
19
167
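The tweet does not name the runtime that performs the cross-device split, but the underlying idea of sharding by memory budget is simple to illustrate: give each device a contiguous slice of the transformer layers in proportion to the memory it can spare. The helper below is a hypothetical sketch; the device labels, memory budgets, and the rough 4-bit size estimate in the comments are assumptions, not TensorBlock's numbers.

```python
# Hypothetical sketch: split a model's transformer layers across two devices
# in proportion to their free memory, the basic idea behind cross-device
# sharding of a quantized QwQ-32B (device names and budgets are assumptions).

def partition_layers(n_layers: int, device_budgets_gb: dict[str, float]) -> dict[str, range]:
    """Assign contiguous layer ranges to devices, proportional to memory budget."""
    total = sum(device_budgets_gb.values())
    assignment, start = {}, 0
    items = list(device_budgets_gb.items())
    for i, (device, budget) in enumerate(items):
        # Last device takes the remainder so every layer is covered exactly once.
        count = n_layers - start if i == len(items) - 1 else round(n_layers * budget / total)
        assignment[device] = range(start, start + count)
        start += count
    return assignment


# QwQ-32B has 64 transformer layers; at ~4-bit quantization the weights are
# roughly 18-20 GB, so two 16 GB devices can each hold a portion with room
# left for KV cache (rough figures, for illustration only).
print(partition_layers(64, {"m1_macbook_16gb": 10.0, "rtx_4060_ti": 12.0}))
# -> {'m1_macbook_16gb': range(0, 29), 'rtx_4060_ti': range(29, 64)}
```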
Successfully tested running the full-size DeepSeek 70B model on three 8x @nvidia RTX 3080 rigs, achieving 25 tokens/s through 3-way pipeline and 8-way tensor parallelism. Each rig is equipped with 8x 10GB consumer GPUs (a typical crypto mining rig configuration)…
2
4
75
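For context, a 3-way pipeline by 8-way tensor-parallel layout maps naturally onto three 8-GPU rigs: each rig holds one pipeline stage, and within a rig every layer is sharded across the 8 cards. The snippet below shows one way to express that layout with vLLM, assuming a Ray cluster already spans the rigs; the tweet does not name the serving stack, and the model ID is a stand-in.

```python
# Sketch of a 3-way pipeline-parallel x 8-way tensor-parallel deployment with
# vLLM. Assumes a Ray cluster already spans the three 8-GPU rigs; the model ID
# is a placeholder, since the tweet does not name the exact checkpoint or stack.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-70B",  # placeholder 70B model
    tensor_parallel_size=8,               # shard each layer across a rig's 8 GPUs
    pipeline_parallel_size=3,             # split the layer stack across the 3 rigs
    distributed_executor_backend="ray",   # multi-node execution over the cluster
    dtype="float16",
)

out = llm.generate(
    ["Explain tensor parallelism in one paragraph."],
    SamplingParams(max_tokens=128, temperature=0.7),
)
print(out[0].outputs[0].text)
```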
RT @deanwang_: Just completed generating GGUF quantized versions of R1 1776 - an uncensored variant of DeepSeek R1 from @perplexity_ai @Ara…
0
3
0
Successfully deployed DeepSeek R1 Distilled 70B (AWQ) across 8x @nvidia RTX 3080 10G GPUs, achieving 60 tokens/s with full tensor parallelism via PCIe. Total hardware cost: $6,400. This demonstrates that consumer GPUs can deliver substantial ML inference capabilities at a…
8
16
148
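A quick sanity check on why an AWQ build of a 70B model fits on eight 10 GB cards: AWQ stores weights at roughly 4 bits per parameter, so the weights alone need about 35 GB, leaving the rest of the 80 GB of pooled VRAM for KV cache and activations once sharded 8 ways. The figures below are back-of-the-envelope estimates (including an illustrative 3-year amortization window), not TensorBlock measurements.

```python
# Back-of-the-envelope check of the 8x RTX 3080 (10 GB) AWQ deployment.
# All numbers here are rough estimates for illustration, not measured values.

params = 70e9                 # 70B parameters
awq_bits = 4                  # AWQ stores weights at ~4 bits per parameter
weight_gb = params * awq_bits / 8 / 1e9
total_vram_gb = 8 * 10        # eight 10 GB RTX 3080s

print(f"weights:  ~{weight_gb:.0f} GB of {total_vram_gb} GB pooled VRAM")
print(f"headroom: ~{total_vram_gb - weight_gb:.0f} GB for KV cache and activations")

# Hardware cost per million output tokens if the $6,400 rig ran flat out for
# 3 years at the quoted 60 tokens/s (ignoring electricity and idle time).
hardware_usd = 6400
tokens_per_s = 60
seconds_3y = 3 * 365 * 24 * 3600
usd_per_m_tokens = hardware_usd / (tokens_per_s * seconds_3y) * 1e6
print(f"hardware cost: ~${usd_per_m_tokens:.2f} per million tokens over 3 years")
```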
Running DeepSeek-R1 671B locally on a $6000 CPU server: FP8 achieves 1.91 tokens/s, with a potential 5.01 tokens/s on DDR5. Inspired by @carrigmat's work, we explored running the DeepSeek-R1 671B model on a CPU server:
- CPU: AMD EPYC 7543
- RAM: 16 × 64GB Hynix PC4-25600 3200MHz
4
27
144
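The measured rate is consistent with a simple bandwidth argument: DeepSeek-R1 is a mixture-of-experts model with roughly 37B active parameters per token, so at FP8 each generated token streams on the order of 37 GB of weights from RAM, while an 8-channel DDR4-3200 EPYC peaks around 205 GB/s. The sketch below computes that ceiling; it assumes the published 37B-active figure and ideal bandwidth, and real decoding (1.91 tokens/s here) lands below it.

```python
# Back-of-the-envelope decode-speed ceiling for DeepSeek-R1 671B on this box.
# Assumptions: ~37B active parameters per token (R1 is a MoE model), FP8 weights
# (1 byte per parameter), and decoding bound by streaming active weights from RAM.

def ddr_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak DRAM bandwidth in GB/s for a given channel count and transfer rate."""
    return channels * mt_per_s * bus_bytes / 1e3


gb_per_token = 37e9 * 1 / 1e9   # ~37 GB of weights touched per generated token

# EPYC 7543 has 8 memory channels; the 16 DIMMs run as 2 DIMMs per channel.
peak_bw = ddr_bandwidth_gbs(channels=8, mt_per_s=3200)   # ~204.8 GB/s

print(f"bandwidth ceiling: {peak_bw / gb_per_token:.1f} tokens/s")
print("measured: 1.91 tokens/s (overheads keep real decode below the ceiling);")
print("faster DDR5 memory raises the ceiling roughly in proportion to bandwidth.")
```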