At @huggingface we rely on GPU-fryer 🍳 to load-test our 768 H100 GPU cluster. It runs matrix multiplications and monitors for TFLOPS outliers to catch any software or hardware throttling, often a sign of cooling issues that need a hardware fix ❄️🔧. 🧵 1/2
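A minimal PyTorch sketch of the same idea, not GPU-fryer's actual implementation (GPU-fryer is a separate Rust tool): hammer each GPU with large matmuls, compute achieved TFLOPS, and flag devices that fall well below the fleet median. The matrix size, iteration count, and 10% outlier threshold here are illustrative assumptions.

```python
# Sketch: measure matmul TFLOPS per GPU and flag slow outliers (possible throttling).
import statistics
import torch

N, ITERS = 8192, 50                  # illustrative matrix size and iteration count
FLOPS_PER_MATMUL = 2 * N**3          # multiply-adds for an N x N @ N x N matmul

def measure_tflops(device: torch.device) -> float:
    a = torch.randn(N, N, device=device, dtype=torch.float16)
    b = torch.randn(N, N, device=device, dtype=torch.float16)
    torch.cuda.synchronize(device)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(ITERS):
        a @ b
    end.record()
    torch.cuda.synchronize(device)
    seconds = start.elapsed_time(end) / 1000.0
    return ITERS * FLOPS_PER_MATMUL / seconds / 1e12

results = {i: measure_tflops(torch.device(f"cuda:{i}"))
           for i in range(torch.cuda.device_count())}
median = statistics.median(results.values())
for gpu, tflops in results.items():
    flag = "  <-- possible throttling" if tflops < 0.9 * median else ""
    print(f"cuda:{gpu}: {tflops:.1f} TFLOPS{flag}")
```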
OMG, the U.S. just downloaded more than 5PB of DeepSeek-R1 on @huggingface in the last few days! Feeling late FOMO in Silicon Valley? 🤔🚀
🧠 LLM inference isn’t just about latency, it’s about consistency under load. Different workloads, configs, and hardware = very different real-world performance. At Hugging Face 🤗 we built inference-benchmarker, a simple tool to stress-test LLM inference servers. 🧵 (1/2)
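A minimal sketch of this kind of stress test, not inference-benchmarker itself: keep a fixed number of concurrent clients firing prompts at an assumed OpenAI-compatible /v1/completions endpoint and report the latency spread, since p99 vs p50 under load is what "consistency" means here. The URL, model name, and load shape are placeholder assumptions.

```python
# Sketch: constant-concurrency load test against an assumed OpenAI-compatible server.
import asyncio
import statistics
import time
import httpx

URL = "http://localhost:8000/v1/completions"          # assumed endpoint
CONCURRENCY, TOTAL_REQUESTS = 16, 200                  # illustrative load shape
PAYLOAD = {"model": "my-model", "prompt": "Hello", "max_tokens": 64}

async def worker(client: httpx.AsyncClient, latencies: list[float], n: int) -> None:
    for _ in range(n):
        start = time.perf_counter()
        resp = await client.post(URL, json=PAYLOAD, timeout=120)
        resp.raise_for_status()
        latencies.append(time.perf_counter() - start)

async def main() -> None:
    latencies: list[float] = []
    async with httpx.AsyncClient() as client:
        await asyncio.gather(*[
            worker(client, latencies, TOTAL_REQUESTS // CONCURRENCY)
            for _ in range(CONCURRENCY)
        ])
    latencies.sort()
    p50 = statistics.median(latencies)
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    print(f"requests={len(latencies)} p50={p50:.3f}s p99={p99:.3f}s")

asyncio.run(main())
```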
#Whenmoneytalks Someone connected to this told me the man behind Carlos Valenciano, "El Jefe," is the "paisita" banker Ramiro Ortiz, and what a coincidence, Valenciano is that man's head of finances. "It's not my fault, RCH." Ill-intentioned people have linked Ortiz to Albanisa (HugoCh and Ortega).
@HKydlicek NVIDIA buying SchedMD while their SuperPods currently run on k8s says a lot 😄
@jedisct1 Yep, CPU- and network-heavy workloads like Xet file reconstruction at @huggingface proved to perform poorly under WebAssembly compared to containers.