Explore tweets tagged as #HugoCh
@hugoch
Hugo Larcher
10 months
At @huggingface we rely on GPU-fryer 🍳 to load-test our 768-GPU H100 cluster. It runs matrix multiplications and monitors for TFLOPS outliers to catch software or hardware throttling — often a sign of cooling issues that need a hardware fix ❄️🔧. 🧵 1/2
5
29
254
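GPU-fryer itself is a standalone tool, but the check the tweet above describes (run sustained matrix multiplications and flag GPUs whose throughput falls out of line with the rest of the fleet) can be sketched in a few lines of PyTorch. The matrix size, iteration count, and 10% outlier threshold below are illustrative assumptions, not GPU-fryer's actual defaults:

# Minimal sketch (not GPU-fryer itself): time large FP16 matmuls on every GPU
# and flag devices whose sustained TFLOPS fall well below the fleet median,
# the kind of outlier that hints at thermal or software throttling.
import time
import torch

N = 8192          # matrix size; one N x N matmul costs ~2 * N**3 FLOPs
ITERS = 50
results = {}

for idx in range(torch.cuda.device_count()):
    device = f"cuda:{idx}"
    a = torch.randn(N, N, dtype=torch.float16, device=device)
    b = torch.randn(N, N, dtype=torch.float16, device=device)
    for _ in range(5):                 # warm-up so clocks and kernels settle
        torch.matmul(a, b)
    torch.cuda.synchronize(device)

    start = time.perf_counter()
    for _ in range(ITERS):
        torch.matmul(a, b)
    torch.cuda.synchronize(device)
    seconds = time.perf_counter() - start

    results[device] = 2 * N**3 * ITERS / seconds / 1e12   # sustained TFLOPS

median = sorted(results.values())[len(results) // 2]
for device, tflops in results.items():
    flag = "  <-- possible throttling" if tflops < 0.9 * median else ""
    print(f"{device}: {tflops:.1f} TFLOPS{flag}")

In practice a check like this would be compared against a known-good baseline for the GPU model and data type, since absolute TFLOPS depend on both; a GPU that consistently lags its peers is the throttling signal the tweet is about.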
@hugoch
Hugo Larcher
7 months
OMG, the U.S. just downloaded more than 5PB of DeepSeek-R1 on @huggingface in the last few days! Feeling late FOMO in Silicon Valley? 🤔🚀
2
4
22
@eliebakouch
elie
10 months
@hugoch @huggingface very nice readme 🥹
1
0
24
@hugoch
Hugo Larcher
9 months
🧠 LLM inference isn’t just about latency — it’s about consistency under load. Different workloads, configs, and hardware = very different real-world performance. At Hugging Face 🤗 we built inference-benchmarker — a simple tool to stress-test LLM inference servers. 🧵 (1/2)
2
13
39
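inference-benchmarker is its own project, but the core of such a stress test is sustained concurrent load with latency percentiles reported per concurrency level. A minimal sketch assuming an OpenAI-compatible completions server; the URL, model name, payload, and concurrency steps are placeholders, not anything inference-benchmarker actually uses:

# Minimal sketch (not inference-benchmarker itself): hit a completions endpoint
# with increasing concurrency and report latency percentiles, since consistency
# under load matters as much as single-request latency.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/v1/completions"                          # assumed endpoint
PAYLOAD = {"model": "my-model", "prompt": "Hello", "max_tokens": 64}   # placeholder payload

def one_request() -> float:
    start = time.perf_counter()
    requests.post(URL, json=PAYLOAD, timeout=120).raise_for_status()
    return time.perf_counter() - start

for concurrency in (1, 8, 32, 64):                                     # assumed load steps
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: one_request(), range(concurrency * 10)))
    p50 = statistics.median(latencies)
    p99 = latencies[int(0.99 * (len(latencies) - 1))]
    print(f"concurrency={concurrency:3d}  p50={p50:.2f}s  p99={p99:.2f}s")

Watching how p99 drifts away from p50 as concurrency grows is what surfaces the consistency-under-load problem the tweet describes; a server can look fast for a single request and degrade badly at 64 concurrent ones.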
@mamutico1
mamutico 🇨🇷🇻🇪🇪🇸
4 months
#Whenmoneytalks Someone connected told me that the man behind Carlos Valenciano, "El Jefe", is the fellow-countryman banker Ramiro Ortiz, and what a coincidence, Valenciano is that man's head of finances. "It's not my fault, RCH." Ill-intentioned people linked Ortiz to Albanisa (HugoCh and Ortega)
0
0
0
@hugoch
Hugo Larcher
10 days
@HKydlicek NVIDIA buying SchedMD while their SuperPods currently run on k8s says a lot 😄
0
0
4
@hugoch
Hugo Larcher
10 days
@jedisct1 Yep, CPU- and network-heavy workloads like Xet file reconstruction at @huggingface proved to perform poorly under WebAssembly compared to containers.
0
0
2