Adrien Carreira
@XciD_
Followers: 613 · Following: 849 · Media: 4 · Statuses: 468
Head of Infrastructure @huggingface
Lyon, France
Joined June 2011
VirusTotal is now integrated into HF, LET'S GO! This is kind of recursive when you think about it, VT runs ClamAV as well. In any case, it's a great move to further secure artefacts pushed to the Hub, as VT has a very comprehensive suite of AVs running all at once (>70)! Kudos
Wanna upgrade your agent game? With @AIatMeta, we're releasing 2 incredibly cool artefacts:
- GAIA 2: assistant evaluation with a twist (new: adaptability, robustness to failure & time sensitivity)
- ARE, an agent research environment to empower all!
https://t.co/XzCuG8A2b8
When @sama told me at the AI summit in Paris that they were serious about releasing open-source models & asked what would be useful, I couldn't believe it. But six months of collaboration later, here it is: Welcome to OSS-GPT on @huggingface! It comes in two sizes, for both
Please don't download the weights all at once or our servers will melt
I'm notorious for turning down 99% of the hundreds of requests every month to join calls (because I hate calls!). The @huggingface team saw an opportunity and bullied me into accepting to do a Zoom call with users who upgrade to Pro. I only caved under one strict condition:
Starting today you can run any of the 100K+ GGUFs on Hugging Face directly with Docker Run! All of it in one single line: docker model run https://t.co/2no6KMYrtM Excited to see how y'all will use it
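For anyone who wants to script that same one-liner, here is a minimal Python sketch (not an official snippet): it simply shells out to `docker model run`, assuming Docker Model Runner is enabled in Docker Desktop and that it resolves `hf.co/<org>/<repo>` GGUF references; the repo id and prompt below are placeholders.

```
import subprocess

# Placeholder GGUF repo on the Hub; any GGUF-bearing repository should work.
GGUF_REPO = "hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF"

# Equivalent to running `docker model run hf.co/<org>/<repo> "<prompt>"` in a shell.
# Requires Docker Desktop with the Model Runner feature enabled.
result = subprocess.run(
    ["docker", "model", "run", GGUF_REPO, "Say hello in one sentence."],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```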
Thrilled to finally share what we've been working on for months at @huggingface 🤗 @pollenrobotics
Our first robot: Reachy Mini
A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community. Tiny price, small size, huge
Today is a big day, we're introducing the first version of the HF MCP server 🔥 🧵
Continuing to move all the LFS bytes into Xet storage on Hugging Face! Currently up to:
- 5,500 users and orgs with Xet access
- 150,000 Xet-backed models and datasets
- 4+ PB managed by Xet
How much more to go? If the Hub's top storage users are any indication: many bytes
A quick update on the future of the `transformers` library! In order to provide a source of truth for all models, we are working with the rest of the ecosystem to make the modeling code the standard. A joint effort with vLLM, LlamaCPP, SGLang, Mlx, Qwen, Glm, Unsloth, Axolotl,
The entire Xet team is so excited to bring Llama 4 to the @huggingface community. Every byte downloaded comes through our infrastructure ❤️ 🤗 ❤️ 🤗 ❤️ 🤗 Read the release post to see more about these SOTA models. https://t.co/cGf0L1wHOX
Meta COOKED! Llama 4 is out! Llama 4 Maverick (402B) and Scout (109B) - natively multimodal, multilingual and scaled to 10 MILLION context! BEATS DeepSeek v3 🔥
Llama 4 Maverick:
> 17B active parameters, 128 experts, 400B total parameters
> Beats GPT-4o & Gemini 2.0 Flash,
I analyzed all public GGUF models on the Hub for the llama-cpp-python chat template vulnerability. Good news: out of 116K+ GGUF models, none are currently dangerous! 🧵
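For context, the llama-cpp-python issue was a Jinja2 template-injection path through the GGUF `tokenizer.chat_template` metadata. The sketch below is not the author's actual tooling, just a rough illustration of what such a scan can look like, assuming `huggingface_hub` and the `gguf` package; the suspicious-pattern list is illustrative, and a real scan would read only the GGUF header rather than download whole files.

```
from huggingface_hub import HfApi, hf_hub_download
from gguf import GGUFReader

# Jinja2 constructs a sandbox-escaping chat template would typically lean on
# (illustrative list, not an exhaustive detection rule).
SUSPICIOUS = ("__class__", "__subclasses__", "__globals__", "__import__", "popen")

api = HfApi()

# Walk a few GGUF-tagged repos; a full scan would iterate over all of them.
for model in api.list_models(filter="gguf", limit=5):
    info = api.model_info(model.id)
    gguf_files = [s.rfilename for s in info.siblings if s.rfilename.endswith(".gguf")]
    if not gguf_files:
        continue

    # Download one GGUF per repo (multi-GB for large models; a production scan
    # would range-request only the metadata header instead).
    path = hf_hub_download(model.id, gguf_files[0])
    field = GGUFReader(path).fields.get("tokenizer.chat_template")
    if field is None:
        continue

    template = bytes(field.parts[field.data[0]]).decode("utf-8", errors="replace")
    hits = [p for p in SUSPICIOUS if p in template]
    print(model.id, "->", hits or "clean")
```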
Text Generation Inference v2.0.0 is the fastest open-source implementation of Cohere Command R+! Command R+ is the best open-weights model. Leveraging the power of Medusa heads, TGI achieves unparalleled speeds with a latency as low as 9ms per token for a 104B model!
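To put the latency claim in user terms, here is a minimal client-side sketch using `huggingface_hub`'s `InferenceClient`, assuming a TGI v2.0.0 server is already running locally (the model id, port, and prompt are illustrative):

```
import time
from huggingface_hub import InferenceClient

# Assumes a TGI server is already up, e.g. launched from the official Docker
# image with --model-id CohereForAI/c4ai-command-r-plus (id and port illustrative).
client = InferenceClient("http://localhost:8080")

start = time.perf_counter()
tokens = 0
# Stream the generation so per-token latency can be estimated client-side.
for token in client.text_generation(
    "Explain speculative decoding in two sentences.",
    max_new_tokens=64,
    stream=True,
):
    tokens += 1

elapsed = time.perf_counter() - start
print(f"{tokens} tokens in {elapsed:.2f}s -> {elapsed / max(tokens, 1) * 1000:.1f} ms/token")
```

Note that client-side numbers include network and streaming overhead, so they will sit above the server-side 9ms figure.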
What if you could casually access your remote GPU in HF Spaces from the comfort of your local VSCode 🤯 EDIT: did I mention we have an insane Infra team at @huggingface??
🤗 Transformers v4.35 is out, and safetensors serialization is now the default. Saving a torch model using `save_pretrained` will now save it as a safetensors file containing only tensors. Loading files in this format provides a much safer experience. Why?
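As a quick illustration (a minimal sketch; the model id is just an example), `save_pretrained` on v4.35+ writes `model.safetensors` instead of a pickled `pytorch_model.bin`, and the old format remains available via `safe_serialization=False`:

```
import os
from transformers import AutoModelForSequenceClassification

# Example model id; any torch model works the same way.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

# With transformers >= 4.35 this writes model.safetensors (tensors only, no pickle).
model.save_pretrained("./my-model")
print(os.listdir("./my-model"))  # expect config.json and model.safetensors

# The pickle-based .bin format is still available when explicitly requested.
model.save_pretrained("./my-model-bin", safe_serialization=False)
```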
Introducing... HF Training Cluster as a service! 🔥🔥 Access to a large compute cluster is key for large-scale model training, but historically it's been hard to secure access to large numbers of hardware accelerators, even with a hefty budget. 💰 With Training Cluster as a
Excited to announce that Dockerfile deployments on high-performance MicroVMs are now GA! https://t.co/oYeDz3eS55 Deploy anything using a Dockerfile in seconds at the edge with zero config. Fun fact 💡: @gokoyeb builders are much faster than building containers on my