Wauplin
@Wauplin
Followers: 1K · Following: 3K · Media: 68 · Statuses: 272
Doing things at Hugging Face. Maintainer of 🤗/huggingface_hub.
Joined August 2015
With the latest update to the @huggingface CLI, you get `hf cache ls/prune/rm/verify`. Want to delete everything in the cache? `hf cache rm $(hf cache ls -q)` does the trick. Thank you @Wauplin @hanouticelina 🙏 PS: Call me the cache janitor now.
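The one-liner above composes two subcommands via shell command substitution: the inner `hf cache ls -q` prints one cache-entry id per line, and the outer `hf cache rm` receives them all as arguments. A minimal sketch of that pattern, with the hf subcommands mocked as shell functions so it runs anywhere (the function names and fake ids are illustrative, not real cache entries):

```shell
#!/bin/sh
# Stand-in for `hf cache ls -q`: quiet mode prints only entry ids, one per line.
hf_cache_ls_q() {
    printf '%s\n' model-a model-b dataset-c
}

# Stand-in for `hf cache rm <id>...`: accepts any number of ids as arguments.
hf_cache_rm() {
    for id in "$@"; do
        echo "removed $id"
    done
}

# The janitor pattern: the substitution splits on newlines/whitespace,
# so every listed entry becomes one argument to the rm command.
hf_cache_rm $(hf_cache_ls_q)
```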
Tired of Cloudflare issues and don't want to rely on public APIs? Don't forget you're always one command away from deploying a private instance of any open-source LLM: `uvx hf endpoints catalog deploy --repo openai/gpt-oss-20b`
@astral_sh A big thank you to @thorwhalen1 for generously transferring the `hf` package name to us on PyPI. This will make the CLI much more accessible for all Hugging Face users. 🤗
Using the hf CLI just got insanely easy:
uvx hf auth login
uvx hf download
uvx hf ...
Always runs the latest version in an isolated environment. No installs or updates required, thanks to @astral_sh's awesome tooling!
Who said we should stop shipping after v1.0?
We're definitely not slowing down - it's only been 2 weeks since v1.0, but huggingface_hub v1.1 is already here, packed with a lot of nice features! 🔥 This update brings a faster, cleaner download experience and powerful new features to the Hugging Face CLI!
Feels really great to be partnering with @wavespeed_ai on this! 🤗
🚀 WaveSpeedAI x HuggingFace: Strategic Partnership WaveSpeedAI has officially joined forces with @huggingface as an Inference Provider, bringing creators a faster, smoother, and more consistent generation experience. 💡 Highlights: Up to 3× faster inference performance
Earlier this week, we released v1.0 of huggingface_hub. But to get there, it took 5 years, 35+ releases, and 280+ contributors. So we wrote a blog post to summarize the journey:
huggingface.co
We finally stopped being scared of v1.0.
Hugging Face Hub just hit v1.0, a testament to how far the ecosystem (and API!) have come. Here's to the next decade of open ML! https://t.co/M9RwcqGbc9
🔥 We're thrilled to announce huggingface_hub v1.0! After five years of development, this foundational release is packed with a fully modernized HTTP backend and a complete, from-the-ground-up CLI revamp! $ pip install huggingface_hub --upgrade 🧵 highly recommend
Have you noticed different performance between open model inference providers? You can now run evals across providers with the @huggingface inference providers integration in InspectAI.
Last Stop Before 1.0! We've shipped huggingface_hub v0.36.0 today with performance optimizations in HfFileSystem. Streaming from the Hub is now the go-to solution for large distributed training! And yes, you read that right: this is the last release before the long-awaited v1.0!
huggingface_hub v0.35.0 is here! Major highlights:
⏰ Scheduled Jobs: run GPU cron jobs on the Hub with full CLI support
🎥 Image-to-video generation powered by @FAL
⚡ New providers: welcome @Scaleway_fr & PublicAI!
More details in
github.com
Scheduled Jobs: In the v0.34.0 release, we announced Jobs, a new way to run compute on the Hugging Face Hub. In this new release, we are announcing Scheduled Jobs to run Jobs on a regular basis. Think &...
Awesome work from @hanouticelina and the team, letting you use SoTA open-source LLMs with GitHub Copilot in @code without vendor lock-in! 🔥
Starting today, you can use Hugging Face Inference Providers directly in GitHub Copilot Chat on @code! 🔥 That means you can access frontier open-source LLMs like Qwen3-Coder, gpt-oss and GLM-4.5 directly in VS Code, powered by our world-class inference partners -
After 5 years and 1.4B+ installs, huggingface_hub 1.0 is coming! Expect a smoother UX with httpx & typer, plus removal of legacy code while keeping migration as painless as possible. Want to be part of it? Share your comments on https://t.co/vraUTFMq0v
👋 Hello ML community! Trackio is here - the free, local-first experiment tracking library from @huggingface 🤗
✅ Drop-in replacement for wandb
✅ Track locally, share via Spaces
✅ Zero setup, zero cost
✅ <1000 lines of Python
Ready to simplify your ML experiments?
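"Drop-in replacement for wandb" means Trackio mirrors wandb's call surface (`init`, `log`, `finish`), so existing logging loops keep working after a one-line import swap. A hypothetical stub of that pattern, not Trackio's actual code, with illustrative class and metric names:

```python
# Hypothetical stub mirroring the wandb-style API that Trackio advertises.
# `Run`, `init`, and the metric names are illustrative, not Trackio internals.
class Run:
    def __init__(self, project: str):
        self.project = project
        self.history = []

    def log(self, metrics: dict) -> None:
        # wandb-style: each call records one step's worth of metrics
        self.history.append(dict(metrics))

    def finish(self) -> list:
        return self.history


def init(project: str) -> Run:  # mirrors wandb.init(project=...)
    return Run(project)


# A training loop written against wandb works unchanged against this surface.
run = init(project="demo")
for step in range(3):
    run.log({"step": step, "loss": 1.0 / (step + 1)})
print(len(run.finish()))  # 3
```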
Vibecoding is great, but it's often a pain to quickstart a project. So I made a Hugging Face Space template (FastAPI + React + SQLite + HF login), all wired up for local dev + Spaces. The kicker: a PROMPT file so Claude/Cursor can take over the vibecoding of UI + API. 🧵
3/ The new Responses API is where things get wild:
✅ Adjustable reasoning effort levels
✅ Built for agentic workflows (function calling, structured output, etc.)
✅ Remote MCP calls to external services
This isn't your typical chat endpoint.