Hugging Face

@huggingface

Followers: 509K
Following: 8K
Media: 447
Statuses: 11K

The AI community building the future. https://t.co/VkRPD0Vclr

Joined September 2016
@huggingface
Hugging Face
9 months
We passed 5 million users. 🥳 That's 5 million of you who have signed up on the Hub 🚀 Thank you for contributing to the ecosystem and making open Machine Learning happen! We're just getting started 🤗
218
226
2K
@huggingface
Hugging Face
2 years
We just released Transformers' boldest feature: Transformers Agents. This removes the barrier of entry to machine learning. Control 100,000+ HF models by talking to Transformers and Diffusers. Fully multimodal agent: text, images, video, audio, docs 🌎
Tweet media one
74
809
3K
@huggingface
Hugging Face
3 years
🤗🚀
Tweet media one
90
224
2K
@huggingface
Hugging Face
2 years
Llama 2: Now on Hugging Chat 🤗🦙 Try out the 70B Chat model for free with super-fast inference and web search, powered by open-source tools! 👉
38
436
2K
@huggingface
Hugging Face
2 years
THIS IS BIG! 👀 It's now possible to take any of the >30,000 ML apps from Spaces and run them locally (or on your own infrastructure) with the new "Run with @Docker" feature 🔥🐳 See an app you like? Run it yourself in just 2 clicks 🤯
Tweet media one
34
342
2K
@huggingface
Hugging Face
2 years
🤗 Transformers has been built by, with, and for the community. Reaching 100k ⭐ on GitHub is a testament to ML's reach and the community's will to innovate and contribute. To celebrate, we highlight 100 incredible projects in transformers' vicinity.
Tweet media one
141
288
2K
@huggingface
Hugging Face
5 years
No labeled data? No problem. The 🤗 Transformers master branch now includes a built-in pipeline for zero-shot text classification, to be included in the next release. Try it out in the notebook here:
20
411
2K
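The zero-shot classification pipeline announced above can be sketched in a few lines. A minimal sketch, assuming `transformers` is installed; the checkpoint `facebook/bart-large-mnli` is the commonly used NLI backbone for this task (downloaded on first use), and the example text and labels are illustrative:

```python
from transformers import pipeline

# Zero-shot classification: score a text against candidate labels the
# model was never explicitly trained on, via natural language inference.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

result = classifier(
    "The new GPU doubles throughput at the same power draw.",
    candidate_labels=["technology", "sports", "cooking"],
)
# `result` holds "labels" sorted by descending "scores".
print(result["labels"][0], round(result["scores"][0], 3))
```

With the default single-label setting, the scores over the candidate labels sum to one.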
@huggingface
Hugging Face
4 years
The first part of the Hugging Face Course is finally out! Come learn how the 🤗 Ecosystem works 🥳: Transformers, Tokenizers, Datasets, Accelerate, the Model Hub! Share with your friends who want to learn NLP; it's free! Come join us at
Tweet media one
23
476
1K
@huggingface
Hugging Face
4 years
🚨Transformers is expanding to Speech!🚨 🤗Transformers v4.3.0 is out and we are excited to welcome @facebookai's Wav2Vec2 as the first Automatic Speech Recognition model in our library! 👉 Now, you can transcribe your audio files directly on the hub:
Tweet media one
17
309
1K
@huggingface
Hugging Face
2 years
SAM, the groundbreaking segmentation model from @Meta, is now available in 🤗 Transformers! What does this mean?
1. One line of code to load it, one line to run it.
2. Efficient batching support to generate multiple masks.
3. Pipeline support for easier usage.
More details: 🧵
Tweet media one
24
242
1K
@huggingface
Hugging Face
4 years
$40M Series B! 🙏 Thank you open-source contributors, pull requesters, issue openers, notebook creators, model architects, tweeting supporters & community members all over the 🌎! We couldn't do what we do & be where we are - in a field dominated by big tech - without you!
55
136
1K
@huggingface
Hugging Face
2 years
Code Llama: Now on Hugging Chat 💻🦙 Try out the 34B Instruct model for free with super fast inference! 👉
22
273
1K
@huggingface
Hugging Face
4 years
Last week, EleutherAI released two checkpoints for GPT Neo, an *Open Source* replication of OpenAI's GPT-3. These checkpoints, of sizes 1.3B and 2.7B, are now available in 🤗 Transformers! The generation capabilities are truly 🤯. Try it now on the Hub:
Tweet media one
17
302
1K
@huggingface
Hugging Face
3 years
Last week @MetaAI publicly released huge LMs with up to ☄️30B parameters. Great win for open source 🎉 These checkpoints are now in 🤗 transformers! But how do you use such big checkpoints? Introducing Accelerate and ⚡️BIG MODEL INFERENCE⚡️. Load & USE the 30B model in Colab (!) ⬇️
Tweet media one
14
234
1K
@huggingface
Hugging Face
2 years
The first RNN in transformers! 🤯 Announcing the integration of RWKV models in transformers, with @BlinkDL_AI and the RWKV community! RWKV is an attention-free model that combines the best of RNNs and transformers. Learn more about the model in this blog post:
Tweet media one
18
261
1K
@huggingface
Hugging Face
5 years
Let’s democratize NLP for all languages! 🌎🌎🌎 Today, with v2.9.1, we are releasing 1,008 machine translation models covering 140 different languages, trained by @jorgtiedemann with @marian and ported by @sam_shleifer. Find your language here: [1/4]
Tweet media one
19
343
1K
@huggingface
Hugging Face
5 years
𝗢𝗨𝗥 𝗗𝗘𝗙𝗜𝗡𝗜𝗧𝗜𝗩𝗘 𝗧𝗨𝗧𝗢𝗥𝗜𝗔𝗟 🔥. How to train a new language model from scratch using Transformers and Tokenizers. ➡️
Tweet media one
11
266
977
@huggingface
Hugging Face
5 years
Long-range sequence modeling meets 🤗 transformers! We are happy to officially release Reformer from @GoogleAI, a transformer that can process sequences as long as 500,000 tokens. Thanks a million, Nikita Kitaev and @lukaszkaiser! Try it out here:
Tweet media one
7
251
979
@huggingface
Hugging Face
2 years
How to train a Llama 2 chatbot (a step-by-step guide, designed for non-coders).
9
228
972
@huggingface
Hugging Face
5 years
We are honored to be awarded the Best Demo Paper for "Transformers: State-of-the-Art Natural Language Processing" at #emnlp2020 😍. Thank you to our wonderful team members and the fantastic community of contributors who make the library possible 🤗🤗🤗.
Tweet media one
28
134
964
@huggingface
Hugging Face
5 years
Time to push explainable AI 🔬. exBERT, the visual analysis tool to explore learned representations from @MITIBMLab is now integrated on our model pages for BERT, DistilBERT, RoBERTa, XLM & more! Just click on the tag #exbert on @huggingface’s models page:
5
271
931
@huggingface
Hugging Face
3 years
Happy news from Japan! Japanese Stable Diffusion, trained on Japanese data by rinna, now has a demo on Hugging Face Spaces!
Tweet media one
2
270
930
@huggingface
Hugging Face
4 years
🤗 Transformers meets VISION 📸🖼️. v4.6.0 is the first CV-dedicated release!
- CLIP @OpenAI: image-text similarity or zero-shot image classification
- ViT @GoogleAI and DeiT @facebookai: SOTA image classification
Try ViT/DeiT on the hub (mobile too!):
10
253
933
@huggingface
Hugging Face
3 years
🧨Diffusion models have been powering impressive ML apps, enabling DALL-E or Imagen. Introducing 🤗 diffusers: a modular toolbox for diffusion techniques, with a focus on:
🚄 Inference pipelines
⏰ Schedulers
🏭 Models
📃 Training examples
Tweet media one
13
193
906
@huggingface
Hugging Face
5 years
Introducing PruneBERT: fine-*P*runing BERT's encoder to the size of a high-resolution picture (11MB) while keeping 95% of its original performance! Based on our latest work on movement pruning. Code and weights:
16
257
908
@huggingface
Hugging Face
1 month
Super happy to announce that we are acquiring @pollenrobotics to bring open-source robots to the world! 🤖. Since @RemiCadene joined us from Tesla, we’ve become the most widely used software platform for open robotics thanks to @LeRobotHF and the Hugging Face Hub. Now, we’re
Tweet media one
52
183
919
@huggingface
Hugging Face
5 years
Now that neural nets have fast implementations, a bottleneck in pipelines is tokenization: strings ➡️ model inputs. Welcome 🤗Tokenizers: ultra-fast & versatile tokenization, led by @moi_anthony:
- encode 1GB in 20sec
- BPE/byte-level-BPE/WordPiece/SentencePiece
- python/js/rust
Tweet media one
11
228
867
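The tokenizers workflow above can be sketched end to end: build a model, attach a pre-tokenizer, and train from raw text. A minimal sketch, assuming the `tokenizers` package is installed; the in-memory corpus and vocabulary size are illustrative:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# A byte-pair-encoding tokenizer trained from scratch on a tiny corpus.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = BpeTrainer(special_tokens=["[UNK]"], vocab_size=200)
corpus = ["fast tokenization turns strings into model inputs"] * 100
tokenizer.train_from_iterator(corpus, trainer)

encoding = tokenizer.encode("fast tokenization")
print(encoding.tokens)  # subword strings
print(encoding.ids)     # the corresponding vocabulary ids
```

The same `Tokenizer` object can be saved to a single JSON file and reloaded from Python, JS, or Rust.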
@huggingface
Hugging Face
2 years
We are looking into an incident where a malicious user took control over the Hub organizations of Meta/Facebook & Intel via reused employee passwords that were compromised in a data breach on another site. We will keep you updated 🤗.
34
138
804
@huggingface
Hugging Face
2 years
Hugging Face is now part of the PyTorch Foundation as a premier member 🤝. We have been collaborating with the PyTorch team for the past four years and are committed to supporting the project. We share an objective: to lower the barrier of entry to ML.
Tweet media one
15
133
795
@huggingface
Hugging Face
6 months
QwQ is #1 trending!
Tweet media one
16
87
798
@huggingface
Hugging Face
3 years
An announcement from Hugging Face to Japan! We have started translating the Hugging Face Course into Japanese. Thanks to the Student Ambassadors at Tohoku University, the translation of Chapter 1 is complete. We will keep translating steadily. Please read the course, learn about Hugging Face Transformers, and give it a try!
2
223
800
@huggingface
Hugging Face
5 years
🤗Transformers v3.0 is out🔥 — [1/4]
Tweet media one
3
215
762
@huggingface
Hugging Face
5 years
Bored at home? Need a new friend? Hang out with BART, the newest model available in transformers (thx @sam_shleifer), with the hefty 2.6 release (notes: ). Now you can get state-of-the-art summarization with a few lines of code: 👇👇👇
Tweet media one
14
205
736
@huggingface
Hugging Face
4 years
EleutherAI's GPT-J is now in 🤗 Transformers: a 6-billion-parameter autoregressive model with crazy generative capabilities! It shows impressive results in:
- 🧮 Arithmetic
- ⌨️ Code writing
- 👀 NLU
- 📜 Paper writing
Play with it to see how powerful it is:
Tweet media one
13
170
722
@huggingface
Hugging Face
5 years
Transformers v2.2 is out, with *4* new models and seq2seq capabilities! ALBERT is released alongside CamemBERT, implemented by the authors, DistilRoBERTa (twice as fast as RoBERTa-base!) and GPT-2 XL! Encoder-decoder with ⭐Model2Model⭐. Available on
Tweet media one
9
199
710
@huggingface
Hugging Face
2 years
Today we are excited to announce a new partnership with @awscloud! 🔥. Together, we will accelerate the availability of open-source machine learning 🤝. Read the post 👉
10
154
700
@huggingface
Hugging Face
4 years
Document parsing meets 🤗 Transformers! 📄 #LayoutLMv2 and #LayoutXLM by @MSFTResearch are now available! 🔥 They're capable of parsing document images (like PDFs) by incorporating text, layout, and visual information, as in the @gradio demo below ⬇️
10
187
682
@huggingface
Hugging Face
2 months
We are excited to partner with @AIatMeta to welcome Llama 4 Maverick (402B) & Scout (109B), natively multimodal language models, to the Hugging Face Hub with Xet 🤗. Both MoE models are trained on up to 40 trillion tokens, pre-trained on 200 languages, and significantly outperforms its
Tweet media one
25
118
711
@huggingface
Hugging Face
6 years
🥁🥁🥁 Welcome to "pytorch-transformers", the 👾 library for Natural Language Processing!
Tweet media one
7
212
684
@huggingface
Hugging Face
5 years
The 101 for text generation! 💪💪💪 This is an overview of the main decoding methods and how to use them super easily in Transformers with GPT2, XLNet, Bart, T5, … It includes greedy decoding, beam search, top-k/nucleus sampling, … By @PatrickPlaten
Tweet media one
11
202
676
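The decoding methods listed in that overview all route through `generate()`. A minimal sketch, assuming `transformers` and `torch` are installed; a randomly initialized tiny GPT-2 stands in for a pretrained model so the snippet runs without downloads (the outputs are meaningless token ids, which is fine for showing the knobs):

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Tiny, randomly initialized stand-in model (no download needed). The
# default eos token id (50256) cannot be produced by a 100-token
# vocabulary, so every call below generates exactly 5 new tokens.
config = GPT2Config(vocab_size=100, n_layer=2, n_head=2, n_embd=64)
model = GPT2LMHeadModel(config)
model.eval()

prompt = torch.tensor([[1, 2, 3]])  # three arbitrary token ids

greedy = model.generate(prompt, max_new_tokens=5, do_sample=False)         # greedy decoding
beams = model.generate(prompt, max_new_tokens=5, num_beams=3)              # beam search
topk = model.generate(prompt, max_new_tokens=5, do_sample=True, top_k=10)  # top-k sampling
topp = model.generate(prompt, max_new_tokens=5, do_sample=True, top_p=0.9) # nucleus sampling

print(greedy.shape)  # 3 prompt tokens + 5 generated tokens
```

With a real checkpoint, the same four calls are preceded by tokenizing a text prompt and followed by decoding the ids back to text.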
@huggingface
Hugging Face
5 years
Transformers 2.4.0 is out 🤗
- Training transformers from scratch is now supported
- New models, including *FlauBERT*, Dutch BERT, *UmBERTo*
- Revamped documentation
- First multi-modal model, MMBT from @facebookai, text & images
Bye bye Python 2 🙃
7
163
659
@huggingface
Hugging Face
4 years
Fine-tuning a *3-billion* parameter model on a single GPU? Now possible in transformers, thanks to the DeepSpeed/Fairscale integrations! Thank you @StasBekman for the seamless integration, and thanks to @microsoft and @facebookai teams for their support!
8
176
653
@huggingface
Hugging Face
5 years
┏━━┓┏━━┓┏━━┓┏━━┓
┗━┓┃┃┏┓┃┗━┓┃┃┏┓┃
┏━┛┃┃┃┃┃┏━┛┃┃┃┃┃
Solving Natural Language Processing!
┃┏━┛┃┃┃┃┃┏━┛┃┃┃┃
┃┗━┓┃┗┛┃┃┗━┓┃┗┛┃
┗━━┛┗━━┛┗━━┛┗━━┛
7
78
639
@huggingface
Hugging Face
5 years
You can now visualize Transformers training performance with a seamless @weights_biases integration. Compare hyperparameters, output metrics, and system stats like GPU utilization across your models! Step-by-step guide: Colab:
8
173
630
@huggingface
Hugging Face
6 years
💃PyTorch-Transformers 1.1.0 is live💃. It includes RoBERTa, the transformer model from @facebookai, current state-of-the-art on the SuperGLUE leaderboard! Thanks to @myleott @julien_c @LysandreJik and all the 100+ contributors!
Tweet media one
6
190
632
@huggingface
Hugging Face
2 years
It's been an exciting year for 🤗Transformers. We tripled the number of weekly active users over 2022, with over 1M users most weeks now and 300k daily pip installs on average🤯
Tweet media one
10
82
624
@huggingface
Hugging Face
5 years
🚨New release alert🚨 BERT, RoBERTa, GPT2, TransformerXL and most of the community models are now an order of magnitude faster thanks to the integration of the tokenizers library! Check it out here:
Tweet media one
3
141
622
@huggingface
Hugging Face
4 years
🔥Fine-Tuning @facebookai's Wav2Vec2 for Speech Recognition is now possible in Transformers🔥 Not only for English but for 53 languages 🤯 Check out the tutorials:
👉 Train Wav2Vec2 on TIMIT
👉 Train XLSR-Wav2Vec2 on Common Voice
Tweet media one
6
151
617
@huggingface
Hugging Face
5 years
GPT-3 from @OpenAI got you interested in zero-shot and few-shot learning? You're lucky, because our own @joeddav has just released a demo of zero-shot topic classification! Test how the model can predict a topic it has NEVER been trained on: 🤯🤯🤯
Tweet media one
10
173
615
@huggingface
Hugging Face
5 years
How big should my language model be? As NLP researchers and practitioners, that question is central. We have built a tool that calculates an optimal model size and training time for your budget so you don't have to. See it in action at [1/2]
Tweet media one
7
153
615
@huggingface
Hugging Face
5 years
Want speedy transformers models w/o a GPU?! 🧐 Starting with transformers v3.1.0, your models can now run at the speed of light on commodity CPUs thanks to ONNX Runtime quantization! 🚀 Check out our 2nd blog post with ONNX Runtime on the subject! 🔥
3
167
618
@huggingface
Hugging Face
5 years
We spend our time fine-tuning models on tasks like text classification, NER, or question answering. Yet 🤗Transformers had no simple way to let users try these fine-tuned models. Release 2.3.0 brings Pipelines: thin wrappers around tokenizer + model to ingest/output human-readable data.
Tweet media one
4
148
605
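The "thin wrapper around tokenizer + model" idea above looks like this in use. A minimal sketch, assuming `transformers` is installed; the checkpoint name is the widely used SST-2 sentiment model (downloaded on first use), and the example sentences are illustrative:

```python
from transformers import pipeline

# A pipeline bundles tokenization, the model forward pass, and
# post-processing into one call that returns human-readable output.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(sentiment("Pipelines make trying fine-tuned models easy!"))
```

Each result is a dict with a `label` and a confidence `score`; the same one-call pattern applies to the other pipeline tasks (NER, question answering, and so on).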
@huggingface
Hugging Face
3 years
The Technology Behind BLOOM Training 🌸 Discover how @BigscienceW used @MSFTResearch DeepSpeed + @nvidia Megatron-LM technologies to train the World's Largest Open Multilingual Language Model (BLOOM):
8
148
619
@huggingface
Hugging Face
1 year
We're having some infra issues; we're working on it. Please send hugs! 🤗 In the meantime:
import os
os.environ['HF_HUB_OFFLINE'] = '1'
119
56
606
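The offline-mode flag hinges on two Python details worth spelling out: `os.environ` values must be strings (so the flag is `'1'`, not the integer 1), and `HF_HUB_OFFLINE` is read by `huggingface_hub` when it is imported, so it must be set before the import. A minimal stdlib sketch:

```python
import os

# Correct: environment variables are strings.
os.environ["HF_HUB_OFFLINE"] = "1"

# A bare integer is rejected by os.environ with a TypeError.
try:
    os.environ["HF_HUB_OFFLINE"] = 1  # type: ignore[assignment]
except TypeError:
    print("int values are rejected; use the string '1'")
```

Set the variable at the very top of the script or notebook, before any `huggingface_hub`/`transformers` import, for the offline mode to take effect.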
@huggingface
Hugging Face
8 months
Google has released Gemma-2-JPN! This model is Gemma 2 2B fine-tuned on Japanese. It supports Japanese at the same level of performance as Gemma 2 in English. Model list:
6
175
579
@huggingface
Hugging Face
4 years
🔥We're launching the new and it's incredible. 🚀 Play live with 10-billion-plus-parameter models, deploy them instantly in production with our hosted API, join the 500 organizations using our hub to host/share models & datasets. And one more thing 👇
Tweet media one
15
129
579
@huggingface
Hugging Face
3 years
💫 Perceiver IO by @DeepMind is now available in 🤗 Transformers! A general-purpose deep learning model that works on any modality and combinations thereof:
📜 text
🖼️ images
🎥 video
🔊 audio
☁️ point clouds
Read more in our blog post:
Tweet media one
4
112
592
@huggingface
Hugging Face
4 years
Release alert: the 🤗datasets library v1.2 is available now! With:
- 611 datasets you can download in one line of Python
- 467 languages covered, 99 with at least 10 datasets
- efficient pre-processing to free you from memory constraints
Try it out at:
Tweet media one
1
152
588
@huggingface
Hugging Face
5 years
The ultimate guide to encoder-decoder models! Today, we're releasing part one, explaining how they work and why they have become indispensable for NLG tasks such as summarization and translation. Subscribe for the full series:
Tweet media one
5
141
580
@huggingface
Hugging Face
3 years
🖌️ Stable Diffusion meets 🧨Diffusers! Releasing diffusers==0.2.2 with full support for @StabilityAI's Stable Diffusion & schedulers 🔥 Google Colab: 👉 Code snippet 👇
Tweet media one
7
120
575
@huggingface
Hugging Face
1 year
Hugging Face 🫶 @GoogleColab. With the latest release of huggingface_hub, you don't need to manually log in anymore. Create a secret once and share it with every notebook you run 🤗 pip install --upgrade huggingface_hub. Check it out! 👇
5
107
563
@huggingface
Hugging Face
5 years
Today we're happy to release four new official notebook tutorials available in our documentation and in colab thanks to @MorganFunto to get started with tokenizers and transformer models in just seconds! (1/6)
Tweet media one
10
151
551
@huggingface
Hugging Face
6 years
The 1.5-billion-parameter GPT-2 (aka gpt2-xl) is up:
✅ in the transformers repo:
✅ try it out live in Write With Transformer 🦄
Coming next:
🔘 Detector model based on RoBERTa
Thanks @OpenAI @Miles_Brundage @jackclarkSF and all.
9
149
546
@huggingface
Hugging Face
6 years
This is SO meta 🤓. We trained a generative language model on a dataset of ArXiv NLP papers. You can now get a neural net to write your papers for (with?) you 🔥. We heard from a few researchers that they're already using it in submitted papers.
Tweet media one
12
140
534
@huggingface
Hugging Face
5 months
@deepseek_ai Congratulations on the stellar release! 🤩 The model checkpoints and a detailed report - truly Christmas is here!
4
17
550
@huggingface
Hugging Face
3 years
Transformers v4.22 is out, and includes the first VIDEO models! 🎥
💥 VideoMAE: masked auto-encoders for video
💥 X-CLIP: CLIP for video-language
Other nice goodies:
💥 Swin Transformer v2
💥 Pegasus-X
💥 Donut
💥 MobileViT
and macOS support (device="mps")!
Tweet media one
2
95
534
@huggingface
Hugging Face
2 years
TRL 🤗 Hugging Face. Excited to announce that we're doubling down on our efforts to democratize RLHF and reinforcement learning with TRL, a new addition to the @huggingface family, developed and led by team member @lvwerra 🎉🎉 Train your first RLHF model 👉
Tweet media one
7
121
520
@huggingface
Hugging Face
4 years
🔥JAX meets Transformers🔥 @GoogleAI's JAX/Flax library can now be used as Transformers' backbone ML library. JAX/Flax makes distributed training on TPU effortless and highly efficient! 👉 Google Colab: 👉 Runtime evaluation:
Tweet media one
3
109
519
@huggingface
Hugging Face
4 years
The new SOTA is in Transformers! DeBERTa-v2 beats the human baseline on SuperGLUE and reaches up to a crazy 91.7% dev accuracy on the MNLI task. Beats T5 while being 10x smaller! DeBERTa-v2 was contributed by @Pengcheng2020 from @MSFTResearch. Try it directly on the hub:
Tweet media one
5
110
523
@huggingface
Hugging Face
5 years
1/4. Four NLP tutorials are now available on @kaggle! It's now easier than ever to leverage tokenizers and transformer models like BERT, GPT2, RoBERTa, XLNet, DistilBERT, … for your next competition! 💪💪💪 #NLProc #NLP #DataScience #kaggle
3
146
514
@huggingface
Hugging Face
2 years
Zephyr 7b beats Llama 70b on MT Bench 🤯🤯🤯
Tweet media one
11
83
510
@huggingface
Hugging Face
2 years
📣 Calling all game dev and AI enthusiasts! 🎮 Already 400 people signed up for the first Open Source AI Game Jam, where you'll use AI tools to make a game in a weekend 🔥 Sign up here 👉 What AI tools? Let's focus today on audio tools 🔊 ⬇️
Tweet media one
9
120
517
@huggingface
Hugging Face
4 years
We've heard your requests! Over the past few months, we've been working on a Hugging Face Course! The release is imminent. Sign up for the newsletter to know when it comes out: Sneak peek: Transfer Learning with @GuggerSylvain:
9
122
509
@huggingface
Hugging Face
1 year
The Open Source community is amazing 🤗.
19
47
486
@huggingface
Hugging Face
4 years
GPT-Neo, the #OpenSource cousin of GPT3, can do practically anything in #NLP, from sentiment analysis to writing SQL queries: just tell it what to do, in your own words. 🤯 How does it work? 🧐 Want to try it out? 🎮 👉
8
146
494
@huggingface
Hugging Face
3 years
Scikit-Learn and 🤗 join forces! With a growing number of tabular classification & regression checkpoints, we believe statistical ML has its place on the HF Hub. We're excited to partner with sklearn, the statistical-ML champion, and move forward together.
Tweet media one
4
92
501
@huggingface
Hugging Face
5 years
Thanks to @srush_nlp, we now have an example of a training module for NER leveraging transformers. Under 300 lines of code, with support for GPUs and TPUs thanks to @PyTorchLightnin! Colab: Example:
Tweet media one
4
132
495
@huggingface
Hugging Face
5 years
🔥Transformers' first-ever end-to-end multimodal demo was just released, leveraging LXMERT, the SOTA model for visual Q&A! Model by @HaoTan5 and @mohitban47, with an impressive implementation in Transformers by @avalmendoz (@UNCnlp). Notebook available here: 🤗
6
128
502
@huggingface
Hugging Face
3 years
Transformers v4.13.0 is out and it is *big*:
Vision:
- 🖼️ SegFormer
- 🖨️ ImageGPT
Audio:
- 🔡 Language model support for ASR
Multimodal:
- ⚖️ Vision-text dual encoders
NLP:
- 🔣 mLUKE
- 🏅 DeBERTa-v3
Trainer:
- 1⃣6⃣ The Trainer now supports BF16/TF32!
🌠 New doc frontend 🌠
Tweet media one
5
89
497
@huggingface
Hugging Face
4 years
TODAY'S A BIG DAY. Spaces are now publicly available. Build, host, and share your ML apps on @huggingface in just a few minutes. There's no limit to what you can build. Be creative, and share what you make with the community. 🙏 @streamlit and @gradio
Tweet media one
4
135
491
@huggingface
Hugging Face
3 years
Open Source.
9
42
476
@huggingface
Hugging Face
5 years
👋 To all JS lovers: NLP is more accessible than ever! You can now leverage the power of DistilBERT-cased for Question Answering w/ just 3 lines of code!!! 🤗 You can even run the model remotely w/ the built-in @TensorFlow Serving compatibility 🚀
Tweet media one
10
125
493
@huggingface
Hugging Face
9 months
Which is your favorite open LLM? Why? 🤗.
129
36
478
@huggingface
Hugging Face
2 years
At Hugging Face, we are working to enable you to easily build and serve your own LLMs 🧑‍💻👨‍💻👩‍💻. In this blog, we talk about the amazing world of open-source LLMs, the challenges, and how the Hugging Face ecosystem can help you 🪐. Read about them here 👉
Tweet media one
7
116
476
@huggingface
Hugging Face
4 years
We're thrilled to partner with to create some great new content for their NLP Specialization on Coursera! . With this update, you can access exciting new material and lectures that cover the state of the art in NLP 🧑‍🏫.
5
76
470
@huggingface
Hugging Face
4 years
🥁 We can't wait to share our new inference product with you! 🤩
- it achieves 1ms latency on Transformer models 🏎
- you can deploy it in your own infrastructure ⚡️
- we call it: 🤗 Infinity 🚀
📅 Join us for a live event and demo on 9/28!
Tweet media one
13
62
480
@huggingface
Hugging Face
5 years
Happy to officially include DialoGPT from @MSFTResearch in 🤗transformers (see docs: ). DialoGPT is the first conversational response model added to the library. Now you can build a state-of-the-art chatbot in just 10 lines of code 👇👇👇
Tweet media one
5
117
472
@huggingface
Hugging Face
5 years
Transformers v3.1.0 is out, the first PyPI release with 💫 PEGASUS, DPR, mBART 💫
📖 New & simpler docs and tutorials
🎤 Dialogue & zero-shot pipelines
⭐️ New encoder-decoder architectures: Bert2GPT2, Roberta2Roberta, Longformer2Roberta, …
📕 Named outputs:
Tweet media one
8
123
467
@huggingface
Hugging Face
3 years
Last week, @MetaAI introduced NLLB-200: a massive translation model supporting 200 languages. Models are now available through the Hugging Face Hub, using 🤗Transformers' main branch. Models on the Hub: Learn about NLLB-200:
Tweet media one
6
140
458
@huggingface
Hugging Face
6 years
Our DistilBERT paper just got accepted at NeurIPS 2019's EMC2 workshop!
- 40% smaller, 60% faster than BERT
- 97% of the performance on GLUE
We also distilled GPT2 into an 82M-parameter model 💥 All the weights are available in TF2.0 @tensorflow here:
Tweet media one
4
97
463
@huggingface
Hugging Face
3 years
Machine learning demos are increasingly a vital part of releasing a model. Demos allow anyone, not just ML engineers, to try a model, give feedback on predictions, and build trust. That's why we are thrilled to announce @Gradio 3.0: a grounds-up redesign of the Gradio library 🥳
Tweet media one
7
92
455
@huggingface
Hugging Face
4 years
🤗Transformers are starting to work with structured databases! We just released 🤗Transformers v4.1.1 with TAPAS, a multi-modal model for question answering on tabular data from @googleAI. Try it out through transformers or our inference API:
Tweet media one
5
92
453
@huggingface
Hugging Face
2 years
🚨Exciting news! Next week, we’ll be launching a brand-new Audio Course! 🤗 Sign up today and join us for a LIVE course launch event featuring amazing guests like @DynamicWebPaige, Seokhwan Kim, and @functiontelechy! ⚡️
Tweet media one
4
96
445
@huggingface
Hugging Face
5 years
Happy to announce we partnered with @onnxai @onnxruntime @microsoft to make state-of-the-art inference up to 5x faster 🚀 NLP for all people and organizations! #msbuild
3
102
441
@huggingface
Hugging Face
5 years
Our API now includes a brand new pipeline: zero-shot text classification. This feature lets you classify sequences into the specified class names out-of-the-box w/o any additional training, in a few lines of code! 🚀 Try it out (and share screenshots 📷):
Tweet media one
Tweet media two
Tweet media three
Tweet media four
12
106
439
@huggingface
Hugging Face
4 years
🤗We are going to invest more in @tensorflow in 2021! If you want to take part in building the fastest-growing NLP open-source library, join us:
9
78
422
@huggingface
Hugging Face
4 years
20,000+ machine learning models connected to 3,000+ apps? Hugging Face meets Zapier! 🤯🤯🤯. With the Hugging Face API, you can now easily connect models right into apps like Gmail, Slack, Twitter, and more: [1/2]
Tweet media one
7
77
423
@huggingface
Hugging Face
2 years
🧨Diffusers supports Stable Diffusion 2! Run @StabilityAI's Stable Diffusion 2 with zero changes to your code using your familiar diffusers API. Everything is supported: attention optimizations, fp16, img2image, swappable schedulers, and more 🤗
Tweet media one
5
82
434
@huggingface
Hugging Face
1 year
We're excited to collaborate with the European Space Agency for the release of MajorTOM, the largest ML-ready Sentinel-2 image dataset! 🚀 It covers 50% of the Earth. 2.5 trillion pixels of open source! 🤗👐🌌🚀🌏
@esa
European Space Agency
1 year
Our @ESA_EO Φ-lab has released, in partnership with @huggingface, the first dataset of 'MajorTOM', or the Terrestrial Observation Metaset, the largest community-oriented and machine-learning-ready collection of @CopernicusEU #Sentinel2 images ever published and covering over 50%
Tweet media one
6
68
413
@huggingface
Hugging Face
5 years
Transformers v2.9 is out, with a built-in Trainer and TFTrainer 🔥 This let us reorganize the example scripts completely for a cleaner codebase.
- Same user-facing API for PyTorch and TF 2
- Support for GPU, Multi-GPU, and TPU
- Easier than ever to share your fine-tuned models
Tweet media one
6
124
426