
Clem Delangue 🤗
@ClemDelangue
Followers: 1K · Following: 85 · Media: 16 · Statuses: 68
Co-founder & CEO at Hugging Face 🤗. We teach computers to understand human language.
Brooklyn, NY
Joined October 2018
RT @soumithchintala: The first full paper on @pytorch after 3 years of development. It describes our goals, design principles, technical de…
0 replies · 415 retweets · 0 likes
RT @Thom_Wolf: Interesting work (and a nice large and clean dataset as well, looking forward to seeing it released): "Compressive Transformers…"
0 replies · 79 retweets · 0 likes
RT @algo_diver: Some more results. Now I made it fully supported by all kinds of models and vocabs. Good experience to use @huggingface wi…
0 replies · 1 retweet · 0 likes
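
The tweet doesn't include its code; as a minimal sketch of what "all kinds of models and vocabs" looks like in practice, the transformers Auto classes pair each checkpoint with its matching architecture and vocabulary (the checkpoint names below are standard Hub ids, not from the tweet):

```python
# A minimal sketch, assuming today's transformers Auto classes: the same two
# calls resolve many architectures and their matching vocabularies.
from transformers import AutoModel, AutoTokenizer

for name in ["bert-base-uncased", "gpt2", "xlnet-base-cased"]:
    tokenizer = AutoTokenizer.from_pretrained(name)  # picks the right vocab
    model = AutoModel.from_pretrained(name)          # picks the right architecture
    ids = tokenizer("Hello world", return_tensors="pt")
    print(name, model(**ids).last_hidden_state.shape)
```
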
RT @Thom_Wolf: The @SustaiNLP2020 workshop at #EMNLP2020 will try to remove a little bit of SOTA addiction from NLP research 😉. We'll promo…
0 replies · 34 retweets · 0 likes
RT @fchollet: Perhaps a great opportunity to use @huggingface's TF 2.0 Transformer implementations :)
0 replies · 9 retweets · 0 likes
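
For context, the TF 2.0 implementations the tweet points to are tf.keras models, so they slot into the usual Keras workflow; a short sketch (the checkpoint name and hyperparameters are illustrative, not from the tweet):

```python
# A short sketch: transformers' TF* classes are tf.keras models, so they can
# be compiled and fit like any other Keras model.
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

batch = tokenizer(["a tiny example"], return_tensors="tf", padding=True)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(...) would fine-tune it like any other Keras model.
print(model(batch).logits.shape)
```
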
RT @timothy_lkh_: Happy to have a small PR accepted to the HuggingFace Transformer library demonstrating substantial mixed precision speed-…
0 replies · 3 retweets · 0 likes
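
The PR itself isn't shown here; as a sketch of the general technique, mixed precision runs the forward pass in fp16 where safe and scales the loss to avoid underflow (the sketch below uses torch.cuda.amp, which postdates the tweet, when NVIDIA apex was the common route; model and hyperparameters are illustrative):

```python
# A minimal mixed-precision training step with torch.cuda.amp.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

batch = tokenizer(["a tiny example"], return_tensors="pt", padding=True).to("cuda")
labels = torch.tensor([1], device="cuda")

with torch.cuda.amp.autocast():       # run the forward pass in fp16 where safe
    loss = model(**batch, labels=labels).loss
scaler.scale(loss).backward()         # backward on the scaled loss
scaler.step(optimizer)
scaler.update()
```
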
RT @julien_c: GPT-2 on device is blazing fast on iPhone 11 ⚡️. Core ML 3 is officially out so we can do state-of-the-art text generation on…
0 replies · 152 retweets · 0 likes
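
The demo in the tweet shipped its own Swift/Core ML 3 pipeline; as a rough sketch of one route to on-device GPT-2 using today's coremltools converter (the wrapper class, fixed 64-token shape, and file names are illustrative assumptions):

```python
# Hypothetical sketch: trace GPT-2 with TorchScript, then convert to Core ML.
import numpy as np
import torch
import coremltools as ct
from transformers import GPT2LMHeadModel

class GPT2Logits(torch.nn.Module):
    """Wrapper so the traced graph takes token ids and returns one tensor."""
    def __init__(self):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained("gpt2", use_cache=False)

    def forward(self, input_ids):
        return self.gpt2(input_ids=input_ids).logits

model = GPT2Logits().eval()
example = torch.zeros((1, 64), dtype=torch.long)  # fixed 64-token context
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=(1, 64), dtype=np.int32)],
    convert_to="mlprogram",
)
mlmodel.save("gpt2.mlpackage")  # ready to drop into an Xcode project
```
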
RT @kyoun: DistilBERT (huggingface): distilled from BERT base down to 6 layers (a 40% reduction). Inference is 60% faster while retaining about 95% of accuracy on GLUE. Trained in roughly 3.5 days on 8× 16GB V100 GPUs. The hidden size stays at 768, and the lay…
0 replies · 23 retweets · 0 likes
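
A small sketch that checks the size claim from the tweet directly, comparing DistilBERT with BERT base (6 layers vs. 12, roughly 40% fewer parameters):

```python
# Compare parameter counts and depths of BERT base and DistilBERT.
from transformers import AutoModel

bert = AutoModel.from_pretrained("bert-base-uncased")
distil = AutoModel.from_pretrained("distilbert-base-uncased")

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"bert-base:  {count(bert) / 1e6:.0f}M params, {bert.config.num_hidden_layers} layers")
print(f"distilbert: {count(distil) / 1e6:.0f}M params, {distil.config.n_layers} layers")
```
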
RT @julien_c: 1,060 days ago, @Thom_Wolf and I launched a Deep learning for NLP study group:
medium.com
A remote study group to Stanford’s CS224d “Deep learning for NLP” class
0 replies · 16 retweets · 0 likes
RT @huggingface: 💃PyTorch-Transformers 1.1.0 is live💃. It includes RoBERTa, the transformer model from @facebookai, current state-of-the-ar…
0 replies · 190 retweets · 0 likes
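
A quick sketch of trying the newly added RoBERTa checkpoint; note the fill-mask pipeline and the "roberta-base" name come from today's transformers library, not necessarily the pytorch-transformers 1.1.0 API the tweet refers to:

```python
# RoBERTa masked-LM demo; RoBERTa's mask token is "<mask>".
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
for pred in fill("Hugging Face is creating a <mask> that the community uses."):
    print(f"{pred['token_str']:>12}  (score={pred['score']:.3f})")
```
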
RT @Thom_Wolf: A question I get from time to time is how to convert a pretrained TensorFlow model to PyTorch easily and reliably. We're st…
medium.com
Friends and users of our open-source tools are often surprised how fast 🚀 we reimplement the latest SOTA pretrained TensorFlow models to…
0 replies · 117 retweets · 0 likes
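
The linked post describes the conversion process in depth; a hedged sketch of the user-facing side is transformers' from_tf flag, which loads TensorFlow weights into the PyTorch classes (the checkpoint path below is a placeholder):

```python
# Load TF weights into a PyTorch model, then re-save as a native checkpoint.
from transformers import BertModel

# "path/to/tf_checkpoint" is a placeholder for a directory containing a TF
# checkpoint plus its config.json; from_tf=True converts the weights on load.
model = BertModel.from_pretrained("path/to/tf_checkpoint", from_tf=True)
model.save_pretrained("bert-pytorch")  # now a plain PyTorch checkpoint
```
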
RT @Thom_Wolf: New release of Transformers repo is shaping up & I'm very excited! Gifts for all: - SOTA Lovers: new XLNet & XLM archi + 6 ne…
0 replies · 112 retweets · 0 likes
RT @julien_c: 🔥 Thrilled to release our Swift Core ML implementation of BERT for question answering 🔥🔥. Transformers models now also live o…
0 replies · 92 retweets · 0 likes
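
The release itself is Swift/Core ML; as a rough Python analogue of the same task, here is extractive question answering with a SQuAD-style model via the transformers pipeline (the question and context strings are made up for illustration):

```python
# Extractive QA: the model picks an answer span out of the given context.
from transformers import pipeline

qa = pipeline("question-answering")  # defaults to a SQuAD fine-tuned model
result = qa(
    question="Where do Transformers models also live now?",
    context="Transformers models now also live on device, running under Core ML on iOS.",
)
print(result["answer"], result["score"])
```
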
Best Long Paper #naacl2019: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. #NLProc
0 replies · 3 retweets · 18 likes
RT @soldni: Absolutely PACKED room for @seb_ruder, @Thom_Wolf, @swabhz, and @mattthemathman’s tutorial on transfer learning for NLP #NAACL2…
0 replies · 11 retweets · 0 likes