Greg Chatel (@rodgzilla)
143 Followers · 601 Following · 15 Media · 300 Statuses
C.S. PhD, Lead R&D @ Disaitek, AI innovator @ Intel.
France · Joined April 2012
Mathematics and beauty. Chaotic systems. Strange attractors. More info: https://t.co/cqSv9aaB5s
0 replies · 47 retweets · 223 likes
JupyterLab 3.0 is released! - visual debugger - support for multiple display languages - table of contents for notebooks - improved extension system. Check out the announcement blog post. https://t.co/pUBiZEYH4c
blog.jupyter.org
The 3.0 release of JupyterLab brings many new features to users and substantial improvements to the extension distribution system.
14 replies · 414 retweets · 1K likes
This is probably the most humbling & awe-inspiring image of the #Universe I know. It's an infrared image by the #Spitzer #Space #telescope, as wide as the full moon. The dots aren't stars. They're galaxies. Each dot is hundreds of billions of stars. (1/n) @Todd_Scheve #astronomy
37 replies · 725 retweets · 2K likes
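The scale in this tweet can be made concrete with a back-of-the-envelope calculation. Both counts below are illustrative assumptions, not figures from the tweet, which gives neither the number of galaxies in the frame nor an exact star count:

```python
# Rough order-of-magnitude estimate of the stars implied by the image.
galaxies_in_frame = 10_000        # assumed number of dots (galaxies) in the frame
stars_per_galaxy = 200e9          # "hundreds of billions" of stars per galaxy

total_stars = galaxies_in_frame * stars_per_galaxy
print(f"{total_stars:.1e} stars")  # → 2.0e+15 stars in a moon-sized patch of sky
```

Even with deliberately conservative assumptions, a single full-moon-sized patch of sky works out to quadrillions of stars.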
"A vegan diet is probably the single biggest way to reduce your impact on planet Earth, not just greenhouse gases, but global acidification, eutrophication, land use and water use … far bigger than cutting down on your flights or buying an electric car"
theguardian.com
Biggest analysis to date reveals huge footprint of livestock - it provides just 18% of calories but takes up 83% of farmland
102 replies · 430 retweets · 2K likes
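The article's two headline figures imply a striking efficiency gap, which a quick ratio makes explicit (the 18%/83% numbers come from the tweet; the derived ratio is a simple consequence, not a figure from the article):

```python
# Land-use efficiency implied by the article's figures:
# livestock supplies 18% of calories from 83% of farmland.
livestock_calorie_share = 0.18
livestock_land_share = 0.83

plant_calorie_share = 1 - livestock_calorie_share  # 0.82
plant_land_share = 1 - livestock_land_share        # 0.17

# Calories delivered per unit of farmland, for each sector
livestock_efficiency = livestock_calorie_share / livestock_land_share
plant_efficiency = plant_calorie_share / plant_land_share

print(f"plant farming is ~{plant_efficiency / livestock_efficiency:.0f}x "
      "more calorie-efficient per unit of land")  # → ~22x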
We wrote a longer version of the @huggingface🤗transformers paper (EMNLP demos). It goes through the library and model hub. A lot has happened in the last 9 months! Paper: https://t.co/Rm0JvERveT Consider citing (not linking) in your next paper: https://t.co/91N1JCOXTU
1 reply · 121 retweets · 506 likes
I'm beyond stoked to launch the v2 of the @huggingface model hub today 🔥 Each of our 2,000 models now has an inference widget that lets you try it (text-classification, token-classification, translation, etc.) directly from the model page. It's all powered by the community 💖
4 replies · 99 retweets · 398 likes
A surprising result: We found that smooth activation functions are better than ReLU for adversarial training and can lead to substantial improvements in adversarial robustness. https://t.co/qehDOtrLI6
20 replies · 245 retweets · 1K likes
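The intuition behind this result is that ReLU's gradient is discontinuous at zero, while a smooth activation such as softplus has a gradient that varies continuously, which matters when adversarial training repeatedly backpropagates through the activation. A minimal numerical sketch of that difference (softplus here is just one example of a smooth activation; the paper may use others):

```python
import math

def relu(x):
    return max(x, 0.0)

def softplus(x):
    # Smooth stand-in for ReLU: log(1 + exp(x)); its derivative is the sigmoid.
    return math.log1p(math.exp(x))

def numerical_grad(f, x, eps=1e-6):
    # Centered finite difference as a stand-in for backprop through f.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

for x in (-0.1, 0.0, 0.1):
    # ReLU's gradient snaps from 0 to 1 across the origin;
    # softplus's gradient passes smoothly through 0.5.
    print(f"x={x:+.1f}  relu'≈{numerical_grad(relu, x):.3f}  "
          f"softplus'≈{numerical_grad(softplus, x):.3f}")
```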
Check out our new work "Labelling unlabelled videos from scratch with multi-modal self-supervision" by @y_m_asano @mandelapatrick_ @chrirupp and Andrea Vedaldi in collaboration with @facebookai! See below for our automatically discovered clusters on VGG-Sound! https://t.co/JJ1XtevlRg
1 reply · 10 retweets · 39 likes
Unsupervised Translation of Programming Languages. Feed a model with Python, C++, and Java source code from GitHub, and it automatically learns to translate between the 3 languages in a fully unsupervised way. https://t.co/FpUL886KS7 with @MaLachaux @b_roziere @LowikChanussot
51 replies · 976 retweets · 3K likes
Long-range sequence modeling meets 🤗 transformers! We are happy to officially release Reformer, a transformer that can process sequences as long as 500,000 tokens from @GoogleAI. Thanks a million, Nikita Kitaev and @lukaszkaiser! Try it out here: https://t.co/GwvMrt9lYk
7 replies · 249 retweets · 972 likes
My face is embedded in this image. When it is blurred, e.g. seen from a distance or without a pair of glasses, my face is discernible. I have learned by myself how to make this type of illusion, but I do not know whom I should cite for this effect. Does anyone have information?
81 replies · 262 retweets · 2K likes
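The effect described here matches what is commonly called a "hybrid image": the low spatial frequencies of one picture are blended with the high frequencies of another, so the hidden picture emerges only when the detail is blurred away. A 1-D toy sketch of that mechanism, with made-up signals standing in for the two images:

```python
def box_blur(signal, radius=2):
    """Simple low-pass filter: average each sample over a sliding window."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

face = [0.0, 0.2, 0.8, 1.0, 0.8, 0.2, 0.0, 0.0]     # stand-in "hidden" image
texture = [0.5, 0.4, 0.6, 0.5, 0.4, 0.6, 0.5, 0.4]  # stand-in "visible" detail

low = box_blur(face)  # keep only the coarse structure of the hidden image
high = [t - b for t, b in zip(texture, box_blur(texture))]  # keep only fine detail
hybrid = [l + h for l, h in zip(low, high)]

# Blurring the hybrid (i.e. viewing it from a distance, or without glasses)
# suppresses the high-frequency component, so the coarse "face" re-emerges.
recovered = box_blur(hybrid)
```

In 2-D the same recipe applies per pixel with a Gaussian blur instead of the box filter; the sharp texture dominates up close, and the blurred face dominates from afar.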
T5 is now officially included in 🤗Transformers v2.7.0 thanks to our joint work with @colinraffel & @PatrickPlaten A powerful encoder-decoder by @GoogleAI which natively handles many NLP tasks as text-to-text tasks Just ask it to "Translate" or "Summarize" and enjoy the result!
12 replies · 159 retweets · 661 likes
🔥 Google AI weights, directly inside @huggingface transformers 🔥 https://t.co/D7nX5rnZSX
https://t.co/Fi3sbDF2mW
Efficient mini-BERT models from Google Research, now available at https://t.co/FfHBNs1N7k thanks to @iuliaturc / @GoogleAI ! 24 sizes, pre-trained directly with the MLM loss, and competitive with more elaborate pre-training strategies involving distillation ( https://t.co/AAJK3WNZsQ).
1 reply · 13 retweets · 90 likes
This is a recently discovered illusion, and it’s really quite striking. The strange effect is called the ‘curvature blindness’ illusion https://t.co/O3FdChvac3
5 replies · 278 retweets · 969 likes
FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). https://t.co/QuP6oN7iCS Collaboration with Kihyuk Sohn, @chunliang_tw @ZizhaoZhang Nicholas Carlini @ekindogus @Han_Zhang_ @colinraffel
5 replies · 234 retweets · 853 likes