Ian Goodfellow

@goodfellow_ian

Followers 300,071 · Following 1,116 · Media 138 · Statuses 2,764

Research Scientist at DeepMind. Opinions my own. Inventor of GANs. Lead author of

San Francisco, CA
Joined September 2016
@goodfellow_ian
Ian Goodfellow
2 years
I'm excited to announce that I've joined DeepMind! I'll be a research scientist in @OriolVinyalsML 's Deep Learning team.
152
246
7K
@goodfellow_ian
Ian Goodfellow
6 years
One of my favorite samples from the Progressive GANs paper is this one from the "cat" category. Apparently some of the cat training photos were memes with text. The GAN doesn't know what text is so it has made up new text-like imagery in the right place for a meme caption.
Tweet media one
62
3K
6K
@goodfellow_ian
Ian Goodfellow
6 years
I never heard back from MIT. I got rejected from CMU. I was accepted to U of T but not to work with the PI I wanted there. I got "honorable mention" for NSF GRFP but not actual money. Don't let temporary failures discourage you.
46
915
4K
@goodfellow_ian
Ian Goodfellow
5 years
4.5 years of GAN progress on face generation.
Tweet media one
36
1K
4K
@goodfellow_ian
Ian Goodfellow
7 years
CycleGAN turning a horse video into a zebra video
38
2K
3K
@goodfellow_ian
Ian Goodfellow
6 years
4 years of GAN progress (source: )
Tweet media one
29
964
3K
@goodfellow_ian
Ian Goodfellow
4 years
This exciting new PyTorch library includes quite a lot of the GANs I've featured in my talks over the past few years, all in one place!
10
433
2K
@goodfellow_ian
Ian Goodfellow
5 years
OctConv is a simple replacement for the traditional convolution operation that gets better accuracy with fewer FLOPs
Tweet media one
16
572
2K
@goodfellow_ian
Ian Goodfellow
5 years
I’m in Fortune’s 40 under 40:
97
105
2K
@goodfellow_ian
Ian Goodfellow
6 years
ML researchers, reviewers, and press coverage of ML need to get a lot more serious about the statistical robustness of results and the effect of hyperparameters. This study shows that many papers over the last year or so were just observing sampling error, not true improvement.
@karol_kurach
Karol Kurach
6 years
Are GANs Created Equal? A Large-Scale Study
3
67
193
33
655
2K
@goodfellow_ian
Ian Goodfellow
6 years
By looking at this image, you can see how sensitive your own eyes are to contrast at different frequencies (taller apparent peaks=more sensitivity at that frequency). It's like a graph that is made by perceiving the graph itself. h/t @catherineols
Tweet media one
25
529
1K
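A rough sketch of how a Campbell–Robson style chart like this can be generated (the frequency and contrast ranges below are assumptions chosen for illustration, not taken from the image):

```python
import numpy as np
import matplotlib.pyplot as plt

# Spatial frequency rises left to right, contrast falls bottom to top,
# so the visible envelope traces the viewer's own sensitivity curve.
w, h = 1024, 512
x = np.linspace(0.0, 1.0, w)
y = np.linspace(0.0, 1.0, h)[:, None]

f_lo, f_hi = 2.0, 120.0                        # cycles across the image (assumed range)
freq = f_lo * (f_hi / f_lo) ** x               # log-spaced instantaneous frequency
phase = 2 * np.pi * np.cumsum(freq) / w        # integrate frequency -> smooth chirp

c_lo, c_hi = 0.003, 1.0                        # contrast range (assumed)
contrast = c_hi * (c_lo / c_hi) ** y           # full contrast at the bottom row

img = 0.5 + 0.5 * contrast * np.sin(phase)
plt.imshow(img, cmap="gray", vmin=0.0, vmax=1.0, origin="lower")
plt.axis("off")
plt.show()
```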
@goodfellow_ian
Ian Goodfellow
5 years
Whoa! It turns out that famous examples of NLP systems succeeding and failing were very misleading. “Man is to king as woman is to queen” only works if the model is hardcoded not to be able to say “king” for the last word.
@rikvannoord
Rik van Noord
5 years
1/7 Do word embeddings really say that man is to doctor as woman is to nurse? Apparently not. Check out this thread for a description of a short paper I co-wrote with Malvina Nissim and Rob van der Goot, available here: #NLProc #bias
24
403
1K
29
386
1K
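The "hardcoding" being referred to is the convention of excluding the three query words from the candidate set when answering an analogy. A minimal NumPy sketch, assuming `emb` is some dictionary mapping words to embedding vectors:

```python
import numpy as np

def analogy(emb, a, b, c, exclude_inputs=True):
    """Answer 'a is to b as c is to ?' by ranking words by cosine similarity
    to b - a + c. `emb` is an assumed dict mapping words to vectors; the
    `exclude_inputs` flag is the 'hardcoding' the tweet refers to."""
    query = emb[b] - emb[a] + emb[c]
    query = query / np.linalg.norm(query)
    best_word, best_score = None, -np.inf
    for word, vec in emb.items():
        if exclude_inputs and word in (a, b, c):
            continue                                  # skip the query words themselves
        score = float(query @ (vec / np.linalg.norm(vec)))
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# With exclude_inputs=False, analogy(emb, "man", "king", "woman") typically
# returns "king" itself rather than "queen".
```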
@goodfellow_ian
Ian Goodfellow
7 years
GANs for generating images of how clothes will fit. Only two of these images are photos.
Tweet media one
19
606
1K
@goodfellow_ian
Ian Goodfellow
5 years
ML Twitter, what are your favorite papers / other resources about the class imbalance problem?
65
216
1K
@goodfellow_ian
Ian Goodfellow
6 years
GANs can be used to automatically design dental crowns that are then actually manufactured and used in the physical world. Crowns need to be made specifically for each patient and need to fit correctly with the other teeth and support biting and chewing.
Tweet media one
9
393
1K
@goodfellow_ian
Ian Goodfellow
4 years
Apple now has an AI/ML residency program! I'm looking forward to working with our first class of residents.
@MrRennaker
Michael Rennaker
4 years
Thrilled to announce a new program designed to help experts in applied fields build ML-powered products and experiences. Introducing the AI/ML residency program:
18
208
764
11
172
1K
@goodfellow_ian
Ian Goodfellow
5 years
Philip Wang at Uber set up a site to show a new imaGANary person every time you refresh the page
49
430
1K
@goodfellow_ian
Ian Goodfellow
6 years
Adversarial examples that fool both human and computer vision
Tweet media one
36
431
1K
@goodfellow_ian
Ian Goodfellow
6 years
Tweet media one
27
132
1K
@goodfellow_ian
Ian Goodfellow
6 years
Two years of GAN progress on class-conditional ImageNet-128
Tweet media one
15
350
1K
@goodfellow_ian
Ian Goodfellow
5 years
An exciting property of style-based generators is that they have learned to do 3D viewpoint rotations around objects like cars. These kinds of meaningful latent interpolations show that the model has learned about the structure of the world.
8
356
1K
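Latent interpolations of this kind are usually produced by walking between two latent codes and decoding each intermediate point. A minimal sketch (the `generator` in the usage comment is hypothetical):

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent codes z0 and z1, t in [0, 1]."""
    u0, u1 = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1                  # vectors nearly parallel
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# frames = [generator(slerp(z_a, z_b, t)) for t in np.linspace(0, 1, 16)]  # hypothetical generator
```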
@goodfellow_ian
Ian Goodfellow
4 years
The quiet semisupervised revolution continues
@D_Berthelot_ML
David Berthelot
4 years
FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). Collaboration with Kihyuk Sohn, @chunliang_tw @ZizhaoZhang Nicholas Carlini @ekindogus @Han_Zhang_ @colinraffel
Tweet media one
5
243
883
8
213
1K
@goodfellow_ian
Ian Goodfellow
5 years
These style-based generator results look great:
Tweet media one
30
322
1K
@goodfellow_ian
Ian Goodfellow
6 years
Yoshua, Aaron, and I have released the LaTeX template for the Deep Learning book: Useful if you want to follow the same math notation conventions as we do or if you want to put a notation page in your document
8
267
1K
@goodfellow_ian
Ian Goodfellow
6 years
This new family of GAN loss functions looks promising! I'm especially excited about Fig 4-6, where we see that the new loss results in much faster learning during the first several iterations of training. I implemented the RSGAN loss on a toy problem and it worked well.
@jm_alexia
Alexia Jolicoeur-Martineau
6 years
My new paper is out! " The relativistic discriminator: a key element missing from standard GAN" explains how most GANs are missing a key ingredient which makes them so much better and much more stable! #Deeplearning #AI
13
284
949
7
261
1K
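For reference, the RSGAN losses mentioned above are small enough to sketch directly; a NumPy version operating on raw critic outputs for paired real/fake batches might look like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rsgan_discriminator_loss(c_real, c_fake):
    """Relativistic standard GAN: the critic is trained so that real samples
    score higher than the fake samples they are paired with."""
    return -np.mean(np.log(sigmoid(c_real - c_fake) + 1e-12))

def rsgan_generator_loss(c_real, c_fake):
    """The generator is trained to reverse the comparison."""
    return -np.mean(np.log(sigmoid(c_fake - c_real) + 1e-12))
```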
@goodfellow_ian
Ian Goodfellow
5 years
I’m heading to Uruguay next month to teach about generative models at
30
83
994
@goodfellow_ian
Ian Goodfellow
6 years
29
276
979
@goodfellow_ian
Ian Goodfellow
1 year
Thank you to the many people who reached out after my now-deleted tweet last week asking for help with an urgent problem. For everyone still concerned, things are under control now.
38
15
939
@goodfellow_ian
Ian Goodfellow
1 year
I've spent several years studying machine learning security with the goal of making ML reliable before it is used in more and more important contexts. Unfortunately, ML capabilities and adoption are growing much faster than ML robustness.
89
164
896
@goodfellow_ian
Ian Goodfellow
6 years
Forbes listed GANs as one of the best tech innovations of the last three years:
18
193
922
@goodfellow_ian
Ian Goodfellow
6 years
ML paper writing pro-tip: you can download the raw source of any arxiv paper. Click on the "Other formats" link, then click "Download source". This gets you a .tar.gz with all the .tex files, all the image files for the figures in their original resolution, etc.
Tweet media one
Tweet media two
19
281
908
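The same source bundle can be fetched programmatically; a small sketch using the arXiv e-print endpoint (the paper ID in the comment is a placeholder, and some papers ship a single gzipped .tex rather than a tarball, in which case `tarfile` will refuse to open it):

```python
import io
import tarfile
import requests

def download_arxiv_source(arxiv_id: str, out_dir: str = ".") -> None:
    """Fetch the raw LaTeX source bundle for an arXiv paper, i.e. the same
    archive as 'Other formats' -> 'Download source' on the abstract page."""
    resp = requests.get(f"https://arxiv.org/e-print/{arxiv_id}", timeout=30)
    resp.raise_for_status()
    # Usually a (gzipped) tarball containing the .tex files and original figures.
    with tarfile.open(fileobj=io.BytesIO(resp.content), mode="r:*") as tar:
        tar.extractall(out_dir)

# download_arxiv_source("YYMM.NNNNN", out_dir="paper_src")  # placeholder ID
```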
@goodfellow_ian
Ian Goodfellow
6 years
A quick thread on two of my favorite theory hacks for machine learning research
5
253
879
@goodfellow_ian
Ian Goodfellow
6 years
While GANs have been great at generating realistic images from a single category (one GAN for faces, another GAN for buildings), they've always struggled to fit all 1,000 classes of ImageNet with a single GAN. This ICLR submission has done it:
Tweet media one
29
346
853
@goodfellow_ian
Ian Goodfellow
6 years
Self-attention for GANs. No more problems with losing track of how many faces the generator has drawn on the dog.
@gstsdn
augustus odena
6 years
New preprint () by Han Zhang, with @goodfellow_ian and Dimitris Metaxas. Substantially improves the state-of-the-art on the conditional Imagenet synthesis task.
Tweet media one
Tweet media two
7
157
518
13
241
784
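A stripped-down sketch of the self-attention computation over spatial positions (the real SAGAN module uses 1x1 convolutions, reduced channel counts for the queries and keys, and a learned gate initialized at zero; the projection matrices here are assumed inputs):

```python
import numpy as np

def spatial_self_attention(x, w_query, w_key, w_value, gamma=1.0):
    """Self-attention over a flattened feature map.
    x: (N, C) where N = H*W spatial positions; w_*: (C, C) projections."""
    q, k, v = x @ w_query, x @ w_key, x @ w_value
    logits = q @ k.T                                  # (N, N) pairwise scores
    logits -= logits.max(axis=1, keepdims=True)       # for numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over positions
    return x + gamma * (attn @ v)                     # residual connection
```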
@goodfellow_ian
Ian Goodfellow
3 years
This is really cool. Some of my PhD labmates worked on ML for compression back in the pretraining era, and I remember it being really hard to get a compression advantage.
@liu_mingyu
Ming-Yu Liu
3 years
Check out our new work on face-vid2vid, a neural talking-head model for video conferencing that is 10x more bandwidth efficient than H264 arxiv project video @tcwang0509 @arunmallya #GAN
14
151
747
19
100
766
@goodfellow_ian
Ian Goodfellow
6 years
One of my main concerns about machine learning interpretability tools is that they will make people think they understand ML when they don't. People seem to think linear models are interpretable, but no one looks at them and has the intuition that they have adversarial examples
19
226
761
@goodfellow_ian
Ian Goodfellow
5 years
NVIDIA gave me a new T-Rex, signed by Jensen. They are not even for sale yet! Thanks NVIDIA! The pic is with GAN extraordinaire Ming-Yu at the NVIDIA reception last night.
@liu_mingyu
Ming-Yu Liu
5 years
May the power of #TitanRTX with you @goodfellow_ian
Tweet media one
2
12
227
16
49
762
@goodfellow_ian
Ian Goodfellow
6 years
To gain some idea of the far future of ML security, we studied a simple toy problem called "adversarial spheres," simulating a future where advanced ML models are extremely accurate. We find that even then, an adversary can still easily fool them.
14
259
745
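The toy setup is easy to reproduce: two classes living on concentric hyperspheres, with a classifier asked to tell them apart. A minimal data generator (the radii below are illustrative choices, not necessarily the paper's):

```python
import numpy as np

def sample_adversarial_spheres(n, dim, r_inner=1.0, r_outer=1.3, seed=0):
    """Points drawn uniformly from one of two concentric spheres in `dim` dimensions."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)     # project onto the unit sphere
    labels = rng.integers(0, 2, size=n)
    radii = np.where(labels == 1, r_outer, r_inner)
    return x * radii[:, None], labels
```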
@goodfellow_ian
Ian Goodfellow
6 years
An updated 2 year progress pic for ImageNet GANs. New pic by @gstsdn includes latest results by @ajmooch et al.
Tweet media one
7
211
720
@goodfellow_ian
Ian Goodfellow
5 years
If you’re upset that someone didn’t cite your paper, I strongly recommend contacting the authors privately before making a public complaint. Thread:
17
75
722
@goodfellow_ian
Ian Goodfellow
6 years
A math trick I like a lot is the approach to taking derivatives using hyperreal numbers. Thread:
15
169
714
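The computational version of this trick is forward-mode differentiation with an "infinitesimal" carried alongside each value; dual numbers play the role of the hyperreals here. A minimal sketch (only addition and multiplication are implemented):

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0: the bookkeeping behind the
    infinitesimal-derivative trick (forward-mode autodiff)."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.eps + other.eps)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x + eps and read the derivative off the eps coefficient."""
    return f(Dual(x, 1.0)).eps

# derivative(lambda x: 3 * x * x + 2 * x, 5.0) -> 32.0
```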
@goodfellow_ian
Ian Goodfellow
4 years
Updating some slides from last year. “>2,000 papers later, still not really solved” -> “>5,000 papers, still not really solved”
10
47
700
@goodfellow_ian
Ian Goodfellow
6 years
Hadrien Jean has made a series of notes on linear algebra, following the deep learning textbook:
7
241
689
@goodfellow_ian
Ian Goodfellow
6 years
TensorFuzz automates the process of finding inputs that cause some specific testable behavior, like disagreement between float16 and float32 implementations of a neural network
@goodfellow_ian
Ian Goodfellow
6 years
Neural networks are notoriously hard to debug. @gstsdn has developed a new debugging methodology by adapting traditional coverage guided fuzzing techniques to neural networks.
Tweet media one
3
163
505
6
232
686
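As a loose illustration only: a plain random search for a float16/float32 disagreement (TensorFuzz itself is coverage-guided, which this sketch does not attempt; `f` is any function of a NumPy array):

```python
import numpy as np

def find_precision_disagreement(f, x0, steps=1000, sigma=0.01, tol=1e-3, seed=0):
    """Randomly mutate an input until f evaluated on float32 and float16
    versions of it disagrees by more than `tol`. Returns (input, gap) or (None, 0)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=np.float32)
    for _ in range(steps):
        x = x + sigma * rng.standard_normal(x.shape).astype(np.float32)
        gap = abs(float(f(x)) - float(f(x.astype(np.float16))))
        if gap > tol:
            return x, gap
    return None, 0.0
```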
@goodfellow_ian
Ian Goodfellow
6 years
#CVPR2018 I will teach an Introduction to GANs at 8:45 AM in Room 150-ABC at the Perception Beyond the Visible Spectrum workshop. Slides available at
Tweet media one
6
166
682
@goodfellow_ian
Ian Goodfellow
4 years
When I invented adversarial training as a defense against adversarial examples, I focused on making it as cheap and scalable as possible. Eric and collaborators have now upgraded the original cheap version to compete with newer, more expensive versions.
@RICEric22
Eric Wong @ ICLR 24
4 years
1/ New paper on an old topic: turns out, FGSM works as well as PGD for adversarial training!* *Just avoid catastrophic overfitting, as seen in picture Paper: Code: Joint work with @_leslierice and @zicokolter to be at #ICLR2020
Tweet media one
3
60
238
10
114
671
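The "cheap" recipe amounts to generating the FGSM perturbation on the fly and training on the perturbed batch. A toy sketch on logistic regression (the model, learning rate, and epsilon are illustrative choices):

```python
import numpy as np

def fgsm_adversarial_train_step(w, b, X, y, eps=0.1, lr=0.1):
    """One step of FGSM adversarial training for logistic regression:
    perturb each input toward higher loss, then descend on the perturbed batch."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    X_adv = X + eps * np.sign((p - y)[:, None] * w)   # FGSM perturbation of the inputs
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    grad_w = X_adv.T @ (p_adv - y) / len(y)           # gradient of mean cross-entropy
    grad_b = np.mean(p_adv - y)
    return w - lr * grad_w, b - lr * grad_b
```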
@goodfellow_ian
Ian Goodfellow
6 years
GANs for imitating dance moves
6
205
667
@goodfellow_ian
Ian Goodfellow
6 years
The term “deep learning” reminds me of “horseless carriage.” It made sense when introduced, but now that it is the dominant paradigm, it feels quaint to specify that there is no horse. The horse here is of course the shallow model / convex cost constraint.
19
125
653
@goodfellow_ian
Ian Goodfellow
3 years
The Self-Organizing Conference on Machine Learning is returning as a 100% online event for 2020. Nov 30-Dec 4. It will still be small to maintain the group discussion feel. Apply at
18
113
642
@goodfellow_ian
Ian Goodfellow
6 years
Thread on how to review papers about generic improvements to GANs
9
225
643
@goodfellow_ian
Ian Goodfellow
7 years
TensorFlow launches Easy Mode... I mean, Eager Mode
7
231
643
@goodfellow_ian
Ian Goodfellow
7 years
Google Brain Residency has been upgraded to Google AI Residency. Now possible to work with more AI teams at Google.
4
184
634
@goodfellow_ian
Ian Goodfellow
5 years
My team at Apple is hiring in Zurich:
21
131
632
@goodfellow_ian
Ian Goodfellow
5 years
@doomie gmail classifies my emails to myself as not important
10
19
630
@goodfellow_ian
Ian Goodfellow
7 years
CycleGAN learns to turn horses into zebras *without supervision*:
Tweet media one
12
289
610
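The unsupervised part comes from cycle consistency: translating to the other domain and back should recover the original image. A minimal sketch of that term (G and F are assumed generator functions mapping between the two domains; the full objective also adds adversarial losses for each domain):

```python
import numpy as np

def cycle_consistency_loss(x_horses, y_zebras, G, F):
    """L1 cycle-consistency: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y."""
    return (np.mean(np.abs(F(G(x_horses)) - x_horses)) +
            np.mean(np.abs(G(F(y_zebras)) - y_zebras)))
```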
@goodfellow_ian
Ian Goodfellow
6 years
Train ImageNet in 18 minutes for just $40. By my former colleague @yaroslavvb
1
139
598
@goodfellow_ian
Ian Goodfellow
5 years
“Be careful what you wish for”
@MIT_CSAIL
MIT CSAIL
5 years
Describe programming in only six words. We’ll RT all the best ones. Ours: Turning ideas and caffeine into code. #ProgrammingIn6Words #wednesdaywisdom
2K
376
1K
11
60
580
@goodfellow_ian
Ian Goodfellow
5 years
The "assistant professor" title seems especially galling to me: who exactly is the assistant professor assisting? They do the full job.
@goodfellow_ian
Ian Goodfellow
5 years
Academic titles tend to begin with negative adjectives and gradually remove adjectives. "Undergraduate student" -> "graduate student", "assistant professor" -> "associate professor" -> "full professor".
17
27
259
22
55
579
@goodfellow_ian
Ian Goodfellow
6 years
@glagolista GANs n' Roses?
12
25
585
@goodfellow_ian
Ian Goodfellow
6 years
GANs for generating Mario levels!
Tweet media one
7
160
534
@goodfellow_ian
Ian Goodfellow
6 years
I think changing the name of NIPS is the right thing to do. The majority of women in the poll voted for it, and moral leadership shouldn’t be driven by polls anyway.
14
73
546
@goodfellow_ian
Ian Goodfellow
6 years
This paper shows how to make adversarial examples with GANs. No need for a norm ball constraint. They look unperturbed to a human observer but break a model trained to resist large perturbations.
Tweet media one
7
180
534
@goodfellow_ian
Ian Goodfellow
7 years
My copy just arrived!
Tweet media one
19
66
532
@goodfellow_ian
Ian Goodfellow
6 years
The definition of "adversarial examples" I prefer these days is "Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake"
15
132
517
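Concretely, one classic way an attacker does this for a differentiable model is the fast gradient sign method. A model-agnostic sketch (`grad_fn` is a hypothetical function returning the gradient of the loss with respect to the input):

```python
import numpy as np

def fgsm(x, grad_fn, eps=8 / 255):
    """Nudge the input in the direction that increases the model's loss,
    keeping the result in the valid [0, 1] pixel range."""
    return np.clip(x + eps * np.sign(grad_fn(x)), 0.0, 1.0)
```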
@goodfellow_ian
Ian Goodfellow
7 years
NIPS 2017 adversarial examples challenge: Test your defenses against others' adversarial examples and vice versa!
9
215
530
@goodfellow_ian
Ian Goodfellow
4 years
David has released a new paper from an old collaboration. Glad to see it out!
@D_Berthelot_ML
David Berthelot
4 years
Latent Adversarial Generator Code is Out! Code: Arxiv: @docmilanfar @goodfellow_ian
Tweet media one
11
158
724
2
102
514
@goodfellow_ian
Ian Goodfellow
6 years
Check out Adversarial Logit Pairing, the new state of the art defense against adversarial examples on ImageNet, by @harinidkannan @alexey2004 and me:
Tweet media one
3
184
524
@goodfellow_ian
Ian Goodfellow
6 years
I'll present a talk called "Defense Against the Dark Arts" summarizing the state of the art and key research challenges for defenses against adversarial examples. Room 259, 1:30 PM.
Tweet media one
9
153
517
@goodfellow_ian
Ian Goodfellow
6 years
Deep learning for predicting aftershocks of large earthquakes. Besides offering better predictions, interpretations of the model suggest promising directions for new physical theories
Tweet media one
4
170
509
@goodfellow_ian
Ian Goodfellow
5 years
6
138
502
@goodfellow_ian
Ian Goodfellow
6 years
It’s strange to see people defining deep learning as supervised learning via backprop, considering that the 2006 deep learning revolution was originally based on the idea that neither of those things work very well
10
112
496
@goodfellow_ian
Ian Goodfellow
5 years
I originally thought of GANs as an unsupervised learning algorithm, but so far, to create recognizable object categories, they've needed a supervision signal / labeled images. This new work shows how to get them to work well with few labels.
Tweet media one
@MarioLucic_
Mario Lucic
5 years
How to train SOTA high-fidelity conditional GANs using 10x fewer labels? Using self-supervision and semi-supervision! Check out our latest work at @GoogleAI @ETHZurich @TheMarvinRitter @mtschannen @XiaohuaZhai @OlivierBachem @sylvain_gelly
1
69
243
3
138
502
@goodfellow_ian
Ian Goodfellow
6 years
Neural networks are notoriously hard to debug. @gstsdn has developed a new debugging methodology by adapting traditional coverage guided fuzzing techniques to neural networks.
Tweet media one
3
163
505
@goodfellow_ian
Ian Goodfellow
7 years
The arxiv of the future must have comments and open peer review:
17
176
501
@goodfellow_ian
Ian Goodfellow
4 years
Colin was a senior research scientist in my team at Google. He's done great technical work, especially on attention models and semi-supervised / transfer learning, and has been an excellent mentor for many Brain residents / interns. Will definitely be a great PhD advisor.
@colinraffel
Colin Raffel
4 years
I'm starting a professorship in the CS department at UNC in fall 2020 (!!) and am hiring students! If you're interested in doing a PhD @unccs please get in touch. More info here:
82
146
893
2
34
498
@goodfellow_ian
Ian Goodfellow
5 years
Updates about SOCML: 1) I have failed to run a SOCML 2019 2) I’m not quitting, just having a busy year. I intend to run SOCML 2020 and beyond 3) We’re experimenting with a distributed SOCMLx program. See link for details
39
52
491
@goodfellow_ian
Ian Goodfellow
6 years
StarGAN: learning one model that translates between *multiple* domains without supervision (previous works were about translating between two domains without supervision)
@DmitryUlyanovML
Dmitry Ulyanov
6 years
Wow, StarGAN results look good! arXiv: github:
Tweet media one
5
210
486
0
186
483
@goodfellow_ian
Ian Goodfellow
6 years
#CVPR2018 Check out the all-day tutorial on GANs tomorrow: I'll speak at 9AM giving an introduction to GANs. Many great speakers throughout the day!
Tweet media one
9
138
473
@goodfellow_ian
Ian Goodfellow
5 years
Interpretation of a machine learning model by a human involves both the model and the human. Human misconceptions can cause as much trouble as any property of the model.
@pmddomingos
Pedro Domingos
5 years
Remarkable finding: people don't trust transparent models any more than opaque ones, and have more difficulty detecting large errors in transparent ones:
17
229
559
6
132
473
@goodfellow_ian
Ian Goodfellow
4 years
My team is hiring ML/RL infra engineers:
12
87
457
@goodfellow_ian
Ian Goodfellow
6 years
Interested in jump-starting your career in machine learning research? Consider the Google AI Residency Program! Applications are now open until January 28th, 2019! Check out for more information.
Tweet media one
5
172
462
@goodfellow_ian
Ian Goodfellow
7 years
The Chinese translation is now ready to go!
Tweet media one
10
74
460
@goodfellow_ian
Ian Goodfellow
5 years
It’s interesting to see the pendulum swing back to representation learning. During my PhD, most of my collaborators and I were primarily interested in representation learning as a byproduct of sample generation, not sample generation itself.
@poolio
Ben Poole
5 years
BigBiGAN shows that "progress in image generation quality translates to substantially improved representation learning performance." Competitive w/self-supervised approaches on ImageNet. The cycle from generative models to other methods and back again continues.
3
66
298
5
63
446
@goodfellow_ian
Ian Goodfellow
6 years
Check out @fermatslibrary 's Librarian, a Chrome extension that automatically shows comments for ArXiv papers: I've asked for this feature for a long time!
5
162
442
@goodfellow_ian
Ian Goodfellow
4 years
In our previous collaborations, I've benefited a lot from using David's machine learning frameworks. If you're a researcher or student looking for both simplicity and customizability, definitely check out Objax.
@D_Berthelot_ML
David Berthelot
4 years
I’d like to share my new project: Objax, a new high-level JAX API with a PyTorch-like interface! Objax pursues the quest for the simplest design and code that’s as easy as possible to extend without sacrificing performance. 1/7
Tweet media one
10
135
555
3
71
435
@goodfellow_ian
Ian Goodfellow
6 years
The discriminator often knows something about the data distribution that the generator didn't manage to capture. By using rejection sampling, it's possible to knock out a lot of bad samples.
@gstsdn
augustus odena
6 years
New preprint by @smnh_azadi , @catherineols , Trevor Darrell, @goodfellow_ian , and me: . We perform rejection sampling on a trained GAN generator using a GAN discriminator. This helps quite a lot for not-much effort.
Tweet media one
Tweet media two
1
65
231
2
104
438
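The idea in the tweet above, in its simplest form: treat D/(1-D) as an estimate of the density ratio p_data/p_model and keep samples with probability proportional to it. A NumPy sketch (the published method uses a more careful, calibrated acceptance rule; `d_probs` are assumed to be the discriminator's probability outputs on the generated samples):

```python
import numpy as np

def discriminator_rejection_sampling(samples, d_probs, seed=0):
    """Filter generator samples, keeping each with probability proportional
    to the discriminator's estimated density ratio D/(1-D)."""
    rng = np.random.default_rng(seed)
    ratio = d_probs / (1.0 - d_probs + 1e-8)
    accept_prob = ratio / ratio.max()                 # crude normalization
    keep = rng.random(len(samples)) < accept_prob
    return samples[keep]
```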