@jm_alexia
Alexia Jolicoeur-Martineau
4 years
@osazuwa @BarackObama @AOC What is surprising is that StyleGAN is trained on FFHQ, which is supposed to be a much more diverse alternative to CelebA as a face dataset (CelebA is extremely biased toward good-looking white people). So either FFHQ is not diverse enough, or StyleGAN/PULSE mode-collapses hard.

Replies

@osazuwa
🔥囧Robert Osazuwa Ness囧🔥
4 years
An image of @BarackObama getting upsampled into a white guy is floating around because it illustrates racial bias in #MachineLearning. Just in case you think it isn't real: it is; I got the code working locally. Here is me, and here is @AOC.
[Image: upsampled results for @osazuwa and @AOC]
@osazuwa
🔥囧Robert Osazuwa Ness囧🔥
4 years
@jm_alexia @BarackObama @AOC This is a good point. It is not clear whether simply having more diversity in the data set would solve the issue.
@pbaylies
Peter Baylies
4 years
@jm_alexia @osazuwa @BarackObama @AOC I would guess that part of the problem is starting the optimization from a poor initialization, like the center of the distribution; I haven't tried running it yet, though. :)
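For reference, here is a minimal sketch of the two starting points being contrasted, assuming a hypothetical PyTorch handle `G` to a pretrained StyleGAN generator with a `mapping` network (the wrapper name, method, and latent size are placeholders, not PULSE's actual code): the "center of the distribution" is the mean of mapped latents, versus a random draw from the prior.

```python
import torch

# Assumed interface, not PULSE's real code: G.mapping maps a z vector drawn
# from N(0, I) to an intermediate latent w; 512 is a placeholder latent size.
def mean_latent(G, n_samples=10_000, dim=512):
    """Estimate the 'center of the distribution' by averaging mapped latents."""
    with torch.no_grad():
        z = torch.randn(n_samples, dim)
        w = G.mapping(z)                      # [n_samples, dim]
        return w.mean(dim=0, keepdim=True)    # [1, dim]

# Option 1: start the latent search from the center of the distribution.
# w_init = mean_latent(G)
# Option 2: start from a random draw instead, the alternative suggested above.
# w_init = G.mapping(torch.randn(1, 512))
```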
@TrainOfError
TrainOfError
4 years
@jm_alexia @osazuwa @BarackObama @AOC StyleGAN may be using FFHQ to generate the faces, but PULSE was apparently trained on CelebA-HQ for matching against the downsampled images. This still seems to be an issue with training data.
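For context, a hedged sketch of the latent-space search that PULSE-style upsampling performs against the downsampled image, again assuming a hypothetical generator handle `G` (the real method also adds terms that keep the latent near the generator's manifold, omitted here). In this simplified view, a face dataset enters the loop only through `G` and the low-resolution target.

```python
import torch
import torch.nn.functional as F

def pulse_style_upsample(G, lr_target, w_init, steps=500, step_size=0.1):
    """Search G's latent space for a face whose downscaled version matches lr_target.

    G         : assumed handle to a pretrained StyleGAN generator, w -> RGB image
    lr_target : low-resolution input, shape [1, 3, h, w]
    w_init    : starting latent, e.g. the mean latent or a random draw
    """
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=step_size)
    for _ in range(steps):
        hr = G(w)                                     # candidate high-res face
        down = F.interpolate(hr, size=lr_target.shape[-2:],
                             mode="bicubic", align_corners=False)
        loss = F.mse_loss(down, lr_target)            # match the low-res input only
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return G(w)                                   # the face implied by G's prior
```

In this sketch, the initialization question from the previous reply is just the choice of `w_init`, and the dataset question is about what `G` was trained on.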
@jurph
Last One Out, Please Get the Lights
4 years
@jm_alexia @osazuwa @BarackObama @AOC FFHQ is not very diverse. It's way better than CelebA, but ... well, here's a screengrab of a few dozen images generated from the FFHQ model.
[Image: grid of sample faces generated from the FFHQ model]
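For anyone wanting to reproduce that kind of grid, a rough sketch assuming a hypothetical handle `G` to a StyleGAN generator pretrained on FFHQ that maps latents to RGB images in [-1, 1] (the official releases ship their own generation scripts; this is not that code):

```python
import torch
import torchvision.utils as vutils

# Assumption: `G` is a hypothetical handle to a StyleGAN generator pretrained
# on FFHQ, mapping z of shape [n, 512] to RGB images in [-1, 1].
def sample_grid(G, n=36, dim=512, seed=0, out_path="ffhq_samples.png"):
    torch.manual_seed(seed)
    with torch.no_grad():
        imgs = G(torch.randn(n, dim))                 # [n, 3, H, W]
    grid = vutils.make_grid(imgs, nrow=6, normalize=True, value_range=(-1, 1))
    vutils.save_image(grid, out_path)

# sample_grid(G)  # eyeballing a few dozen samples is a quick diversity check
```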
@vatai
Emil Vatai @[email protected]
4 years
@jm_alexia @osazuwa @BarackObama @AOC I'm not sure I follow; doesn't the paper say: "We tried comparing with supervised methods trained on FFHQ, but they failed to generalize and yielded very blurry and distorted results when evaluated on CelebA HQ"? 1/2
@MarkusWerle
Enola Mastodon 💧 (Clearly Specified Parody) 㡹
4 years
@jm_alexia @osazuwa @BarackObama @AOC Why not use a training dataset consisting exclusively of PoC and Black people and then check the outcome? That way you could exclude any bias in the algorithm, or confirm it. You could even plot the color grade of the outputs against its percentage in the input data.
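A very rough sketch of the measurement half of that experiment, assuming you already have output images from models trained on differently mixed datasets (the folder names are hypothetical, and mean luminance is a crude stand-in for "color grade"; building the datasets and models is the large part this sketch leaves out):

```python
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
from PIL import Image

def mean_luminance(path):
    """Crude stand-in for 'color grade': average luminance in [0, 1]."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return float(img.mean() / 255.0)

# Hypothetical folders: outputs of models trained with 0%, 25%, ..., 100%
# darker-skinned faces in the training data.
mixes = {0: "outputs_mix_000", 25: "outputs_mix_025", 50: "outputs_mix_050",
         75: "outputs_mix_075", 100: "outputs_mix_100"}

pcts, tones = [], []
for pct, folder in sorted(mixes.items()):
    vals = [mean_luminance(p) for p in Path(folder).glob("*.png")]
    if vals:
        pcts.append(pct)
        tones.append(float(np.mean(vals)))

plt.plot(pcts, tones, marker="o")
plt.xlabel("% darker-skinned faces in the training data")
plt.ylabel("mean output luminance (crude proxy)")
plt.savefig("tone_vs_mix.png")
```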
@etiene_d
Etiene Dalcol
4 years
@jm_alexia @osazuwa @BarackObama @AOC That's interesting! I immediately assumed it was a data-bias problem, but maybe the data is still not diverse enough? Either way, I don't like the purpose of the model: removing anonymity seems like a really bad outcome of this, even if it worked well.
@spillteori
Henrik Skaug Sætra
4 years
@jm_alexia @osazuwa @BarackObama @AOC What happens when one tries a dark image of a white person?
@233_k6
K6 233
4 years
@jm_alexia @doctorow @osazuwa @BarackObama @AOC Could the model simply be latching onto contrast differences in the pics?
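One crude way to probe that, as a sketch with hypothetical file names: compare the RMS contrast of the downsampled inputs across the groups of photos being tested and see whether it differs systematically.

```python
import numpy as np
from PIL import Image

def rms_contrast(path, size=(32, 32)):
    """RMS contrast of an image after downsampling it like a PULSE-style input."""
    img = Image.open(path).convert("L").resize(size, Image.BICUBIC)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return float(arr.std())

# Hypothetical file names; in practice, use the actual groups of test photos.
group_a = ["obama_lr.png", "aoc_lr.png"]
group_b = ["sample_1.png", "sample_2.png"]

for name, files in [("group A", group_a), ("group B", group_b)]:
    vals = [rms_contrast(f) for f in files]
    print(f"{name}: mean RMS contrast = {np.mean(vals):.3f}")
```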