@osazuwa @BarackObama @AOC
What is surprising is that StyleGAN is trained on FFHQ, which is supposed to be a much more diverse face dataset than CelebA (which is extremely biased toward good-looking white people). So either FFHQ is not diverse enough, or StyleGAN/PULSE mode-collapses hard.
An image of @BarackObama getting upsampled into a white guy is floating around because it illustrates racial bias in #MachineLearning. Just in case you think it isn't real, it is: I got the code working locally. Here is me, and here is @AOC.
@jm_alexia @osazuwa @BarackObama @AOC
I would guess that part of the problem is starting optimization from a poor initialization, like the center of the distribution, but I haven't tried running it yet. :)
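A toy sketch of that initialization effect. Everything here is illustrative, not PULSE's actual code: PULSE optimizes a StyleGAN latent so that the downscaled output matches the low-res input, and because downscaling is many-to-one, the latent you end up with depends on where gradient descent starts. A linear stand-in generator is enough to show it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a generator: a fixed linear map from a 4-D
# latent to an 8-pixel "image". Real PULSE optimizes StyleGAN's latent.
W = rng.normal(size=(8, 4))

def downscale(x):
    """Average pooling: 8 pixels -> 2 pixels (many-to-one, like LR imaging)."""
    return x.reshape(2, 4).mean(axis=1)

# downscale(W @ z) == M @ z, since both maps are linear
M = np.array([downscale(W[:, j]) for j in range(4)]).T

z_true = rng.normal(size=4)
y = M @ z_true  # the observed low-res image

def loss(z):
    return float(np.sum((M @ z - y) ** 2))

def optimize(z0, lr=0.05, steps=10000):
    z = z0.astype(float).copy()
    for _ in range(steps):
        z -= lr * 2 * M.T @ (M @ z - y)  # gradient of the squared error
    return z

z_center = optimize(np.zeros(4))         # start at the latent "mean"
z_random = optimize(rng.normal(size=4))  # start somewhere else

# Both runs match the low-res target almost exactly, yet they recover
# *different* latents: the problem is underdetermined, so the face you
# get depends on the starting point.
print(loss(z_center), loss(z_random), np.linalg.norm(z_center - z_random))
```

If initialization picks the answer among many valid ones, and the center of the latent distribution sits near the dataset's dominant mode, that alone could skew which face comes out.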
@jm_alexia @osazuwa @BarackObama @AOC
StyleGAN may be using FFHQ to generate the faces, but PULSE was apparently trained on CelebA-HQ to match the downsampled images.
This still seems to be a training-data issue.
@jm_alexia @osazuwa @BarackObama @AOC
FFHQ is not very diverse. It's way better than CelebA, but ... well, here's a screengrab of a few dozen images generated from the FFHQ model.
@jm_alexia @osazuwa @BarackObama @AOC
I'm not sure I follow; doesn't the paper say:
"We tried comparing with supervised methods trained on FFHQ, but they failed to generalize and yielded very blurry and distorted results when evaluated on CelebA HQ" 1/2
@jm_alexia @osazuwa @BarackObama @AOC
Why not use a training dataset consisting exclusively of PoC and Black people and then check the outcome? That way you can rule out any bias in the algorithm, or confirm it. You could even plot skin tone in the output against its percentage in the input data.
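A minimal harness for the experiment described above. All names here are hypothetical (no such tooling ships with PULSE): the idea is to re-run the pipeline once per training mix, collect the outputs, and record how an output statistic tracks the input percentage.

```python
import numpy as np

def mean_tone(images):
    # Crude proxy: mean pixel intensity over each output face crop.
    # A real study should use a proper skin-tone measure (e.g. the
    # individual typology angle) on detected skin regions, not raw
    # brightness.
    return float(np.mean([np.mean(img) for img in images]))

def sweep(outputs_by_mix):
    # outputs_by_mix: {fraction of darker-skinned faces in the training
    #                  data -> list of output images from that run}
    # Returns (input fraction, output statistic) pairs, ready to plot.
    return [(mix, mean_tone(imgs))
            for mix, imgs in sorted(outputs_by_mix.items())]
```

With matplotlib, `plt.plot(*zip(*sweep(results)))` would give the curve the tweet describes: a flat line suggests the algorithm ignores the mix; a strong slope points at the data.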
@jm_alexia @osazuwa @BarackObama @AOC
That's interesting! I immediately assumed it was a data-bias problem. But maybe it's still not diverse enough?
Either way, I don't like the purpose of the model. Removing anonymity seems like a really bad outcome of this, even if it worked well.