Biophysicist Dr. Mike Tyka recently revealed on his blog that he's been experimenting with generative adversarial networks (GANs) to generate portraits of human faces. Unlike his earlier DeepDream-style experiments, which worked by backpropagating into an existing image through a pretrained convolutional network, the GAN approach is a form of unsupervised machine learning: a generator network is pitted against a discriminator, gradually learning what makes up a face and where each feature sits, until it can produce images that look not too dissimilar from an actual human face.
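To make that adversarial setup concrete, here's a minimal, hypothetical sketch in PyTorch (not Tyka's actual code): a tiny generator learns to mimic a simple 1-D Gaussian distribution by competing with a discriminator. Scaled up to convolutional networks and trained on photographs of faces, the same two-player game is what yields portrait-like images.

```python
# Toy GAN: the generator learns to imitate a 1-D Gaussian (mean 4, std 1).
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator

generator = nn.Sequential(          # noise -> fake sample
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
discriminator = nn.Sequential(      # sample -> probability it is "real"
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # "real" data: Gaussian around 4
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The mean of generated samples should drift toward ~4 as training progresses.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```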
Below are several images which Tyka was able to generate using GANs (if you’d like to see more, click here):
Photo credit: Mike Tyka
“For a while now I’ve been experimenting with ways to use generative neural nets to make portraits. Early experiments were based on deepdream-like approaches using backprop to the image but lately I’ve focused on GANs.”
– Mike Tyka
And this won't be the last we see of GANs being used to generate ever more complex images from real-world data. As research continues, AI's ability to conceptualize our world and render it realistically will make the images above look like chicken scratch. What a time to be alive!