Every Time You Refresh This Website, It Shows AI-Generated Faces Of People That Do Not Exist
ThisPersonDoesNotExist.com is the simplest of websites. It's a single page, completely devoid of text or artwork, save a single closeup of a person's face dead centre.
Each time you refresh, the mug changes. But there's something slightly wrong with their faces...
These people are all fake
If you look closely, some of the photos have a patch of swirls, a misshapen feature, or some other oddity. That's because none of the people featured on this site are real. They're all faces of imaginary people, generated by an artificial intelligence.
The person who built the site popped up on a hacker forum to explain what's going on. These photos are the output of a Generative Adversarial Network (GAN), an AI setup in which two neural networks compete to produce a realistic imitation of something. One network generates images while the other tries to find flaws in them, and that competition speeds up how quickly the system as a whole learns.
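To make that idea concrete, here is a minimal toy sketch of a GAN training loop, assuming PyTorch and tiny fully-connected networks. It is purely illustrative and is not the StyleGAN code behind this website; all names and sizes here are made up for the example.

```python
# Toy GAN sketch (hypothetical example, not the actual StyleGAN implementation).
# A generator invents images; a discriminator tries to tell them from real ones.
import torch
import torch.nn as nn

IMG_SIZE = 28 * 28   # assume small flattened grayscale images for this toy
NOISE_DIM = 64       # size of the random input the generator starts from

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),          # the network that "creates"
)
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                            # the network that "finds flaws"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_images = real_images.view(batch, -1)

    # 1) Discriminator step: label real images 1 and generated images 0.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator call its fakes "real".
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage: random tensors stand in for a real face dataset.
print(train_step(torch.rand(16, 1, 28, 28)))
```

Repeated over millions of real photos, this back-and-forth is what eventually lets the generator produce faces convincing enough to fool people, not just the discriminator.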
In this case the GAN at play was built and released by researchers at NVIDIA, after which the creator of this website trained it on photos of people's faces. It hasn't stopped there though, with others also trying it out on images of anime characters, classical paintings, and more.
Which of these Obama images is real?
Sure, the results aren't perfect; many of them have strange flaws, or even stranger colours and streaks across them. But more than a few are uncannily human, the kind where you'd have to know the image was fake and take the time to look before spotting a flaw.
That's part of why AI researchers are wary of their work being abused by malicious actors. After all, think of how many fake headlines you could prop up with an AI capable of whipping up photos of people who don't exist to go along with them.