Future Tense

Do You Always Blink When a Picture Is Taken? Facebook Wants to Fix Those Photos.

Photo illustration by Slate. Photos by Thinkstock.

It’s a common curse: The flash goes off, and you blink, resulting in an image that makes you look ridiculous. But Facebook thinks it doesn’t have to be that way. Researchers there have created an artificial intelligence system that can retouch images of people blinking to replace closed eyes with convincing computer-generated open eyes.

The tool uses a generative adversarial network, or GAN, a machine learning system built from two networks that compete against each other. One network, the classifier, is trained to judge whether images of open eyes are real; the other, the generator, is trained to produce convincing images of open eyes. The generator tries to create ever more convincing images to fool the classifier, and the classifier tries to get better at spotting these sophisticated fakes. Over time, each half of the GAN refines its skills, so the system gets better both at determining what realistic open eyes look like and at generating them.
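To make the back-and-forth concrete, here is a deliberately tiny sketch of that adversarial loop, not Facebook's model: the "data" are just numbers drawn from a distribution, the generator is a single learnable shift, and the classifier is logistic regression. All names and numbers are illustrative.

```python
# Toy GAN sketch (illustrative only, not Facebook's system): the generator
# tries to make fake samples the discriminator scores as real, while the
# discriminator tries to tell real samples from fakes.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    # "Real" data: draws from a normal distribution centered at 4.
    return rng.normal(4.0, 0.5, size=n)

def generate(n, theta):
    # Generator: a single learnable shift applied to noise.
    return theta + rng.normal(0.0, 0.5, size=n)

def discriminate(x, w, b):
    # Discriminator: probability that a sample is real.
    return sigmoid(w * x + b)

theta = 0.0       # generator parameter: where it places its fakes
w, b = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    real = real_samples(64)
    fake = generate(64, theta)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss).
    d_real = discriminate(real, w, b)
    d_fake = discriminate(fake, w, b)
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: minimize -log D(fake), i.e. move fakes toward
    # where the discriminator currently thinks "real" lives.
    fake = generate(64, theta)
    d_fake = discriminate(fake, w, b)
    grad_theta = np.mean((d_fake - 1) * w)  # chain rule through sigmoid
    theta -= lr * grad_theta

# After training, the generator's samples should sit near the real mean (4).
print(f"generator mean after training: {theta:.2f}")
```

The same tug-of-war, scaled up from one number to a deep network producing eye pixels, is what lets the real tool learn what plausible open eyes look like.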

This technique is similar to the popular Adobe Photoshop “content-aware fill” tool, which allows users to remove undesirable features—like distracting background objects—in their photos by having the program fill in the space with a decent guess of what it believes should go there. The technique is known as in-painting.
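The basic "fill the hole from its surroundings" idea behind in-painting can be shown in a few lines. This is a crude diffusion-style fill, not Adobe's actual algorithm (which uses patch matching): each missing pixel is repeatedly replaced with the average of its neighbors until the hole blends in.

```python
# Minimal in-painting sketch: smooth missing pixels in from their context.
import numpy as np

def inpaint(image, mask, iterations=200):
    """image: 2D float array; mask: True where pixels are missing."""
    img = image.copy()
    img[mask] = img[~mask].mean()  # crude initial guess for the hole
    for _ in range(iterations):
        # Average of the four neighbors of every pixel.
        avg = (np.roll(img, -1, axis=0) + np.roll(img, 1, axis=0) +
               np.roll(img, -1, axis=1) + np.roll(img, 1, axis=1)) / 4.0
        img[mask] = avg[mask]  # only the missing pixels get updated
    return img

# A smooth left-to-right gradient with a bright "distracting object"
# in the middle that we want to remove.
image = np.tile(np.linspace(0.0, 1.0, 9), (9, 1))
image[3:6, 3:6] = 1.0                     # the unwanted object
mask = np.zeros_like(image, dtype=bool)
mask[3:6, 3:6] = True                     # mark its pixels as missing

filled = inpaint(image, mask)
print(round(filled[4, 4], 2))  # center matches the gradient again: 0.5
```

The filled region ends up continuing the surrounding gradient, which is the essence of content-aware fill; the GAN approach replaces this simple averaging with a learned model of what "should" be there.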

Current AI systems can do a pretty decent job of in-painting facial features based on general images of human faces, but so far they have struggled to do in-painting that closely resembles a specific person. The research paper Facebook published about its new tool explains that "[g]iven a training set of sufficient size, the network will learn what a human face 'should' look like, and will in-paint accordingly." But most in-painting techniques have been unable to preserve a specific subject's identity, which can lead to "undesirable and biased" (by which they presumably mean creepy, unrealistic, or inaccurate) results.

Facebook’s new tool addresses this issue by including exemplar data—that is, it provides the system with example photos of the target person with their eyes open. This allows the GAN to learn what that specific person’s open eyes look like and generate images that accurately reflect the person’s skin color, eye color, eye shape, and positioning. The system can thus generate personalized images of a person’s eyes, rather than retouching a blinking photo with a generic set of eyes.
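Structurally, exemplar conditioning just means the generator receives a second input alongside its random noise: a code summarizing reference photos of the same person. The sketch below is hypothetical (the names, shapes, and the averaging "encoder" are stand-ins, not details from Facebook's paper), but it shows how that extra input is wired in.

```python
# Hypothetical sketch of exemplar conditioning: the generator is fed an
# identity code derived from reference photos, not just random noise.
import numpy as np

rng = np.random.default_rng(0)

def encode_exemplar(reference_photos):
    """Reduce reference photos to one fixed-size identity code.
    Here: a per-feature average, standing in for a learned encoder."""
    return reference_photos.mean(axis=0)

def generate_eyes(noise, identity_code, weights):
    """Generator step: mix the noise with the identity code.
    A real model would be a deep network; this is one linear layer."""
    x = np.concatenate([noise, identity_code])
    return np.tanh(weights @ x)  # fake "eye patch" values in (-1, 1)

noise_dim, code_dim, patch_dim = 8, 16, 32
weights = rng.normal(0, 0.1, size=(patch_dim, noise_dim + code_dim))

# Three reference photos of the target person, as feature vectors.
references = rng.normal(size=(3, code_dim))
code = encode_exemplar(references)

patch = generate_eyes(rng.normal(size=noise_dim), code, weights)
print(patch.shape)  # (32,)
```

Because the identity code changes with the reference photos, the same trained generator produces different, person-specific output for different people—which is exactly the property the generic in-painting approaches lacked.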

Facebook’s research paper included many examples of “before and after” retouched images with computer-generated eyes, including photos of Naomi Watts, Stranger Things actress Natalia Dyer, and, in a somewhat perplexing choice, a black-and-white portrait of Gandhi. Interestingly, a lot of the example photos feature Pakistani celebrities, such as Mawra Hocane, Muneeb Butt, Mehreen Syed, Anoushey Ashraf, and Ayesha Omar. These celebrities’ “before” photos all feature pictures of them with their eyes closed as part of Pepsi’s Liter of Light campaign, which helps provide accessible and eco-friendly bottle lights to people in need. For the campaign, celebrities across Pakistan posed with their eyes closed and captioned their photos on Instagram with the hashtag #EyesClosedforLight. (Update, June 20, 2018: According to a Facebook spokesperson, “Images were taken from a variety of public places like Wikipedia pages and Instagram campaigns/hashtags.”)

The results are strikingly realistic and convincing. In testing, respondents more often than not mistook the computer-generated open eyes for real ones or said they couldn’t be sure whether a set of eyes was real or computer-generated.

Facebook’s tool is not perfect, though. It still struggles with color matching in some photos, and it can create weird effects if the blinking eyes are partially covered by glasses or hair. However, the researchers believe that they can overcome both of these issues with more fine-tuning. Perhaps the tool’s biggest technical hurdle will be trying to fix photos when there isn’t good exemplary data to use. It remains to be seen how it will generate convincing open eyes if the system doesn’t have any good example photos of the target person.

As long as it can move past the uncanny valley, this tool has the potential to eliminate a pernicious, frustrating flaw in photos where someone blinked at exactly the wrong moment. If Facebook’s tool can create hyper-realistic open eyes, perhaps it will become as common as other photo retouching tools, like those that remove red eye or smooth over skin blemishes.

Still, the tool may face some negative gut reactions from users because of its creep factor. For many people, there’s something unsettling about Facebook learning the nuances of someone’s face so thoroughly that it can automatically open their eyes in a photo. However, it does seem to offer a quick and easy remedy to a common problem, especially for group photos. It’s the classic Facebook problem: balancing usefulness with creepiness.