FaceApp Has Users Wondering About More than Their Future Looks

[Photo: A hand holding a smartphone displaying FaceApp]
July 24, 2019

Though it was released more than two years ago, FaceApp received 973,000 downloads in just four days this month as people flocked to social media to participate in the #AgeChallenge. The app’s claim to fame is its signature “Old” filter, which uses artificial intelligence to edit a person’s selfie so that they appear 30 years older.

Its recent resurgence has sparked controversy over the security and privacy concerns associated with apps like it, leading people to wonder how such apps work and what risks they pose to users. We asked Peter Mawhorter, instructor in computer science laboratory at Wellesley, for his take on this viral sensation and his thoughts about how worried users should be about privacy issues.


Q: People might not usually think about photo editing when they think of artificial intelligence, but FaceApp says it uses AI to age people’s faces. How would you define AI in this context?

Peter Mawhorter: My favorite definition of AI is kind of an oxymoron: AI is building an artificial system (usually a computer program) that does something only a human can do. In other words, by definition it’s impossible, but what that really means is that what we consider AI today won’t be considered AI 10 years from now, and many things that we don’t consider AI today were research subjects in AI 10 or 20 years ago.

In this case, the computer is essentially doing photo editing. A good professional photo editor using Photoshop could spend some time (maybe a few hours, I’m not sure) to use their human imagination and artistic talent to produce a photo that’s their guess of what you’ll look like when you’re older. If that were their full-time job, they’d be pretty good at it. But on the face of it, the creative and artistic parts of that job are things we’ve been taught to believe only humans are good at. So we consider this to be AI because it does something we think only humans should be able to do.


Q: How does the technology behind FaceApp’s aging filter actually work?

Mawhorter: There are two parts to this technology. The first just recognizes and reproduces patterns. It looks at a few pixels and has a bank of pattern recognizers that recognize those pixels and say “That’s pattern no. 1” or “That’s pattern no. 2.” Then a higher level of structure looks at those pattern labels and aggregates them into larger patterns, and so on. The second part of the tech is a competitive process. The full name of the technique, for those who want to look up more, is a “generative adversarial network,” and the pattern recognition bit is the “network” piece.
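That layered pattern recognition is the core idea behind modern convolutional neural networks. Here is a minimal sketch of the hierarchy in Python, using PyTorch; the layer sizes, names, and image dimensions are illustrative assumptions, not FaceApp’s actual architecture.

```python
# A small pattern-recognition hierarchy of the kind described above.
# Everything here (sizes, names) is a hypothetical illustration.
import torch
import torch.nn as nn

class PatternRecognizer(nn.Module):
    def __init__(self):
        super().__init__()
        # First layer: a bank of 16 small pattern detectors, each looking at
        # a 3x3 patch of pixels ("that's pattern no. 1," "that's pattern no. 2").
        self.low_level = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Second layer: aggregates those low-level pattern labels into
        # larger patterns that span a wider region of the image.
        self.mid_level = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)  # widen each unit's field of view
        self.act = nn.ReLU()

    def forward(self, image):
        patterns = self.pool(self.act(self.low_level(image)))
        larger_patterns = self.pool(self.act(self.mid_level(patterns)))
        return larger_patterns

# Usage: feed in a batch of 64x64 RGB images, get back a grid of
# higher-level pattern activations.
recognizer = PatternRecognizer()
features = recognizer(torch.randn(1, 3, 64, 64))  # shape: (1, 32, 16, 16)
```

A real system would stack many more such layers, so that the topmost ones respond to patterns as large and abstract as “an eye” or “a wrinkled forehead.”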

Now I don’t know exactly, but I’d bet that the underlying setup of the FaceApp system is this: One network is given the selfie that you just took, along with a picture that’s supposed to be an older version, and is asked to say, true or false, whether that second picture really is the older version of you. This part will be trained with millions of real image pairs of the same people when they’re young and old, so it will “know” what a real older image should look like. The second network will be given the young image and asked to produce the old image, and it will be trained in the same way.

But here’s where the “adversarial” part comes in: When the second network produces an image, the first network is asked to judge it, and if the first network can tell that the image is bogus, the first network wins. But if the first network is fooled by the second network’s product and thinks it’s real, then the second network wins. So you make the two networks compete, and by doing that, they both get better.
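Under those assumptions, the competition can be written down as a short training loop. The sketch below, again in Python with PyTorch, uses deliberately tiny fully connected networks as stand-ins for the generator (which fabricates the older face) and the discriminator (which judges it); FaceApp has not published its model, so every name and number here is hypothetical.

```python
# A minimal generative-adversarial training round, as described above.
import torch
import torch.nn as nn

PIXELS = 64 * 64  # treat each photo as a flat grayscale vector, for simplicity

# Generator: takes a young face, produces its guess at the older face.
generator = nn.Sequential(nn.Linear(PIXELS, PIXELS), nn.Sigmoid())
# Discriminator: takes a (young, old) pair, outputs the probability it's genuine.
discriminator = nn.Sequential(nn.Linear(2 * PIXELS, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(young, real_old):
    """One round of the competition, on a batch of (young, old) photo pairs."""
    real = torch.ones(young.size(0), 1)
    fake = torch.zeros(young.size(0), 1)

    # Discriminator's turn: it "wins" by calling genuine pairs real
    # and the generator's fabrications fake.
    fake_old = generator(young).detach()  # detach: don't update the generator here
    d_loss = (loss_fn(discriminator(torch.cat([young, real_old], dim=1)), real) +
              loss_fn(discriminator(torch.cat([young, fake_old], dim=1)), fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator's turn: it "wins" if the discriminator is fooled into
    # calling its fabricated old face genuine.
    fake_old = generator(young)
    g_loss = loss_fn(discriminator(torch.cat([young, fake_old], dim=1)), real)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# One round on a batch of eight random "photos" (real training would loop
# over millions of genuine young/old pairs).
train_step(torch.rand(8, PIXELS), torch.rand(8, PIXELS))
```

Each call to train_step plays one round of the game: the discriminator improves at catching fakes, the generator improves at slipping one past it, and over millions of rounds both get better.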


Q: How accurate is the aging filter?

Mawhorter: The notion of “accuracy” is interesting in this case, because if you waited long enough you could actually check whether it got things right. Certainly with the “younger” filter you could already check how accurate it is, and since the older filter, whose predictions can’t be checked yet, seems so much more popular, I’m guessing that it’s ultimately not really that accurate on a personal level. In the end, it’s a complete fabrication. The networks have observed millions (or perhaps billions) of examples, so they do have a visual model of aging, but just like a human artist would, they have to make things up to fit your specific face.

Of course, there’s a more sinister side to this. The same kind of technology can be used, for example, in what are called “deepfakes” to doctor photos or even video. There’s a program out there now that can take a video of someone speaking, like a politician, and you can just type in what you want them to say, and it will edit the video to match. So when we’re talking about accuracy, the real question is: Is it accurate enough to fool enough people? It doesn’t have to be perfect, and I think the answer is already yes: It’s good enough to fool enough people to do some serious damage. These are all things that a professional with enough time on their hands could do, but having it be an automatic and near-instantaneous process changes things.


Q: Should people be wary of sending their likeness away to apps like FaceApp?

Mawhorter: Yes. The slightly longer answer is: The company behind FaceApp claims that it does not currently store or monetize your selfies. Even so, for the filter to work you have to upload your photos to the cloud, and the company’s privacy policy explicitly allows it to share your personal information, including these images, with its “affiliates.” Even if we believe the company’s statement today (which isn’t an entirely unreasonable stance), things could change at any moment, and the association of your selfie with even just your email or phone number (if you are logged in to the app) is potentially lucrative.


Q: Why are these kinds of apps so appealing, and does the tech industry actually understand that?

Mawhorter: It’s appealing because it’s “magic.” It seems to alter the rules of reality to give us a glimpse into the future. It’s fun to think about what we might look like decades from now, and even more fun to do that in a social context. Does FaceApp understand that? I’m betting it didn’t before it launched the app, but it sort of does now. And the tech industry more broadly? No, not really.

The tech startup industry works very much on a gambling model. Investors give huge amounts of money to thousands of startups that each promise their own market revolutions; 99 percent of them fail, but the remaining 1 percent return the investment more than a hundred times over, so the investors still come out ahead. The collateral damage of all this economic gambling, and the possibility that the bubble might at some point pop, aren’t usually considered.