Deep-learning machines can make faces look older but often lose their identity in the process. Now computer scientists have solved this problem.
The way we age is deeply fascinating. Indeed, knowing how our faces will look in 20, 30, or 40 years' time is a prospect many would find captivating.
A number of techniques exist that can do this. But they are time-consuming and hence expensive. So a cheap and quick way to age faces in photographs would be a handy trick.
Enter Grigory Antipov from Orange Labs in France and a couple of pals who have developed a deep-learning machine that can do the job with ease. Not only can their system make young faces look older, it can make older faces look younger.
A couple of recent developments have made their task easier. In recent years, computer scientists have built deep-learning machines that can modify faces in a variety of realistic ways. This approach can create realistic synthetic faces that look older.
However, there is a problem. In making faces look older, these deep-learning machines often lose the person’s identity in the process. So the individual looks older but can no longer be identified.
Antipov and co have come up with a way to solve that problem. Their approach involves two deep-learning machines that work together—a face generator and a face discriminator. Both machines learn what faces look like as they age by analyzing photographs of people in the age groups 0-18, 19-29, 30-39, 40-49, 50-59, and 60+ years old.
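The age-group conditioning described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the helper names and the one-hot encoding scheme are assumptions, but the six buckets match those listed in the article.

```python
# Hypothetical helper: map an age label to one of the six training buckets
# described above (0-18, 19-29, 30-39, 40-49, 50-59, 60+). The bucket index
# could then be one-hot encoded and fed to the generator as its age condition.
AGE_BOUNDS = [18, 29, 39, 49, 59]  # upper bound of each bucket except 60+

def age_bucket(age: int) -> int:
    """Return the index (0-5) of the age group containing `age`."""
    for i, bound in enumerate(AGE_BOUNDS):
        if age <= bound:
            return i
    return len(AGE_BOUNDS)  # the 60+ bucket

def one_hot(index: int, size: int = 6) -> list:
    """One-hot condition vector for a bucket index."""
    return [1.0 if i == index else 0.0 for i in range(size)]
```

In a conditional GAN of this kind, a vector like `one_hot(age_bucket(age))` would be concatenated with the generator's latent input so the network learns a separate "signature" per age group.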
In total, the machines were trained on 5,000 faces in each group taken from the Internet Movie Database and from Wikipedia and then labeled with the person’s age. In this way, the machine learns the characteristic signature of faces in each age group. It is this abstract signature that the face generator can then apply to other faces to make them look the same age.
However, applying this signature can sometimes cause a person’s identity to be lost. So the second deep-learning machine—the face discriminator—looks at the synthetically aged face to see whether the original identity can still be picked out. If it can’t, the image is rejected.
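The accept/reject step above can be sketched as an embedding comparison. This is a hedged illustration, not the paper's implementation: the idea of comparing face embeddings with cosine similarity and a fixed threshold is a common identity-verification pattern, and the threshold value here is purely illustrative.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def keep_synthetic_face(orig_emb, aged_emb, threshold=0.5):
    """Accept the synthetically aged face only if its embedding is still
    close enough to the original face's embedding (threshold is illustrative)."""
    return cosine_similarity(orig_emb, aged_emb) >= threshold
```

Here `orig_emb` and `aged_emb` stand in for feature vectors produced by a face-recognition network; an image whose aged embedding drifts too far from the original would be rejected.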
Antipov and co call their process Age Conditional Generative Adversarial Network—adversarial because the deep-learning machines work in opposition.
The results make for impressive reading. The team applied the technique to 10,000 faces from the IMDB-Wikipedia database that they hadn't used for training. They then tested the before and after images using software called OpenFace, which can tell whether two images show the same person. OpenFace recognized the same face more than 80 percent of the time, compared with about 50 percent for other face-aging techniques.
And, of course, the technique not only ages young faces but creates younger versions of older faces, too.
There is an obvious test the team has not done. Presumably, it’s possible to compare faces that have been made younger synthetically with pictures of the same face taken when the individual was actually younger. That would be a good test of how accurate the technique is and perhaps a task for the future.
Antipov and co say their technique could be used in applications such as helping identify people who have been missing for many years. It might also be a lot of fun to play with, should they choose to make their algorithm public.
Ref: arxiv.org/abs/1702.01983: Face Aging with Conditional Generative Adversarial Networks