A. Dehghan, E.G. Ortiz, R. Villegas, and M. Shah. "Who Do I Look Like? Determining Parent-Offspring Resemblance via Gated Autoencoders". IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [pdf]
Who's Your Daddy?
In this project, our goal is to bridge computer vision research with findings in anthropological studies to answer several key questions:
- Do offspring resemble their parents?
- Do offspring resemble one parent more than the other?
- What parts of the face are more genetic?
- Do anthropological studies help us learn better features?
To answer these questions and address the problem of parent-offspring resemblance, we propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural network layer that learns the optimal, or what we call genetic, features for the task.
At the core of our framework lies a gated autoencoder, which describes the relationship between a pair of images. The inputs to the gated autoencoder are patches extracted from a pair of aligned parent-offspring images, as shown in the figure. The transformation between patches is encoded in a latent variable z, and the corresponding filters are learned by minimizing the following loss function.
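As a rough illustration of this idea, the sketch below computes the reconstruction loss of a factored gated autoencoder on a single patch pair. The factored form (filter banks `U`, `V` and mapping weights `W`), the dimensions, and the symmetric reconstruction loss follow the standard gated-autoencoder formulation; the paper's exact parameterization and loss may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: patch size D, number of factors F, mapping units K.
D, F, K = 64, 32, 16

# Factored parameters: one filter bank per image in the pair, plus mapping weights.
U = rng.standard_normal((F, D)) * 0.01   # filters applied to the parent patch x
V = rng.standard_normal((F, D)) * 0.01   # filters applied to the offspring patch y
W = rng.standard_normal((K, F)) * 0.01   # pools factor products into mapping units z

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_autoencoder_loss(x, y):
    """Reconstruction loss of a factored gated autoencoder on one patch pair."""
    fx, fy = U @ x, V @ y                 # factor responses for each patch
    z = sigmoid(W @ (fx * fy))            # mapping units encoding the x -> y relation
    y_hat = V.T @ (fx * (W.T @ z))        # reconstruct y given x and the mapping z
    x_hat = U.T @ (fy * (W.T @ z))        # reconstruct x given y and the mapping z
    return np.sum((y - y_hat) ** 2) + np.sum((x - x_hat) ** 2)

x = rng.standard_normal(D)               # parent patch (stand-in for real pixels)
y = rng.standard_normal(D)               # offspring patch
loss = gated_autoencoder_loss(x, y)
```

Training would minimize this loss over all patch pairs (e.g. by gradient descent), so that z comes to summarize how a parent's patch transforms into the offspring's.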
We further add a discriminative layer on top of the gated autoencoder to distinguish between related and non-related pairs of images. The loss function for the discriminative layer is shown below; it minimizes the error between the predicted labels and the ground-truth labels.
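One simple way to realize such a layer is logistic regression on the mapping units, trained with a cross-entropy loss; the sketch below assumes this form, with hypothetical weights `w` and bias `b` (the paper's exact discriminative loss may differ).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def discriminative_loss(z, label, w, b):
    """Cross-entropy between predicted relatedness and the ground-truth label.

    z     : mapping-unit vector from the gated autoencoder, shape (K,)
    label : 1 if the pair is related, 0 otherwise
    w, b  : weights and bias of the discriminative layer (hypothetical)
    """
    p = sigmoid(w @ z + b)                 # predicted probability of "related"
    eps = 1e-12                            # guard against log(0)
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

rng = np.random.default_rng(1)
K = 16
z = rng.random(K)                          # mapping units for one image pair
w = rng.standard_normal(K) * 0.1
loss_pos = discriminative_loss(z, 1, w, 0.0)   # loss if the pair is related
loss_neg = discriminative_loss(z, 0, w, 0.0)   # loss if it is not
```

Backpropagating this loss through the mapping units is what lets the network discover which features are the most "genetic" for verification.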
In our experiments, we compare against metric learning methods, which learn only a metric, whereas our approach learns both an optimal feature mapping and a metric for parent-offspring resemblance. We evaluate three setups: 1) Generative, which uses the gated autoencoder alone without a metric layer; 2) Anthropological, which weights features according to the most genetically meaningful facial regions identified in anthropological studies; and 3) Discriminative, which learns both the best feature mapping and metric automatically. As shown in the table, our discriminative method outperforms the anthropological weights by ~2% and the best-performing metric learner by ~3% on the popular kinship verification dataset KinFaceW-I.
Our method, however, is ideally suited for the task of face identification, where the goal is not simply to tell whether two images come from the same family, but to find an offspring's match in a large pool of parents. For this task, we use the Family 101 dataset, shown below. In our experiments, we found that our method performs better than chance for most parent-offspring relationships at low rank, although the father-son relationship does slightly worse as the rank increases.
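Rank-based identification results like these are typically summarized with a Cumulative Match Characteristic (CMC) curve: for each rank k, the fraction of offspring whose true parent appears among the top-k scored parents. The sketch below computes such a curve from a similarity matrix; the data here is random and purely illustrative.

```python
import numpy as np

def cmc(scores, true_parent):
    """Cumulative Match Characteristic curve.

    scores      : (n_offspring, n_parents) similarity matrix, higher = more similar
    true_parent : index of the correct parent for each offspring
    Returns the fraction of offspring matched within the top-k, for k = 1..n_parents.
    """
    n_off, n_par = scores.shape
    # Parents sorted by descending similarity for each offspring.
    order = np.argsort(-scores, axis=1)
    # Position of the true parent in each ranking (0 = best match).
    ranks = np.array([np.where(order[i] == true_parent[i])[0][0]
                      for i in range(n_off)])
    return np.array([(ranks < k).mean() for k in range(1, n_par + 1)])

rng = np.random.default_rng(2)
scores = rng.random((5, 10))               # 5 offspring probes, 10 parent gallery
truth = rng.integers(0, 10, size=5)        # index of each probe's true parent
curve = cmc(scores, truth)
```

By construction the curve is non-decreasing in k and reaches 1.0 at the final rank, which is why comparing methods at low rank (small k) is the most informative.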