{"id":226,"date":"2021-02-17T02:54:31","date_gmt":"2021-02-17T02:54:31","guid":{"rendered":"http:\/\/enriquegortiz.com\/wordpress\/enriquegortiz-staging\/?post_type=fw-portfolio&#038;p=226"},"modified":"2022-12-18T19:06:22","modified_gmt":"2022-12-18T19:06:22","slug":"parent-offspring-resemblance","status":"publish","type":"fw-portfolio","link":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/project\/parent-offspring-resemblance\/","title":{"rendered":"Parent-Offspring Resemblance"},"content":{"rendered":"<section class=\"fw-main-row \"  >\n\t<div class=\"fw-container\">\n\t\t<div class=\"row\">\n\t\r\n\r\n<div class=\" col-xs-12 col-sm-12 \">\r\n    <div id=\"col_inner_id-69ea5f8b1af0f\" class=\"fw-col-inner\" data-paddings=\"0px 0px 0px 0px\">\r\n\t\t<p><a href=\"http:\/\/crcv.ucf.edu\/people\/phd_students\/afshin\/\">A. Dehghan<\/a>,\u00a0<strong><a title=\"Home\" href=\"http:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/home\/\">E.<\/a><a title=\"Home\" href=\"http:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/home\/\">G. Ortiz<\/a><\/strong>,\u00a0R. Villegas, and <a href=\"http:\/\/crcv.ucf.edu\/people\/faculty\/shah.html\">M. Shah<\/a>. \u00a0<a href=\"http:\/\/crcv.ucf.edu\/people\/phd_students\/afshin\/Afshin_Dehghan_Resemblance_CVPR14.pdf\">\"Who Do I Look Like? Determining Parent-Offspring Resemblance via Gated Autoencoders\"<\/a><em>.<\/em>\u00a0IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. 
[<a href=\"http:\/\/crcv.ucf.edu\/people\/phd_students\/afshin\/Afshin_Dehghan_Resemblance_CVPR14.pdf\">pdf<\/a>]<\/p><h1>Who's Your Daddy?<\/h1><h1>Motivation<\/h1><p><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-full wp-image-234\" src=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/fam_teaser.jpg\" alt=\"\" width=\"306\" height=\"260\" srcset=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/fam_teaser.jpg 306w, https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/fam_teaser-300x255.jpg 300w\" sizes=\"auto, (max-width: 306px) 100vw, 306px\" \/><\/p><p>In this project, our goal is to bridge computer vision research with findings from anthropological studies to answer several key questions:<\/p><ol><li>Do offspring resemble their parents?<\/li><li>Do offspring resemble one parent more than the other?<\/li><li>Which parts of the face are more genetic?<\/li><li>Do anthropological studies help learn better features?<\/li><\/ol><p>To answer these questions and address the problem of parent-offspring resemblance, we propose an algorithm that fuses the features and metrics discovered via gated autoencoders with a discriminative neural network layer that learns the optimal, or what we call genetic, features for the task.<\/p><h1>Gated Autoencoder<\/h1><p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-329 aligncenter\" src=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_diagram.png\" alt=\"\" width=\"572\" height=\"551\" srcset=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_diagram.png 572w, https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_diagram-300x289.png 300w\" sizes=\"auto, (max-width: 572px) 100vw, 572px\" \/><\/p><p>At the core of our framework lies a gated autoencoder, which describes the relationship between a pair of images. 
The inputs to the gated autoencoder are patches extracted from a pair of parent-offspring images (after alignment), as shown in the figure. The transformation between patches is encoded in a new variable, <i>z<\/i>, and the corresponding filters are learned by minimizing the following loss function:<\/p><p style=\"text-align: center;\">\\(L_{gen}=\\sum^N_{n=1}\\|y^{'(n)}-y^{(n)}\\|^2 + \\sum^N_{n=1}\\|x^{'(n)}-x^{(n)}\\|^2\\)<\/p><p>We further add a discriminative layer on top of the gated autoencoder to distinguish between related and non-related pairs of images. The loss function for the discriminative layer, shown below, minimizes the error between the predicted labels and the ground-truth labels.<\/p><p style=\"text-align: center;\">\\(L_{disc}=\\sum^N_{n=1}\\|\\mathrm{softmax}(T(z^{(n)}))-GT^{(n)}\\|_1\\)<\/p><h1>Results<\/h1><p><span style=\"line-height: 1.5em;\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-330 aligncenter\" src=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_results.png\" alt=\"\" width=\"631\" height=\"282\" srcset=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_results.png 631w, https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/gae_results-300x134.png 300w\" sizes=\"auto, (max-width: 631px) 100vw, 631px\" \/>In our experiments, we compare against metric learning methods, which learn only a metric, whereas we learn both an optimal feature mapping and a metric for parent-offspring resemblance. We also evaluated t<\/span><span style=\"line-height: 1.5em;\">hree setups: 1) Generative, which uses only the gated autoencoder without the metric layer; 2) Anthropological, which weights features according to the most genetically meaningful ones identified in anthropological studies; and 3) Discriminative, which learns both the best feature mapping and metric automatically. 
As shown in the table, we found that our discriminative method outperforms the anthropological weights by ~2% and the best-performing metric learner by ~3% on the popular kinship verification dataset <a href=\"http:\/\/www3.ece.neu.edu\/~yunfu\/research\/Kinface\/Kinface.htm\">KinFaceW-I<\/a>.<\/span><\/p><p>Our method, however, is ideally suited to the task of face identification, where we do not simply want to tell whether two images are of the same family, but rather to find an offspring's match in a large pool of parents. For this task, we use the <a href=\"http:\/\/chenlab.ece.cornell.edu\/projects\/KinshipClassification\/index.html\">Family 101<\/a>\u00a0dataset, as shown below. In our experiments, we found that our method performs better than chance for most parent-offspring relationships at low ranks, though the father-son relationship does slightly worse as the rank increases.<\/p><p style=\"text-align: center;\"><img loading=\"lazy\" decoding=\"async\" class=\"alignright size-full wp-image-328\" src=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/Screenshot-2014-08-23-16.02.22.png\" alt=\"\" width=\"1013\" height=\"373\" srcset=\"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/Screenshot-2014-08-23-16.02.22.png 1013w, https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/Screenshot-2014-08-23-16.02.22-300x110.png 300w, https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/files\/2021\/02\/Screenshot-2014-08-23-16.02.22-768x283.png 768w\" sizes=\"auto, (max-width: 1013px) 100vw, 1013px\" \/><\/p><h1>Publicity<\/h1><p><a href=\"http:\/\/today.ucf.edu\/whos-daddy-ucf-students-program-computer-find\/\">UCF Today<\/a>\u00a0| <a href=\"http:\/\/www.myfoxorlando.com\/story\/25825602\/ucf-researchers-designing-new-facial-recognition-technology\">Fox 
Orlando<\/a><\/p><p>https:\/\/www.youtube.com\/watch?v=y0_DzOv8VfY<\/p>\t<\/div>\r\n<\/div>\r\n<\/div>\n\n\t<\/div>\n<\/section>\n\n","protected":false},"featured_media":234,"template":"","fw-portfolio-category":[33],"class_list":["post-226","fw-portfolio","type-fw-portfolio","status-publish","has-post-thumbnail","hentry","fw-portfolio-category-research"],"_links":{"self":[{"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/fw-portfolio\/226","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/fw-portfolio"}],"about":[{"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/types\/fw-portfolio"}],"version-history":[{"count":5,"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/fw-portfolio\/226\/revisions"}],"predecessor-version":[{"id":355,"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/fw-portfolio\/226\/revisions\/355"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/media\/234"}],"wp:attachment":[{"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/media?parent=226"}],"wp:term":[{"taxonomy":"fw-portfolio-category","embeddable":true,"href":"https:\/\/enriquegortiz.com\/wordpress\/enriquegortiz\/wp-json\/wp\/v2\/fw-portfolio-category?post=226"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}