Original title: Does Face Recognition Error Echo Gender Classification Error?

Original abstract: This paper is the first to explore the question of whether images that are classified incorrectly by a face analytics algorithm (e.g., gender classification) are any more or less likely to participate in an image pair that results in a face recognition error. We analyze results from three different gender classification algorithms (one open-source and two commercial), and two face recognition algorithms (one open-source and one commercial), on image sets representing four demographic groups (African-American female and male, Caucasian female and male). For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution than pairs in which both images have correct gender classification, and so are less likely to generate a false match error. For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution (increased false non-match rate) compared to individuals whose images all have correct gender classification. Thus, compared to images that generate correct gender classification, images that generate gender classification errors do generate a different pattern of recognition errors, both better (false match) and worse (false non-match).
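The core of the analysis described above is partitioning image pairs by whether their gender classifications were correct, then comparing error rates between partitions. A minimal sketch of that idea, using synthetic scores and assumed variable names (not the paper's actual data or code):

```python
# Sketch of the pair-partitioning comparison: impostor pairs are split by
# whether both images were gender-classified correctly, and the false match
# rate (FMR) is computed per partition. All scores/flags below are synthetic.

def fmr(scores, threshold):
    """False match rate: fraction of impostor scores at or above threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def fnmr(scores, threshold):
    """False non-match rate: fraction of genuine scores below threshold."""
    return sum(s < threshold for s in scores) / len(scores)

# Each impostor pair: (similarity score, image-1 GC correct?, image-2 GC correct?)
impostor_pairs = [
    (0.31, True, True), (0.18, True, False), (0.42, True, True),
    (0.12, False, True), (0.55, True, True), (0.09, True, False),
]

threshold = 0.5

both_correct = [s for s, a, b in impostor_pairs if a and b]
one_gc_error = [s for s, a, b in impostor_pairs if a != b]

print(f"FMR (both GC correct): {fmr(both_correct, threshold):.3f}")
print(f"FMR (one GC error):    {fmr(one_gc_error, threshold):.3f}")
```

The same split applies to genuine pairs (all-correct vs. mixed gender classification for one individual), comparing `fnmr` instead of `fmr`; the paper's finding is that the one-error impostor partition shows a lower FMR while the mixed genuine partition shows a higher FNMR.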