Please use this identifier to cite or link to this item: http://hdl.handle.net/1893/34654
Appears in Collections:Psychology Journal Articles
Peer Review Status: Refereed
Title: Simulated Automated Facial Recognition Systems as Decision-Aids in Forensic Face Matching Tasks
Other Titles: Simulated AFRS as decision-aids in face matching
Author(s): Carragher, Daniel J
Hancock, Peter J B
Contact Email: p.j.b.hancock@stir.ac.uk
Keywords: human-algorithm teaming
face recognition
automation
verification
collaborative decision-making
Issue Date: 1-Dec-2022
Date Deposited: 4-Nov-2022
Citation: Carragher DJ & Hancock PJB (2022) Simulated Automated Facial Recognition Systems as Decision-Aids in Forensic Face Matching Tasks [Simulated AFRS as decision-aids in face matching]. <i>Journal of Experimental Psychology: General</i>. https://doi.org/10.1037/xge0001310
Abstract: Automated Facial Recognition Systems (AFRS) are used by governments, law enforcement agencies and private businesses to verify the identity of individuals. While previous research has compared the performance of AFRS and humans on tasks of one-to-one face matching, little is known about how effectively human operators can use these AFRS as decision-aids. Our aim was to investigate how the prior decision from an AFRS affects human performance on a face matching task, and to establish whether human oversight of AFRS decisions can lead to collaborative performance gains for the human-algorithm team. The identification decisions from our simulated AFRS were informed by the performance of a real, state-of-the-art, Deep Convolutional Neural Network (DCNN) AFRS on the same task. Across five pre-registered experiments, human operators used the decisions from highly accurate AFRS (>90%) to improve their own face matching performance compared to baseline (sensitivity gain: Cohen’s d = 0.71-1.28; overall accuracy gain: d = 0.73-1.46). Yet, despite this improvement, AFRS-aided human performance consistently failed to reach the level that the AFRS achieved alone. Even when the AFRS erred only on the face pairs with the highest human accuracy (>89%), participants often failed to correct the system’s errors, while also overruling many correct decisions, raising questions about the conditions under which human oversight might enhance AFRS operation. Overall, these data demonstrate that the human operator is a limiting factor in this simple model of human-AFRS teaming. These findings have implications for the “human-in-the-loop” approach to AFRS oversight in forensic face matching scenarios.
DOI Link: 10.1037/xge0001310
Rights: ©American Psychological Association, 2022. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. The final article is available, upon publication, at: https://doi.org/10.1037/xge0001310
Notes: Output Status: Forthcoming/Available Online

Files in This Item:
File: Carragher_Hancock2022_SimulatedAFRS_accepted.pdf
Description: Fulltext - Accepted Version
Size: 3.91 MB
Format: Adobe PDF

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved https://creativecommons.org/publicdomain/zero/1.0/

If you believe that any material held in STORRE infringes copyright, please contact library@stir.ac.uk providing details and we will remove the Work from public display in STORRE and investigate your claim.