
Current Face Recognition Technology Perpetuates Racial and Gender Bias

Written by Christina Nieves

Face recognition technology is the automated process of attempting to identify a person by comparing their image against other images or videos of human faces.[1] This biometric technology runs on algorithms trained by developers to analyze and quantify the unique physical characteristics of a person’s face[2] to generate a digital faceprint analogous to a digital fingerprint.[3] The algorithm then compares the person’s faceprint against other faceprints stored in its gallery database.[4] Lastly, the algorithm produces a numerical score representing the similarity of the person’s features to those of one or more faceprints.[5]
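At its core, the matching step is a similarity search over numeric vectors. The sketch below is a minimal, illustrative Python example rather than any vendor’s actual system: it assumes the faceprints have already been extracted as feature vectors, uses toy four-dimensional vectors, and treats identification as finding the gallery faceprint with the highest cosine similarity above a hypothetical threshold.

```python
# Minimal sketch of the matching step described above: each "faceprint" is a
# numeric feature vector, and identification is a nearest-neighbor search over
# a gallery of enrolled faceprints. Names and the threshold are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two faceprint vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Compare a probe faceprint against every gallery faceprint and return the
    best-scoring identity, or None if no score clears the threshold."""
    scores = {name: cosine_similarity(probe, vec) for name, vec in gallery.items()}
    best_name, best_score = max(scores.items(), key=lambda kv: kv[1])
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy 4-dimensional faceprints; real systems use vectors with hundreds of
# dimensions produced by a trained neural network.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.3, 0.5]),
    "person_b": np.array([0.2, 0.8, 0.6, 0.1]),
}
probe = np.array([0.85, 0.15, 0.35, 0.45])
print(identify(probe, gallery))
```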

This seemingly objective technology appears to be an efficient tool to enhance security in many aspects of life. Some banks use facial recognition to reduce fraud,[6] and some phones and devices require face verification for access.[7] However, facial recognition is often poorly developed and misused, and the consequences are largely felt by women and communities of color, particularly Black and Brown folks with darker skin tones.

Facial recognition combined with discriminatory practices raises privacy concerns and threatens basic civil liberties and rights. In 2020, the Minneapolis Police Department used facial recognition to identify protestors who marched to end police brutality following the death of George Floyd.[8] In 2016, the Detroit Police Department gained access to real-time surveillance video from over 500 locations in the city, where the majority of citizens are Black.[9] Facial recognition, unlike more traditional identification tools like fingerprinting or DNA analysis, can potentially identify people from afar and without their knowledge.[10]

Even if used responsibly, facial recognition technology is often inaccurate. People of color suffer the greatest risk of misidentification.[11] Black and Asian faces are misidentified 10 to 100 times more often than white faces.[12] Young Black women are identified with the poorest accuracy.[13] A misidentification resulting from an unreliable face recognition algorithm could lead to wrongful arrests[14] and even wrongful convictions.[15]

How can this automated technology be racist and sexist?

Factors such as image quality and light exposure can affect performance.[16] But like any other automated technology, facial recognition systems perform only as well as they are configured to perform.[17] Software developers have to train algorithms to recognize faces.[18] If, for example, the software developers are mostly white men who use sets of mostly white male faces to train a facial recognition algorithm, the algorithm will recognize white male faces with the highest accuracy at the expense of faces of other skin tones and genders.[19]
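One way this skew surfaces is in disaggregated error rates. The short sketch below, using synthetic data and hypothetical group labels, shows the kind of per-group audit that exposes the disparity: a system can look accurate overall while misidentifying one demographic group far more often than another.

```python
# Illustrative audit sketch (not any specific vendor's system): computing
# misidentification rates by demographic group. All records are synthetic
# and the group labels are hypothetical.
from collections import defaultdict

# Each record: (demographic_group, true_identity, predicted_identity)
results = [
    ("group_a", "id_01", "id_01"),
    ("group_a", "id_02", "id_02"),
    ("group_a", "id_03", "id_03"),
    ("group_b", "id_04", "id_09"),   # misidentification
    ("group_b", "id_05", "id_05"),
    ("group_b", "id_06", "id_11"),   # misidentification
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, true_id, predicted_id in results:
    totals[group] += 1
    if predicted_id != true_id:
        errors[group] += 1

# Overall accuracy can mask large differences between groups.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: misidentification rate = {rate:.0%}")
```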

The limitations of facial recognition technology are well documented, but numerous law enforcement and government agencies continue to use it. A 2022 report from the United States Government Accountability Office (GAO) found that most of the 24 federal agencies surveyed use facial recognition, including the Department of Homeland Security.[20] Four federal agencies used Clearview AI, a controversial commercial facial recognition company that attempts to identify faces by matching them against images scraped from public social media accounts and other publicly available sources.[21]

Recognizing the technology’s substantial risks, cities like Oakland and San Francisco have passed limitations on the use of facial recognition technology, while states like Oregon and Massachusetts have passed broader, statewide bans.[22] But our civil rights and liberties should not vary so much depending on the city or state in which we find ourselves. Rather than patchwork legislation, federal law is needed to give people nationwide a clearer understanding of their rights regarding facial recognition. On February 16, 2023, President Biden signed Executive Order 14091 on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government, which, in part, instructs agencies to ensure that their use of artificial intelligence advances equity.[23] This is a step in the right direction, but it may still not be enough.

Face recognition technology offers benefits that could make society safer, but it should make society safer for everyone. The use of face recognition by law enforcement agencies, from the local to the federal level, is inevitable. But the biases that the algorithms learn during development are preventable. Until these biases are eliminated, a moratorium on facial recognition technology should be enforced.

[1] Joy Buolamwini et al., Facial Recognition Technologies: A Primer, Algorithmic Justice League 1, 5 (May 29, 2020), https://assets.website-files.com/5e027ca188c99e3515b404b7/5ed1002058516c11edc66a14_FRTsPrimerMay2020.pdf.

[2] Id. at 8.

[3] Id. at 10.

[4] Id.

[5] Clare Garvie et al., The Perpetual Line-Up: Unregulated Police Face Recognition In America, Georgetown L.: Ctr. on Priv. & Tech. 1, 9 (Oct. 18, 2016), https://www.perpetuallineup.org/sites/default/files/2016-12/The%20Perpetual%20Line-Up%20-%20Center%20on%20Privacy%20and%20Technology%20at%20Georgetown%20Law%20-%20121616.pdf.

[6] Katyanna Quach, Banks across America test facial recognition cameras ‘to spy on staff, customers’, The Register (Apr. 24, 2021), https://www.theregister.com/2021/04/24/in_brief_ai/.

[7] Buolamwini et al., supra note 1, at 7.

[8] Bryan McMahon, How the Police Use AI to Track and Identify You, The Gradient (Oct. 3, 2020), https://thegradient.pub/how-the-police-use-ai-to-track-and-identify-you/.

[9] Alfred Ng, Facial recognition has always troubled people of color. Everyone should listen, CNET (June 12, 2020, 5:00 AM), https://www.cnet.com/news/politics/facial-recognition-has-always-troubled-people-of-color-everyone-should-listen/.

[10] Alvaro M. Bedoya, You Cannot Encrypt Your Face, The Atlantic (May 5, 2017), https://www.theatlantic.com/technology/archive/2017/05/you-cannot-encrypt-your-face/524073/.

[11] Drew Harwell, Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use, The Washington Post (Dec. 19, 2019, 6:43 PM), https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/.

[12] Daniele Selby, How Racial Bias Contributes to Wrongful Conviction, Innocence Project (June 17, 2021), https://innocenceproject.org/how-racial-bias-contributes-to-wrongful-conviction/.

[13] Id.

[14] Id.

[15] Tate Ryan-Mosley, The new lawsuit that shows facial recognition is officially a civil rights issue, MIT Technology Review (Apr. 14, 2021), https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/.

[16] Buolamwini et al., supra note 1, at 16.

[17] Garvie et al., supra note 5, at 54.

[18] Buolamwini et al., supra note 1, at 10.

[19] Garvie et al., supra note 5, at 53-54.

[20] U.S. Gov’t Accountability Off., GAO-22-106100, Facial Recognition Technology: Federal Agencies’ Use and Related Privacy Protections (June 29, 2022).

[21] Id. at 9.

[22] Taylor Hatmaker & Zack Whittaker, Massachusetts lawmakers vote to pass a statewide police ban on facial recognition, TechCrunch (Dec. 1, 2020, 4:00 PM), https://techcrunch.com/2020/12/01/massachusetts-votes-to-pass-statewide-police-ban-on-facial-recognition/.

[23] Exec. Order No. 14091, 88 Fed. Reg. 10825 (Feb. 16, 2023).
