Facial Recognition: Your Face is Being Stored and We’re Not Prepared to Stop It

Written by: Bryce Hoyt

In 2017, Australian tech entrepreneur Hoan Ton-That founded a startup named Clearview AI (Clearview), backed by billionaire Peter Thiel, with the goal of creating cutting-edge facial recognition technology.[1] Two years later, Clearview emerged with the refined technology and began selling it to law enforcement agencies and private investigators across the U.S. and Canada.[2] The technology works by uploading a picture of a suspected criminal to the software; a sophisticated algorithm then automatically compares the picture against Clearview’s database of over 3 billion photos scraped from publicly available sources online (e.g., social media sites), using biometric indicators such as the distance between the eyes or the shape of the chin to try to discover the person’s identity.[3] If a match is found, the matching images are presented alongside the social media links where they were found.[4]
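For readers curious about the mechanics, the kind of matching described above can be thought of as a nearest-neighbor search over numeric face “embeddings.” The following is a minimal sketch of that general idea only, assuming a hypothetical gallery and made-up URLs; it is not Clearview’s implementation, and the embedding step is simulated with random vectors.

```python
# Purely illustrative sketch of how a face-search system matches a probe photo
# against a large gallery; this is NOT Clearview's actual code or algorithm.
# A real system would compute "embeddings" (numeric vectors encoding biometric
# features such as the spacing of the eyes) with a trained model; here the
# vectors are random stand-ins so the example runs on its own.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: one 128-dimensional embedding per scraped photo,
# plus the URL where the photo was found.
gallery = rng.normal(size=(1_000, 128))
sources = [f"https://example.com/photo/{i}" for i in range(1_000)]

def top_matches(query: np.ndarray, k: int = 5):
    """Return the k gallery entries most similar to the query embedding."""
    # Cosine similarity: normalize all vectors, then take dot products.
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = g @ q
    best = np.argsort(scores)[::-1][:k]
    return [(sources[i], float(scores[i])) for i in best]

# Embedding of the probe photo (in practice produced by the face model);
# simulated here as a noisy copy of gallery photo #42.
probe = gallery[42] + rng.normal(scale=0.05, size=128)
for url, score in top_matches(probe):
    print(f"{score:.3f}  {url}")
```

If a gallery photo scores above some similarity threshold, a system like the one described would surface that photo and its source link as a possible identity lead.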

So far, over 600 law enforcement agencies in North America have started using the Clearview software to help solve shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation cases.[5] Law enforcement agencies are only permitted to use the technology to generate leads and cannot yet introduce its results as evidence in court.[6] Ton-That claims the software is 99% accurate and does not produce higher error rates when searching for people of color, a common issue and concern with other facial recognition tools.[7]

Although Ton-That continues to remind the public that the tool is used only for investigative purposes to solve crimes, many people remain skeptical. New Jersey’s attorney general, Gurbir Grewal, said he was disturbed when he learned about Clearview and ordered law enforcement in the state to stop using the technology until a full review of the company’s data privacy and cybersecurity practices is completed.[8] Additional reports indicate that Clearview has given access to other clients, including commercial businesses and billionaires.[9] Ton-That denies authorizing any commercial use of Clearview; the fear, however, remains.

Such a controversial and unprecedented technology does not come without legal ramifications and scrutiny. Tech giants including Twitter, Google, YouTube, and Facebook have sent cease-and-desist letters to Clearview for scraping their data, echoing the 2018 Cambridge Analytica scandal.[10] Ton-That defends the data collection, claiming that because the pictures are taken from the public domain, Clearview has a First Amendment right to the publicly available information.[11]

Tech giants aren’t the only ones challenging Clearview’s practices. In May of this year, the American Civil Liberties Union (ACLU) filed a class action lawsuit against Clearview in Illinois, one of the only states with a biometric privacy law.[12] The complaint alleges that Clearview violated the Illinois Biometric Information Privacy Act (BIPA) by failing to obtain the informed written consent the act requires before collecting and using a person’s biometric data, including scans of face geometry.[13] The ACLU expressed its concern that such a powerful and unregulated technology can lead to governmental tracking of vulnerable communities, such as sexual assault victims and undocumented immigrants, which is the exact sort of behavior privacy legislation is intended to prevent.[14] The ACLU is seeking a court order forcing Clearview to delete all photos of Illinois residents gathered without consent and to stop any further collection until it is in compliance with BIPA.[15] Clearview would not be the first organization found to have violated BIPA. This January, Facebook agreed to a $550 million class action settlement over a BIPA violation involving its “photo tagging” feature, after losing its appeal in the Ninth Circuit in 2019.[16]

The ACLU’s fears are not unfounded: law enforcement agencies across North America have started using Clearview to identify child victims of sexual abuse, some as young as 13 years old, in order to locate them and attempt to obtain a statement.[17] Many supporters of the technology call it the biggest breakthrough of the last decade for solving child sexual abuse crimes, but others worry about the potential harms of amassing such sensitive data.[18] Privacy advocates remain reluctant to support the technology until it is tested and regulated. Liz O’Sullivan, the technology director at the Surveillance Technology Oversight Project, commented, “[t]he exchange of freedom and privacy for some early anecdotal evidence that it might help some people is wholly insufficient to trade away our civil liberties.”[19]

Beyond Clearview, facial recognition software has moved into commercial and public use, including airports, public venues, and most recently, public schools.[20] The school district in the small town of Lockport, New York, was one of the first known public school systems in the U.S. to adopt facial recognition, despite pushback from the community.[21] The technology was installed to scan for weapons and monitor individuals entering the school, comparing faces against a curated database of prohibited individuals such as sex offenders and barred students and employees.[22] A few cities, including San Francisco, have banned the use of facial recognition tools in their communities, even within law enforcement agencies.[23] Although well intentioned, this unique technology presents many privacy concerns that are better discussed and reconciled before it becomes common practice.

With facial recognition in the spotlight and growing concern over unintended repercussions, a few tech companies, including IBM, have announced that they will no longer sell facial recognition services, urging a national dialogue on whether the technology should be used at all.[24] Critics of this public statement note that an additional motive may stem from the fact that facial recognition software has not been profitable for IBM up to this point.[25] It also remains unclear whether IBM will continue to research and develop the technology after halting sales. Amazon likewise announced a one-year moratorium on police use of its facial recognition technology in response to pushback from civil rights groups and police-reform advocates.[26] Microsoft followed suit in a statement the same week, saying it will no longer sell facial recognition software to police in the U.S. until there is a federal law regulating the technology.[27]

Facial recognition technology has also gained the attention of legislators, resulting in numerous state bills and proposed federal legislation.[28] Among the bills circulating at the state level, a controversial California bill that would have allowed businesses and government agencies to use facial recognition technology without consent for safety and security purposes, given probable cause, has stalled in the legislature.[29] The bill would also have followed the California Consumer Privacy Act (CCPA) by requiring state and local agencies to inform consumers about the facial recognition technology before using it for reasons unrelated to public safety.[30] Opponents of the bill, including the ACLU and the Electronic Frontier Foundation (EFF), argued that it would have set minimal standards for the use of the technology and failed to address many of the privacy concerns raised by face surveillance.[31]

In March, Washington became the first U.S. state to enact a law limiting the use of facial recognition technology by law enforcement.[32] The law (SB 6280), backed by Microsoft, restricts the use of facial recognition technology in several ways: (1) government agencies must obtain a warrant to run facial recognition scans (except in exigent circumstances); (2) the software must pass independent testing to ensure its accuracy; (3) any state or local government agency intending to use such technology must file with a legislative authority a notice of intent to develop, procure, or use a facial recognition service, specifying the purpose for which the technology is to be used; and (4) any state or local government agency intending to use such technology must produce a comprehensive accountability report outlining the purpose of the use, the type of data the technology collects, and various other details of its protocols.[33]

Critics of the bill point out that it was sponsored by State Senator Joe Nguyen, who is currently employed at Microsoft, which is perhaps why the bill places far fewer restrictions on commercial development or sale of the technology.[34] The ACLU was also quick to respond, stating that although the safeguards in the bill are better than none, anything short of a ban on facial recognition will not protect civil liberties.[35]

At the federal level, a bipartisan bill referred to as the “Commercial Facial Recognition Privacy Act” has been introduced in the Senate, designed to provide legislative oversight of commercial applications of facial recognition technology.[36] The bill would require companies to obtain explicit user consent before collecting any facial recognition data and would limit companies’ ability to share the data with third parties.[37] The bill, also endorsed by Microsoft, addresses the commercial side of facial recognition technology that Washington state’s law fails to acknowledge. Its consent requirements mimic many other privacy laws, requiring that a company obtain affirmative consent before using the technology, provide the user with concise notice of the technology’s capabilities and limitations, state the specific purpose for which the technology is being employed, and provide a brief description of the processor’s data retention and deidentification practices.[38] The company is thereby limited to the purpose it disclosed to the user and must obtain additional affirmative consent if it wishes to share the data with a third party or repurpose it.[39]

Regardless of whether the bill survives, the proposed legislation provides insight into the mind of Congress and shows a willingness among tech giants to help chart a more informed and regulated path through new technological advances such as facial recognition. The macro and micro consequences of such an innovative yet frightening tool warrant skepticism, but perhaps we can find middle ground between civil liberties advocates and fast-paced tech executives to forge a more privacy-conscious future.


[1] See Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, The New York Times (Jan. 18, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

[2] Id.

[3] See Donie O’Sullivan, This man says he’s stockpiling billions of our photos, CNN (Feb. 10, 2020), https://www.cnn.com/2020/02/10/tech/clearview-ai-ceo-hoan-ton-that/index.html.

[4] Id.

[5] Hill, supra note 1.

[6] O’Sullivan, supra note 3.

[7] Id.

[8] Id.

[9] See Kashmir Hill, Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich, The New York Times (Mar. 5, 2020), https://www.nytimes.com/2020/03/05/technology/clearview-investors.html. See also Ben Gilbert, Clearview AI scraped billions of photos from social media to build a facial recognition app that can ID anyone — here’s everything you need to know about the mysterious company, Business Insider (Mar. 6, 2020), https://www.businessinsider.com/what-is-clearview-ai-controversial-facial-recognition-startup-2020-3.

[10] Alfred Ng, Steven Musil, Clearview AI hit with cease-and-desist from Google, Facebook over facial recognition collection, cnet (Feb. 5, 2020), https://www.cnet.com/news/clearview-ai-hit-with-cease-and-desist-from-google-over-facial-recognition-collection/.

[11] Id.

[12] See Alfred Ng, Clearview AI faces lawsuit over gathering people’s images without consent, cnet (May 28, 2020), https://www.cnet.com/news/clearview-ai-faces-lawsuit-over-gathering-peoples-images-without-consent/. See also Angelique Carson, ACLU files class-action vs. Clearview AI under biometric privacy law, iapp (May 29, 2020), https://iapp.org/news/a/aclu-files-class-action-vs-clearview-ai-under-biometric-privacy-law/.

[13] Id.

[14] Ng, supra note 12.

[15] Id.

[16] See Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019). See also Corrine Reichert, Facebook pays $550M to settle facial recognition privacy lawsuit, cnet (Jan. 29, 2020), https://www.cnet.com/news/facebook-pays-up-550m-for-facial-recognition-privacy-lawsuit/.

[17] Kashmir Hill, Gabriel J.X. Dance, Clearview’s Facial Recognition App Is Identifying Child Victims of Abuse, The New York Times (Feb. 10, 2020), https://www.nytimes.com/2020/02/07/business/clearview-facial-recognition-child-sexual-abuse.html.

[18] Id.

[19] Id.

[20] Davey Alba, Facial Recognition Moves Into a New Front: Schools, The New York Times (Feb. 6, 2020), https://www.nytimes.com/2020/02/06/business/facial-recognition-schools.html.

[21] Id.

[22] Id.

[23] Id.

[24] Devin Coldewey, IBM ends all facial recognition business as CEO calls out bias and inequality, TechCrunch (June 8, 2020), https://techcrunch.com/2020/06/08/ibm-ends-all-facial-recognition-work-as-ceo-calls-out-bias-and-inequality/.

[25] Id.

[26] Bobby Allyn, Amazon Halts Police Use Of Its Facial Recognition Technology, npr (Jun. 10, 2020), https://www.npr.org/2020/06/10/874418013/amazon-halts-police-use-of-its-facial-recognition-technology.

[27] Brian Fung, Tech companies push for nationwide facial recognition law. Now comes the hard part, CNN (June 13, 2020), https://www.cnn.com/2020/06/13/tech/facial-recognition-policy/index.html.

[28] See Taylor Hatmaker, Bipartisan bill proposes oversight for commercial facial recognition, TechCrunch (Mar. 14, 2019), https://techcrunch.com/2019/03/14/facial-recognition-bill-commercial-facial-recognition-privacy-act/.

[29] Ryan Johnston, Facial recognition bill falls flat in California legislature, statescoop (Jun. 4, 2020), https://statescoop.com/facial-recognition-bill-falls-flat-in-california-legislature/.

[30] Id.

[31] Id.

[32] Paresh Dave, Jeffrey Dastin, Washington State signs facial recognition curbs into law; critics want ban, Reuters (Mar. 31, 2020), https://www.reuters.com/article/us-washington-tech/washington-state-signs-facial-recognition-curbs-into-law-critics-want-ban-idUSKBN21I3AS.

[33] See S.B. 6280, 66th Leg., Reg. Sess. (Wash. 2020).

[34] Dave Gershgorn, A Microsoft Employee Literally Wrote Washington’s Facial Recognition Law, OneZero (Apr. 2, 2020), https://onezero.medium.com/a-microsoft-employee-literally-wrote-washingtons-facial-recognition-legislation-aab950396927.

[35] Id.

[36] Hatmaker, supra note 28.

[37] Id.

[38] See S. 847, 116th Cong. (2019).

[39] Id.

Privacy Amid a Global Pandemic

Written By: Bryce Hoyt

In the wake of the massive changes brought on by COVID-19, the IAPP (International Association of Privacy Professionals) partnered with EY (Ernst & Young) to launch a research initiative to gain more insight into the ways privacy and data protection practices have been affected by the pandemic. They surveyed 933 privacy professionals between April 8 and 20.[1] Although working remotely was not entirely unfamiliar to many people, the survey found that 45% of organizations adopted a new technology or contracted with a new vendor to enable remote work due to the pandemic.[2]

Given the severity and urgency of a pandemic that produced “stay at home” orders, around 60% of organizations rolling out new work-from-home (WFH) technology either skipped or expedited a privacy or security review.[3] On top of existing obligations, the pandemic forced privacy professionals to add an array of new concerns to their agendas. When asked how their organizations’ priorities have changed, about half (48%) said that safeguarding against attacks and threats has become a higher priority.[4] Understandably, many otherwise cautious people must now navigate most of their lives through technology that is somewhat unfamiliar, likely over a less secure home network.

Unsurprisingly, a recent study by the Information Systems Audit and Control Association (ISACA) found that many companies have seen an increase in the number of cyberattacks since the pandemic began.[5] Additionally, since January 1 the FTC has received over 61,000 reports amounting to more than $45 million in total fraud losses.[6] The top four categories of complaints are: (1) travel- and vacation-related reports about cancellations and refunds, (2) reports about online shopping issues, (3) mobile texting scams, and (4) government and business imposter scams.[7] Many of the phishing scams have targeted college students and international supply chain companies.[8] The scam often takes the form of an email claiming to provide important information and resources, such as details about the coronavirus relief fund (CARES Act) or fake health advice or vaccine information purportedly from the Centers for Disease Control and Prevention (CDC).[9] These emails often have you “log in” through an unprotected link, where they capture your personal information, or have you download a document that installs malware on your computer, which can further harvest personal information and track your activity.[10]

Hacking has also been on the rise, now targeting organizations in the healthcare sector. Among those attacked, the University of California San Francisco (UCSF), which has been instrumental in sampling and antibody testing for COVID-19, confirmed that it was the target of a ransomware attack.[11] Ransomware attackers generally gain access to secured information and threaten to publish or delete the data unless a monetary payment is made.[12] Additionally, the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) issued a public service announcement warning organizations researching COVID-19 that they may have been compromised by Chinese cyber threat actors.[13] It appears the race to find a cure has resulted in international intelligence gathering and potential intellectual property theft; however, most of these incidents are still under investigation.

Along with the embarrassing unintended consequences of working behind a webcam at home, additional privacy concerns arise when otherwise protected and privileged conversations that would normally take place at work are instead conducted from home over a virtual program. For example, therapy sessions, confidential business meetings, college courses and exams, and court hearings are all being held online, and the reliability of the protections for that data is being questioned.[14] There’s a saying in Silicon Valley: “[i]f the product is free, you are the product.”[15] Many videoconferencing companies have been scrambling to adapt to the rapid growth in demand and concern for their products, battling complaints and even lawsuits alleging faulty data protection.[16]

The standout brand Zoom, which we’ve all become familiar with, saw a surge to 200 million users in March compared to just 10 million the previous year.[17] Although many companies have sought to delay the enforcement date of the California Consumer Privacy Act (CCPA), fearing they are not prepared to handle potential data requests because of the coronavirus, California Attorney General Xavier Becerra’s office has made it clear that enforcement is still set to begin on July 1.[18] Furthermore, the European Data Protection Board (EDPB) released a statement on the processing of personal data in the context of the pandemic, clarifying the role of the General Data Protection Regulation (GDPR) during this emergency.[19] The statement emphasized the lawfulness of processing personal data in such an emergency, pointing to provisions such as Article 9, which allows competent public health authorities and employers to process otherwise protected health data for reasons of substantial public interest in the area of public health.[20] This means companies may collect and share information about their employees’ COVID-19 status to ensure public safety, so long as the collection is properly limited and not communicated beyond what is necessary; the EDPB urges companies to aggregate and anonymize the data whenever possible.[21] According to those surveyed by the IAPP, about 19% of organizations have shared the names of staff diagnosed with COVID-19 with a third party.[22]
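To make the “aggregate and anonymize” guidance concrete, here is a minimal, purely illustrative sketch, not drawn from the EDPB statement or any particular employer’s practice, of how an organization might report site-level case counts instead of named employee records. The names, sites, and suppression threshold are all hypothetical.

```python
# Purely illustrative sketch of the "aggregate and anonymize" idea, not an
# EDPB-endorsed implementation: report site-level case counts instead of
# named employee records, and suppress counts for sites with very few staff
# to reduce the risk of singling out an individual. All names, sites, and
# the min_cell threshold are hypothetical.
from collections import Counter

# Hypothetical internal records: (employee_name, office_site, covid_positive)
records = [
    ("A. Jones", "Oakland", True),
    ("B. Smith", "Oakland", False),
    ("C. Lee",   "Fresno",  True),
    ("D. Patel", "Fresno",  True),
    ("E. Wong",  "Fresno",  False),
]

def site_counts(rows, min_cell: int = 3):
    """Count positive cases per site, suppressing sites with fewer than
    min_cell total staff, where even a count could identify an individual."""
    positives = Counter(site for _, site, positive in rows if positive)
    totals = Counter(site for _, site, _ in rows)
    return {
        site: positives.get(site, 0) if total >= min_cell else "suppressed"
        for site, total in totals.items()
    }

print(site_counts(records))
# -> {'Oakland': 'suppressed', 'Fresno': 2}
```

The design point is simply that what leaves the organization is an aggregate (counts by location), never the named, record-level health data itself.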

Moving forward, organizations and privacy professionals are working around the clock to ensure compliance with privacy legislation like the GDPR and CCPA and to resolve the issues above as quickly as possible. For example, Google is working with the World Health Organization (WHO) to implement safeguards against the new phishing and malware threats.[23] The FTC is also increasing its efforts to raise awareness of these scams, creating new guides and resources to help the general public navigate the “new normal.”[24] The FTC is likewise sending warning letters to companies falsely promoting a cure or treatment for COVID-19 and maintaining a list of all companies making false claims.[25] Senators have also announced that they intend to introduce federal privacy legislation, dubbed the “COVID-19 Consumer Data Protection Act,” that would preempt state privacy laws.[26] The act is intended to regulate the collection and processing of personal information in connection with the pandemic.[27]

The balancing act between privacy and pandemic interests carries on, and only time will tell whether the response was reasonable. In the meantime, governments and privacy professionals are keeping an eye on newly deployed technologies such as thermal imaging, contact tracing, and video surveillance. Regardless of the efficacy of this emergency privacy legislation, there appears to be growing societal and governmental acknowledgment of the need to protect privacy interests, which leaves many of us hopeful.


[1] Müge Fazlioglu, Privacy in the Wake of COVID-19: Remote Work, Employee Health Monitoring and Data Sharing, International Association of Privacy Professionals (May 2020), https://iapp.org/media/pdf/resource_center/iapp_ey_privacy_in_wake_of_covid_19_report.pdf.

[2] Id. at 5.

[3] Id.

[4] Id.

[5] ISACA, ISACA Survey: Cybersecurity Attacks Are Rising During COVID-19, But Only Half of Organizations Say Their Security Teams Are Prepared for Them, ISACA (April 2020), https://www.isaca.org/why-isaca/about-us/newsroom/press-releases/2020/isaca-survey-cybersecurity-attacks-are-rising-during-covid-19.

[6] Fed. Trade Comm’n, Coronavirus (COVID-19) Consumer Complaint Data (2020), https://www.ftc.gov/system/files/attachments/coronavirus-covid-19-consumer-complaint-data/covid-19-daily-public-complaints-060220.pdf.

[7] Id.

[8] See Sherrod Degrippo, Coronavirus-themed Attacks Target Global Shipping Concerns, proofpoint (Feb. 10, 2020), https://www.proofpoint.com/us/threat-insight/post/coronavirus-themed-attacks-target-global-shipping-concerns. See also Ari Lazarus, COVID-19 scams targeting college students, Fed. Trade Comm’n (May 27, 2020), https://www.consumer.ftc.gov/blog/2020/05/covid-19-scams-targeting-college-students.

[9] See Lazarus, supra note 8. See also Steve Symanovich, Coronavirus phishing emails: How to protect against COVID-19 scams, NortonLifeLock (2020), https://us.norton.com/internetsecurity-online-scams-coronavirus-phishing-scams.html.

[10] Id.

[11] Kartikay Mehrotra, Hackers Target California University Leading Covid-19 Research, Bloomberg (June 3, 2020), https://www.bloomberg.com/news/articles/2020-06-04/hackers-target-california-university-leading-covid-19-research.

[12] Id.

[13] Chinese Malicious Cyber Activity, Cybersecurity & Infrastructure Security Agency (2020), https://www.us-cert.gov/china.

[14] The Editorial Board, Privacy Cannot Be a Casualty of the Coronavirus, The New York Times (Apr. 7, 2020), https://www.nytimes.com/2020/04/07/opinion/digital-privacy-coronavirus.html.

[15] Id.

[16] Hurvitz v. Zoom Video Communications, Inc., No. 2:20-cv-03400 (C.D. Cal. Apr. 12, 2020), https://loevy-content-uploads.s3.amazonaws.com/uploads/2020/04/Todd-Hurvitz-et-al-v.-Zoom-et-al.pdf.

[17] The Editorial Board, supra note 14.

[18] Dustin Gardiner, Coronavirus sparks new fight over California’s internet privacy law, San Francisco Chronicle (May 5, 2020), https://www.sfchronicle.com/politics/article/Coronavirus-sparks-new-fight-over-California-s-15246541.php.

[19] Andrea Jelinek, Statement on the processing of personal data in the context of the COVID-19 outbreak, European Data Protection Board (Mar. 19, 2020), https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_statement_2020_processingpersonaldataandcovid-19_en.pdf.

[20] Id.

[21] Id.

[22] Fazlioglu, supra note 1.

[23] Kim Lyons, Google saw more than 18 million daily malware and phishing emails related to COVID-19 last week, The Verge (Apr. 16, 2020), https://www.theverge.com/2020/4/16/21223800/google-malware-phishing-covid-19-coronavirus-scams.

[24] Fed. Trade Comm’n, Coronavirus Advice for Consumers, Fed. Trade Comm’n (2020), https://www.ftc.gov/coronavirus/scams-consumer-advice.

[25] Lesley Fair, 45 more companies get coronavirus warning letters, Fed. Trade Comm’n (May 7, 2020), https://www.ftc.gov/news-events/blogs/business-blog/2020/05/45-more-companies-get-coronavirus-warning-letters.

[26] Glenn Brown, Senate to Introduce “COVID-19 Consumer Data Protection Act”, The National Law Review (May 6, 2020), https://www.natlawreview.com/article/senate-to-introduce-covid-19-consumer-data-protection-act.

[27] Id.
