Blackbox: Online voting in the 2020 elections

Written By: Michael Walsh

A Byte of Online Voting

Sorry, you cannot vote online in the primaries or in the presidential election this year. That is, unless you have been selected to participate in one of the few small-scale pilot programs, such as the DemocracyLive system in Seattle, Washington, the Voatz platform in West Virginia, or, most recently, the Shadow voting tool used for the 2020 Iowa caucuses just a few weeks ago.[1] These voting tools use blockchain technology to generate a unique hash for each vote.[2] To mitigate the risk of election tampering, votes are submitted, but not counted, electronically.[3] Each electronic submission is verified against a printed version of the ballot, and the printed ballots are then tallied to calculate the total number of votes.[4] These electronic systems are usually deployed in areas where voter turnout is low or voting is only possible by remote means.[5] Ideally, services like these could help improve voter turnout in the United States, a country in which less than 56% of voting-age adults participated in the 2016 presidential election.[6]
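To see why a per-vote hash matters, consider a minimal sketch of a blockchain-style ledger (this is illustrative only, not the proprietary Voatz or DemocracyLive implementation): each ballot's hash incorporates the previous entry's hash, so altering any recorded vote invalidates every subsequent entry.

```python
import hashlib
import json

def record_vote(chain, ballot):
    """Append a ballot to the chain, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"ballot": ballot, "prev_hash": prev_hash}
    # The hash covers both the ballot and the previous hash, so tampering
    # with any earlier entry breaks the links of all later ones.
    entry["hash"] = hashlib.sha256(
        json.dumps({"ballot": ballot, "prev_hash": prev_hash},
                   sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Re-derive every hash and confirm each entry links to its predecessor."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        expected = hashlib.sha256(
            json.dumps({"ballot": entry["ballot"],
                        "prev_hash": entry["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain = []
record_vote(chain, {"voter_id": "anon-001", "choice": "Candidate A"})
record_vote(chain, {"voter_id": "anon-002", "choice": "Candidate B"})
assert verify_chain(chain)
chain[0]["ballot"]["choice"] = "Candidate B"  # simulated tampering
assert not verify_chain(chain)
```

This is also why the paper-ballot step remains essential: the chain proves only that recorded votes were not altered afterward, not that they were recorded correctly in the first place.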

There is little federal oversight of online voting infrastructure, but Congress allocated an additional $380 million for voting infrastructure and security improvements,[7] and the U.S. Election Assistance Commission estimates that 85% of those funds will be used by states before the 2020 election.[8] Ideally, those funds will help to alleviate problems in areas with intermittent or low-bandwidth internet connections, such as some of the precincts that experienced problems with the Shadow voting app during the 2020 Iowa caucuses.[9] Additionally, a slew of other bills has been introduced to help secure elections from (predominantly foreign) interference (see S. 2669; H.R. 1946; H.R. 4990).[10] One amendment to the Help America Vote Act (HAVA) of 2002, passed in December 2019, allocated an additional $400 million to help secure voting infrastructure.[11] However, some experts estimate that modernizing and securing current voting infrastructure would cost nearly $2.5 billion, not counting recurring maintenance costs.[12] Modernizing Pennsylvania's infrastructure alone is estimated to cost upwards of $150 million, which accounts for nearly half of the total HAVA funds allotted by Congress.[13]

Election Security Concerns and the 2016 Election

The costs of establishing secure voting infrastructure do not seem so exorbitant when voter trust is considered. 2016 marked the first year in which Russian interference influenced a U.S. presidential election.[14] This foreign interference happened not by meddling with voting infrastructure (which now usually verifies electronically submitted votes against paper ballots), but by alternative means such as phishing, distributed denial-of-service ("DDoS"), and denial-of-service ("DoS") attacks.[15] These attacks will certainly not be the last.[16] In a recent national survey of politicians about cybersecurity risks, "[f]orty percent said they've had an account compromised in a phishing attack. And 60% said they haven't significantly updated the security of their accounts since 2016."[17] Even without direct interference with voting infrastructure, threat actors can make a meaningful difference in the outcome of elections with phishing, DDoS, and DoS attacks on other vectors, including campaign email accounts and insecure servers used by political groups. In response, Microsoft and Google (the companies that provide the most popular email services in the nation) have been implementing security measures to prevent these attacks. Most countermeasures focus on typical information security protections, such as multi-factor authentication and tokenization, along with software-based mitigation techniques, such as spoofing and phishing detection.[18]
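The multi-factor authentication mentioned above typically pairs a password with a time-based one-time password (TOTP, standardized in RFC 6238), which is what authenticator apps generate. A minimal server-side sketch using only the standard library (the shared secret and drift window here are illustrative, not any vendor's actual configuration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    """Derive the RFC 6238 time-based one-time code for a shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((at if at is not None else time.time()) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_totp(secret_b32, submitted, at=None, drift=1):
    """Accept codes from adjacent timesteps to tolerate clock skew."""
    now = at if at is not None else time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
        for step in range(-drift, drift + 1)
    )

secret = base64.b32encode(b"campaign-shared-key!").decode()
code = totp(secret)
assert verify_totp(secret, code)
```

Because the code is derived from a secret never sent over the network, a phished password alone is not enough to take over an account, which is why campaigns are urged to adopt it.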

Experts still have many questions about the security and privacy of electronic voting systems, particularly those that are completely paperless.[19] Nevertheless, some voting this year will be done in select states by phone or PC through the Voatz system (with paper ballot verification).[20] Voatz uses blockchain technology paired with biometric data from users' phones, such as face scans and fingerprints. Although this version of multi-factor authentication may reduce fraudulent voting, it poses serious privacy concerns[21] and does not address other salient security risks of online voting, such as phishing, DDoS, and DoS attacks. Regardless, the future of voting is likely to be a digital one, as a recent study from the University of Chicago suggests. The study estimated that voter turnout could increase by several percentage points,[22] a figure that could compound with the help of universally compatible voting technology.


[1] Emily S. Rueb, Voting by Phone Gets a Big Test, but There Are Concerns, The New York Times (Jan. 23, 2020), [].

[2] Voatz, Frequently Asked Questions, [].

[3] Id.

[4] Id.

[5] Rueb, supra note 1; Emily Dreyfuss, Smartphone Voting Is Happening, but No One Knows if It’s Safe, Wired (Aug. 9, 2018), [].

[6] Drew Desilver, U.S. Trails Most Developed Countries in Voter Turnout, Pew Research Center (May 21, 2018), [].

[7] The Impact of HAVA Funding on the 2018 Elections, U.S. Election Assistance Commission (2019), [].

[8] Id.; U.S. Senate Committee on Rules and Administration Oversight of the Election Assistance Commission, U.S. Election Assistance Commission (May 15, 2019), []; Elizabeth Howard, Defending Elections: Federal Funding Needs for State Election Security, The Brennan Center (July 18, 2019), [].

[9] Kevin Roose, The Only Safe Election Is a Low-Tech Election, The New York Times (Feb. 4, 2020), []; Nick Corasaniti, Sheera Frenkel and Nicole Perlroth, App Used to Tabulate Votes Is Said to Have Been Inadequately Tested, The New York Times (Feb. 3, 2020), []; Keith Collins, Denise Lu and Charlie Smart, We Checked the Iowa Caucus Math. Here's Where It Didn't Add Up, The New York Times (Feb. 14, 2020), [].

[10] S.2669, 116th Cong. (2019); H.R. 1946, 116th Cong. (2019); H.R. 4990, 116th Cong. (2019).

[11] U.S. Election Assistance Commission, How Can The States Use the Funds? (Jan. 6, 2020) []; H.R. 1158 § 501, 116th Cong. (2019).

[12] Lawrence Norden and Edgardo Cortez, What Does Election Security Cost?, The Brennan Center (Aug. 15, 2019), [].

[13] Howard, supra note 8.

[14] U.S. Senate Select Committee on Intelligence, 116th Cong., Report on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 1: Russian Efforts Against Election Infrastructure with Additional Views, []; Andy Greenberg, Feds' Damning Report on Russian Election Hack Won't Convince Skeptics, Wired (Jan. 6, 2017), []; David E. Sanger and Catie Edmondson, Russia Targeted Election Systems in All 50 States, Report Finds, The New York Times (July 25, 2019), [].

[15] Andy Greenberg, Everything We Know About Russia's Election-Hacking Playbook, Wired (June 9, 2017), []; Shannon Bond, 2020 Political Campaigns Are Trying To Avoid A 2016-Style Hack, Nat'l Pub. Radio (Jan. 28, 2020), []; Jeremy Ashkenas, Was It a 400-Pound, 14-Year-Old Hacker, or Russia? Here's Some of the Evidence, The New York Times (Jan. 26, 2017), [].

[16] Miles Parks, Russian Hackers Targeted The Most Vulnerable Part Of U.S. Elections Again, Nat’l Pub. Radio (July 28, 2018), []; Shannon Bond, Microsoft Says Iranians Tried To Hack U.S. Presidential Campaign, Nat’l Pub. Radio (Oct. 4, 2019), [].

[17] Bond, supra note 15.

[18] Tom Burt, Protecting Democracy with Microsoft AccountGuard, Microsoft Blog (August 20, 2018), []; Lily Hay Newman, Google’s Giving Out Security Keys to Help Protect Campaigns, Wired (Feb. 11, 2020), [].

[19] David Jefferson et al., What We Don't Know About the Voatz "Blockchain" Internet Voting System (May 1, 2019), []; Michael A. Specter et al., The Ballot Is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections, Mass. Inst. of Tech., []; Abby Abazorius, MIT Researchers Identify Security Vulnerabilities in Voting App, MIT News (Feb. 13, 2020), []; Robby Mook et al., Cybersecurity Campaign Playbook, Harv. Kennedy School Belfer Center (Nov. 2017), []; Miles Parks, In 2020, Some Americans Will Vote On Their Phones. Is That The Future?, Nat'l Pub. Radio (Nov. 7, 2019), [].

[20] Voatz, supra note 2.

[21] Jefferson, supra note 19.

[22] David Stone, West Virginia Was the First State to Use Mobile Voting. Should Others Follow?, U. of Chi. (July 30, 2019), []; Anthony Fowler, Promises and Perils of Mobile Voting, U. of Chi. (June 2019), [].


Facial Recognition: Your Face is Being Stored and We’re Not Prepared to Stop It

Written by: Bryce Hoyt

In 2017, Australian tech entrepreneur Hoan Ton-That founded Clearview AI (Clearview), a startup backed by billionaire Peter Thiel, with the goal of creating cutting-edge facial recognition technology.[1] Two years later, Clearview emerged with the refined technology and began selling it to law enforcement agencies and private investigators across the U.S. and Canada.[2] The technology works by uploading a picture of a suspected criminal to the software; a sophisticated algorithm then automatically compares the picture to Clearview's database of over 3 billion photos scraped from publicly available pictures online (e.g., social media sites), attempting to discover the person's identity using unique biometric indicators such as the distance between the eyes or the shape of the chin.[3] If a match is found, the matching images are presented alongside the social media links where they were found.[4]
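Clearview's algorithm is proprietary, but facial recognition systems of this kind generally reduce each face to a numeric feature vector (an "embedding") and rank database photos by vector similarity. A hypothetical sketch of that matching step, with toy vectors and made-up URLs standing in for real embeddings and profiles:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embeddings; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def find_matches(probe, database, threshold=0.92):
    """Return (url, score) pairs whose embeddings are close enough to the probe,
    best match first."""
    results = [
        (url, cosine_similarity(probe, emb))
        for url, emb in database.items()
        if cosine_similarity(probe, emb) >= threshold
    ]
    return sorted(results, key=lambda r: r[1], reverse=True)

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions
# derived from features such as eye spacing and chin shape.
database = {
    "https://example.social/profile/alice": [0.9, 0.1, 0.3, 0.2],
    "https://example.social/profile/bob":   [0.1, 0.8, 0.2, 0.7],
}
probe = [0.88, 0.12, 0.31, 0.18]  # embedding of the uploaded photo
matches = find_matches(probe, database)
```

The threshold is the policy-relevant knob: set it low and the system returns more false matches of innocent people; set it high and it misses genuine ones, which is one reason accuracy claims deserve scrutiny.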

So far, over 600 law enforcement agencies in North America have started using the Clearview software to help solve shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation cases.[5] Law enforcement agencies are only permitted to use the technology to generate leads and cannot yet use its results as evidence in court.[6] Ton-That claims the software is 99% accurate and does not produce higher error rates when searching for people of color, a common issue and concern with other facial recognition tools.[7]

Although Ton-That continues to remind the public that this tool is used only for investigative purposes to solve crimes, many people remain skeptical. New Jersey Attorney General Gurbir Grewal said he was disturbed when he learned about Clearview and ordered law enforcement in the state to stop using the technology until a full review of the company's data privacy and cybersecurity practices is completed.[8] Additional reports have indicated that Clearview has given access to other clients, including commercial businesses and billionaires.[9] Ton-That denies any commercial authorization of Clearview; however, the fear remains.

Such a controversial and unprecedented technology does not come without legal ramifications and investigation. Tech giants including Twitter, Google, YouTube, and Facebook have sent cease-and-desist letters to Clearview for scraping their data, echoing the 2018 Cambridge Analytica scandal.[10] Ton-That defends the collection of data, claiming that because the pictures are taken from the public domain, Clearview has a First Amendment right to the publicly available information.[11]

Tech giants aren’t the only ones challenging Clearview’s practices. In January of this year, the American Civil Liberties Union (ACLU) filed a class action lawsuit against Clearview in Illinois, one of the only states with a biometric privacy law.[12] The complaint alleges a violation of the Illinois Biometric Information Privacy Act (BIPA) for failing to obtain individuals’ informed written consent, as the act requires, before collecting and using their biometric data, including facial scans.[13] The ACLU expressed its concern with Clearview, claiming that such a powerful and unregulated technology might lead to governmental tracking of vulnerable communities such as sexual assault victims and undocumented immigrants, which is the exact sort of behavior privacy legislation is intended to prevent.[14] The ACLU is seeking a court order forcing Clearview to delete all photos of Illinois residents gathered without consent and to prevent any further gathering until the organization is in compliance with the BIPA.[15] Clearview would not be the first organization to have violated the BIPA. This January, Facebook paid $550 million to settle a class action over a BIPA violation involving its “photo tagging” feature, after losing its appeal in the Ninth Circuit in 2019.[16]

The fears of the ACLU are not unfounded: law enforcement agencies across North America have started using Clearview to identify child victims of sexual assault, some as young as 13 years old, in order to locate them and attempt to get a statement.[17] Many supporters of the technology claim that it is the biggest breakthrough of the last decade for child sexual abuse crimes, but many worry about the potential harms of amassing such sensitive data.[18] Privacy advocates remain reluctant to support such technology until it is tested and regulated. Liz O’Sullivan, the technology director at the Surveillance Technology Oversight Project, commented, “[t]he exchange of freedom and privacy for some early anecdotal evidence that it might help some people is wholly insufficient to trade away our civil liberties.”[19]

Beyond Clearview, facial recognition software has moved into commercial use, including airports, public venues, and, most recently, public schools.[20] The school district in the small town of Lockport, New York, was one of the first known public school systems in the U.S. to adopt facial recognition, despite pushback from the community.[21] The technology was installed to scan for weapons and monitor individuals entering the school, comparing faces against a curated database of prohibited individuals such as sex offenders and barred students and employees.[22] A few cities, including San Francisco, have banned the use of facial recognition tools in their communities, even within law enforcement agencies.[23] Although well intentioned, this unique technology presents many privacy concerns that are better discussed and reconciled before it becomes common practice.

With facial recognition in the spotlight and concern over unintended repercussions growing, a few tech companies, including IBM, have announced that they will no longer sell facial recognition services, urging a national dialogue on whether the technology should be used at all.[24] Critics of this public statement note an additional motive: facial recognition software has not been profitable for IBM up to this point.[25] It also remains unclear whether IBM will continue to research and develop such technology after halting sales. Amazon announced that it is placing a one-year moratorium on police use of its facial recognition technology in response to pushback from civil rights groups and police-reform advocates.[26] Microsoft followed suit in a statement the same week, saying it will no longer sell facial recognition software to police in the U.S. until there is a federal law regulating the technology.[27]

Facial recognition technology has also gained attention from legislators, resulting in numerous state bills and proposed federal legislation.[28] Among the bills circulating at the state level, a controversial California bill, which would have allowed businesses and government agencies to use facial recognition technology without consent for safety and security purposes upon probable cause, has stalled in the legislature.[29] The bill would also have followed the California Consumer Privacy Act (CCPA) by requiring state and local agencies to inform consumers about the facial recognition technology before using it for reasons not related to public safety.[30] Opponents of the bill, including the ACLU and the Electronic Frontier Foundation (EFF), claimed that it would have set very minimal standards for the use of the technology and failed to address many of the privacy concerns related to face surveillance.[31]

As of March, Washington state has enacted the first U.S. state law limiting the use of facial recognition technology by law enforcement.[32] The law (SB 6280), backed by Microsoft, restricts the use of facial recognition technology in several ways: (1) government agencies must now obtain a warrant to run facial recognition scans (except in exigent circumstances); (2) the software must pass independent testing to ensure its accuracy; (3) any state or local government agency intending to use such technology must file with a legislative authority a notice of intent to develop, procure, or use a facial recognition service, specifying the purpose for which the technology is to be used; and (4) any such agency must develop a comprehensive accountability report outlining the purpose of the use, the type of data the technology collects, and various other clarifications of protocol.[33]

Critics of the bill point out that it was sponsored by State Senator Joe Nguyen, who is currently employed at Microsoft, which is perhaps why the bill places far fewer restrictions on commercial development or sale of the technology.[34] The ACLU was also quick to rebut the bill, stating that although the safeguards it offers are better than none, anything short of a facial recognition ban will not protect civil liberties.[35]

At the federal level, a bipartisan bill referred to as the “Commercial Facial Recognition Privacy Act” has been introduced in the Senate, designed to provide legislative oversight of commercial applications of facial recognition technology.[36] The bill would require companies to obtain explicit user consent before collecting any facial recognition data and would limit companies from sharing the data with third parties.[37] The bill, also endorsed by Microsoft, addresses the commercial side of facial recognition technology that Washington state’s law fails to acknowledge. The consent requirements mimic many other privacy laws: a company must obtain affirmative consent before using the technology, provide the user with concise notice of the technology’s capabilities and limitations, state the specific purpose for which the technology is being employed, and provide a brief description of the processor’s data retention and deidentification practices.[38] The company is thereby limited to the purpose of which it informed the user and must obtain additional affirmative consent if it wishes to share the data with a third party or repurpose the data.[39]

Regardless of whether the bill survives, the proposed legislation provides insight into the mind of Congress and shows the willingness of tech giants to help navigate a more informed and regulated route through new technological advances such as facial recognition. The macro and micro consequences of such an innovative yet frightening tool warrant skepticism, but perhaps we can find the middle ground between civil advocates and fast-paced tech executives to forge a more privacy-conscious future.

[1] See Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, The New York Times (Jan. 18, 2020),

[2] Id.

[3] See Donie O’Sullivan, This man says he’s stockpiling billions of our photos, CNN (Feb. 10, 2020),

[4] Id.

[5] Hill, supra note 1.

[6] O’Sullivan, supra note 3.

[7] Id.

[8] Id.

[9] See Kashmir Hill, Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich, The New York Times (Mar. 5, 2020); see also Ben Gilbert, Clearview AI scraped billions of photos from social media to build a facial recognition app that can ID anyone — here’s everything you need to know about the mysterious company, Business Insider (Mar. 6, 2020),

[10] Alfred Ng, Steven Musil, Clearview AI hit with cease-and-desist from Google, Facebook over facial recognition collection, cnet (Feb. 5, 2020),

[11] Id.

[12] See Alfred Ng, Clearview AI faces lawsuit over gathering people’s images without consent, cnet (May 28, 2020); see also Angelique Carson, ACLU files class-action vs. Clearview AI under biometric privacy law, iapp (May 29, 2020),

[13] Id.

[14] Ng, supra note 12.

[15] Id.

[16] See Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019); see also Corinne Reichert, Facebook pays $550M to settle facial recognition privacy lawsuit, cnet (Jan. 29, 2020),

[17] Kashmir Hill, Gabriel J.X. Dance, Clearview’s Facial Recognition App Is Identifying Child Victims of Abuse, The New York Times (Feb. 10, 2020),

[18] Id.

[19] Id.

[20] Davey Alba, Facial Recognition Moves Into a New Front: Schools, The New York Times (Feb. 6, 2020),

[21] Id.

[22] Id.

[23] Id.

[24] Devin Coldewey, IBM ends all facial recognition business as CEO calls out bias and inequality, TechCrunch (June 8, 2020),

[25] Id.

[26] Bobby Allyn, Amazon Halts Police Use Of Its Facial Recognition Technology, npr (Jun. 10, 2020),

[27] Brian Fung, Tech companies push for nationwide facial recognition law. Now comes the hard part, CNN (June 13, 2020),

[28] See Taylor Hatmaker, Bipartisan bill proposes oversight for commercial facial recognition, TechCrunch (Mar. 14, 2019),

[29] Ryan Johnston, Facial recognition bill falls flat in California legislature, statescoop (Jun. 4, 2020),

[30] Id.

[31] Id.

[32] Paresh Dave, Jeffrey Dastin, Washington State signs facial recognition curbs into law; critics want ban, Reuters (Mar. 31, 2020),

[33] See S.B. 6280, 66th Leg., Reg. Sess. (Wash. 2020).

[34] Dave Gershgorn, A Microsoft Employee Literally Wrote Washington’s Facial Recognition Law, OneZero (Apr. 2, 2020),

[35] Id.

[36] Hatmaker, supra note 28.

[37] Id.

[38] See S. 847, 116th Cong. (2019).

[39] Id.
