by bhoyt | Jul 31, 2020 | Legislation
Written by: Bryce Hoyt
Historical and Legal Overview
On June 30, China passed a controversial national security law for Hong Kong that aims to give mainland China more power over the semi-autonomous city. The power to create such a law is found within Hong Kong's mini constitution, the Basic Law, which gives China back-door control over the quasi-independent city.[1] The national security law follows earlier attempts, in 2003 and 2019, to enact security legislation in Hong Kong, each of which failed after immense pushback from citizens.[2] China argues the law is necessary to uphold the sovereignty of the mainland, but many Hong Kong citizens worry that their freedoms are quickly being taken away, diminishing the "one country, two systems" model that China promised before it regained control of the city from the United Kingdom in 1997.[3]
The law, which was not made public until after it was passed, now makes it a crime to support Hong Kong independence and potentially treats vandalizing public property or government premises as terrorist activity.[4] The law also allows mainland Chinese authorities to operate in Hong Kong for the first time and overrides any and all conflicting local laws.[5] It applies to "any person" in Hong Kong, and most of the crimes it creates are defined very broadly. The law lists four new offenses: secession (breaking away from the country), subversion (undermining the power or authority of the central government), terrorism (using violence or intimidation against people), and collusion with foreign powers.[6]
If a foreign national violates the law overseas, they may still be charged if they ever return to the city.[7] Beijing plans to establish a national security office in Hong Kong, staffed by mainland officials, to oversee enforcement of the law. Hong Kong courts will hear national security cases, but Beijing has the power to take over a case, and its decisions cannot be legally challenged.[8] If a case involves "state secrets or public order," it will be tried behind closed doors with no jury.[9] Any person found in violation of the law may face a maximum sentence of life imprisonment.[10] Hong Kong is also mandated to carry out education on national security, including new curriculum and social organizations meant to inform the public of the importance of national security.[11]
How Does This Impact Privacy?
Under Article 9, enforcement of the new law allows police to employ covert online surveillance and to wiretap those suspected of any of its crimes.[12] Because the crimes are so broadly defined and carry such severe consequences, many Hong Kong residents fear that anything posted on social media sites (many of which are banned outright in mainland China) referencing Hong Kong independence or criticizing the government may be used as evidence of subversion or secession.[13] This has led many people to begin scrubbing their social media accounts and deleting their online presence, much of which contains records of political debate and criticism of mainland China. China has made it very clear that the law will be strictly and routinely enforced, having already arrested multiple people who protested its enactment.[14]
The Tech Standoff
Many international tech companies, along with most Hong Kong residents, view this as a battle over free speech and political and economic independence, and fear that the shadow of China's online censorship is now being draped over Hong Kong.[15] Shortly after the law was announced, tech companies including Facebook, Google, Twitter, Zoom, and LinkedIn said they would temporarily stop complying with requests for user data from Hong Kong authorities, in violation of Article 43 of the new security law.[16] Hong Kong has responded by threatening jail time for company employees who do not comply.[17] During its efforts to control the 2019 protests, Hong Kong had asked Google for help taking down online posts expressing support for independence, along with leaked police information.[18] At the time, Google said no. Under the new law, however, Google could face fines, equipment seizures, and arrests if it declines such requests again.[19] TikTok, which is owned by the Chinese internet company ByteDance but managed mostly outside of China, announced that it would withdraw from app stores in Hong Kong and make the app inoperable for users in the city within days.[20] According to an official from the Internet Society of Hong Kong, a non-profit dedicated to the open development of the internet in the city, there may be technical steps companies can take to guard against the law.[21] The law provides that a data request may be refused if the technology required to comply is "not reasonably available," which means companies may be able to add layers of encryption or store content in multiple locations to make compliance overly burdensome.[22]
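One way to picture the "not reasonably available" strategy is to split stored content into shares held in different locations, so that no single office could hand over readable data. The sketch below is purely illustrative, a minimal two-of-two XOR secret-sharing scheme; nothing in the reporting confirms that any company uses this particular technique:

```python
import secrets

def split_secret(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two shares; neither share alone reveals anything."""
    share_a = secrets.token_bytes(len(data))               # one-time random pad
    share_b = bytes(x ^ y for x, y in zip(data, share_a))  # data XOR pad
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """Both shares are required to reconstruct the original data."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

message = b"user chat history"
a, b = split_secret(message)   # store a and b in different jurisdictions
assert recombine(a, b) == message
```

Under such a scheme, a legal demand served on the office holding only one share cannot yield the content, which is the sense in which compliance becomes "not reasonably available."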
Many smaller businesses have either shut down or made plans to move out of Hong Kong, but for tech giants like Amazon and Google, which run large data centers in Hong Kong, that is likely not a practical option.[23] As the fight for online freedom continues, the chilling effects on privacy and personal autonomy cannot be overstated.
[1] Hong Kong: What is the Basic Law and how does it work?, BBC (Nov. 20, 2019), https://www.bbc.com/news/world-asia-china-49633862.
[2] See Jessie Yeung, China has passed a controversial national security law in Hong Kong. Here’s what you need to know, CNN (July 1, 2020), https://www.cnn.com/2020/06/25/asia/hong-kong-national-security-law-explainer-intl-hnk-scli/index.html.
[3] Id.
[4] See Grace Tsoi, Lam Cho Wai, Hong Kong security law: What is it and is it worrying?, BBC (June 30, 2020), https://www.bbc.com/news/world-asia-china-52765838.
[5] Yeung, supra note 2.
[6] Tsoi, supra note 4.
[7] Yeung, supra note 2.
[8] Id.
[9] Id.
[10] Id.
[11] Id.
[12] See Rita Liao, The tech industry comes to grips with Hong Kong’s national security law, TechCrunch (July 8, 2020), https://techcrunch.com/2020/07/08/hong-kong-national-security-law-impact-on-tech/; see also HK National Security Law – Bilingual, China Law Translate, https://www.chinalawtranslate.com/bilingual-hong-kong-national-security-law/.
[13] See Chris Buckley, Keith Bradsher, Tiffany May, New Security Law Gives China Sweeping Powers Over Hong Kong, The New York Times (June 29, 2020), https://www.nytimes.com/2020/06/29/world/asia/china-hong-kong-security-law-rules.html. See also List of websites blocked in mainland China, Wikipedia, https://en.wikipedia.org/wiki/List_of_websites_blocked_in_mainland_China.
[14] Austin Ramzey, Elaine Yu, Tiffany May, Hong Kong Is Keeping Pro-Democracy Candidates Out of Its Election, The New York Times (July 29, 2020), https://www.nytimes.com/2020/07/29/world/asia/hong-kong-arrests-security-law.html.
[15] Paul Mozur, In Hong Kong, a Proxy Battle Over Internet Freedom Begins, The New York Times (July 7, 2020), https://www.nytimes.com/2020/07/07/business/hong-kong-security-law-tech.html.
[16] Id.
[17] Id.
[18] Id.
[19] Id.
[20] Paul Mozur, TikTok to Withdraw From Hong Kong as Tech Giants Halt Data Requests, The New York Times (July 6, 2020), https://www.nytimes.com/2020/07/06/technology/tiktok-google-facebook-twitter-hong-kong.html.
[21] Mozur, supra note 15.
[22] Id.
[23] Id.
by bhoyt | Jul 10, 2020 | Biometrics
A State-by-State Determination
There are currently no known U.S. employers that require employees to have a device implanted in their body as a condition of employment.[1] However, many states are enacting preemptive legislation to keep that from ever becoming a possibility.[2] Most recently, Michigan introduced the “Microchip Protection Act,” which passed the House and now heads to the Senate for further consideration.[3] The act would prevent employers from requiring employees to have devices implanted into their bodies as a condition of employment and would prohibit employers from discriminating against employees who refuse.[4]
Usually, a microchip refers to a Radio Frequency Identification (RFID) tag: a wireless technology consisting of a tiny radio transmitter that communicates its unique identity to nearby readers using electromagnetic waves.[5] Common uses of the technology include inventory tracking, key fobs that open car doors, automatic toll-booth payment passes, building access systems, and even payment and ID cards.[6]
Beginning in 2017, Three Square Market, a company located in River Falls, Wisconsin, started offering RFID implants to employees; 80 of its 250 employees agreed to have the chip installed.[7] The chip is roughly the size of a grain of rice and is implanted under the skin between the thumb and forefinger. The CEO of Three Square Market said the idea came after a trip to Sweden, where he noticed many people getting chips implanted to do things like enter secure buildings and book train tickets.[8] The chip is intended to simplify tasks such as accessing the building, logging into a computer, and buying food and drinks in the cafeteria.[9] So far, nearly a third of the company has the chip installed, and participants seem to enjoy the convenience. Two people have had their chips removed upon leaving the organization.
The company behind the commercially produced, implantable microchip is VeriChip, which received FDA approval back in 2004 to implant the chip for various purposes.[10] The company claims the chips cannot be counterfeited and advocates for their use in the health care industry as well as the private sector. So far, the chip is commonly used to identify patients whose identity is difficult to establish, and it is often implanted in household pets to attach the owner’s information to the pet if it wanders off.[11]
The privacy concerns associated with implantable RFID chips involve the potential leaking, stealing, or spoofing of information stored on the chip. Although very little information is stored on these chips, usually only a unique identification number, access to that number paired with other information can expose the underlying sensitive data.[12] Spoofing is also a real issue; for example, the unique identifier in your credit card may be read by a third-party reader and duplicated for use without your authorization.[13] Another fear is that the supply-chain databases that record the RFID movements of products may be able to profile consumers based on where the tags on their products have traveled.[14] Although the tags have no GPS capability and cannot track location in real time, by comparing where a chip’s signals have been received, one may deduce where its carrier has been.[15]
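To see how that kind of profiling could work in principle, consider a toy log of passive tag sightings. The tag IDs, locations, and times below are invented for illustration; real reader logs would be far larger, but the inference is the same:

```python
# Hypothetical reader log: (tag_id, reader_location, time_seen)
sightings = [
    ("TAG-42", "parking garage", "08:55"),
    ("TAG-42", "office lobby",   "09:02"),
    ("TAG-17", "cafeteria",      "12:10"),
    ("TAG-42", "cafeteria",      "12:15"),
]

def infer_path(tag_id, log):
    """Order one tag's sightings by time to approximate where its carrier went."""
    hits = sorted((t, loc) for tid, loc, t in log if tid == tag_id)
    return [loc for _, loc in hits]

print(infer_path("TAG-42", sightings))
# → ['parking garage', 'office lobby', 'cafeteria']
```

No GPS is involved; the movement profile falls out of nothing more than which fixed readers heard the tag and when.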
Solutions to these concerns include encryption methods such as “one-way hashing,” which creates a unique meta-ID that can only be read between two parties.[16] The RFID remains in a locked state until it receives a signal matching the stored hash value, at which point it can be temporarily unlocked for use by nearby readers. Physical shielding sleeves that block radio signals are also used to store passports and credit cards so that the RFID cannot communicate with a reader until taken out of the sleeve. For implantable RFIDs, however, encryption is obviously the best method of protection.
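The hash-lock idea can be sketched in a few lines. The class below is a simplified software model, not real RFID firmware, and the ID and key values are invented; it shows only the core mechanism: a passive scan sees just the meta-ID, and the tag unlocks only for a reader that knows the shared secret behind it:

```python
import hashlib

class HashLockedTag:
    """Toy model of a hash-locked RFID tag: locked until a reader
    presents a key whose hash matches the stored meta-ID."""

    def __init__(self, real_id: str, key: bytes):
        self._real_id = real_id
        self.meta_id = hashlib.sha256(key).hexdigest()  # public meta-ID
        self._locked = True

    def query(self) -> str:
        # An eavesdropping scan only ever sees the meta-ID while locked.
        return self.meta_id if self._locked else self._real_id

    def unlock(self, key: bytes) -> None:
        # Unlocks only if the presented key hashes to the stored meta-ID.
        if hashlib.sha256(key).hexdigest() == self.meta_id:
            self._locked = False

tag = HashLockedTag("EMP-0042", key=b"shared-secret")
assert tag.query() != "EMP-0042"   # drive-by reader learns nothing useful
tag.unlock(b"shared-secret")       # authorized reader knows the key
assert tag.query() == "EMP-0042"
```

Because the meta-ID is a one-way hash, a third party who records it cannot recover the key needed to unlock the tag.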
From a legislative point of view, over 10 states have passed laws prohibiting employers from requiring an implant of any kind. Additionally, around 20 states have laws on the books regulating the use of RFID. Commonly, these laws make it unlawful to remotely read a person’s identification using RFID without that person’s knowledge and prior consent.[17] States like Illinois have made it a felony identity-theft crime to possess or transfer an RFID device capable of obtaining personally identifiable information (PII) from an RFID tag, with knowledge that the device will be used to commit such a crime.[18] States like Minnesota, Michigan, and Washington require that the new enhanced driver’s licenses containing RFID tags include reasonable security measures to protect against unauthorized disclosure of PII.
So far, it appears that RFID chips are treated just like any other sensitive identification card you would carry in your wallet. It is illegal for anyone to steal or access the technology without your consent, but without reasonable security measures in place there is a heightened risk of your information being stolen; it is easier to intercept a radio signal you cannot feel than it is to take your wallet or purse off your person. In short, although your personal identification number feels safe implanted beneath the palm of your hand, its signal is more exposed than ever before.
[1] Dave Royse, States Just Saying No to Employee Microchipping, LexisNexis (Mar. 13, 2020), https://www.lexisnexis.com/en-us/products/state-net/news/2020/03/13/states-just-saying-no.page.
[2] Id.
[3] Rep. Kahle’s plan to prohibit employers from requiring microchipping for workers in Michigan passes House, Michigan House Republicans (June 24, 2020), http://gophouse.org/rep-kahles-plan-to-make-microchipping-in-michigan-voluntary-for-workers-and-job-providers-passes-house-unanimously/.
[4] See HB 5672 (June 24, 2020).
[5] Radio Frequency Identification (RFID), U.S. Food and Drug Administration, https://www.fda.gov/radiation-emitting-products/electromagnetic-compatibility-emc/radio-frequency-identification-rfid.
[6] See Gavin Phillips, How Does RFID Technology Work?, makeuseof (May 31, 2017), https://www.makeuseof.com/tag/technology-explained-how-do-rfid-tags-work/.
[7] Rachel Metz, This company embeds microchips in its employees, and they love it, MIT Technology Review (Aug. 17, 2018), https://www.technologyreview.com/2018/08/17/140994/this-company-embeds-microchips-in-its-employees-and-they-love-it/.
[8] Id.
[9] Id.
[10] John Halamka, The Security Implications of VeriChip Cloning, Journal of the American Medical Informatics Association (Nov. 2006), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1656959/.
[11] Id.
[12] See Phillips, supra note 6.
[13] See Halamka, supra note 10.
[14] Id.
[15] Id.
[16] RFID Security and Privacy White Paper, Department of Homeland Security, https://www.dhs.gov/xlibrary/assets/foia/US-VISIT_RFIDattachE.pdf.
[17] See Radio Frequency Identification (RFID) Privacy State Law Survey, LexisNexis (Nov. 18, 2019), https://advance.lexis.com/api/permalink/ba1e4165-8d6d-41d0-a4e0-648984540d51/?context=1000522.
[18] Id.
by bhoyt | Jun 20, 2020 | Facial Recognition
Written by: Bryce Hoyt
In 2017, Australian tech entrepreneur Hoan Ton-That founded Clearview AI (Clearview), a startup backed by billionaire Peter Thiel, with the goal of creating a cutting-edge facial recognition technology.[1] Two years later, Clearview emerged with the refined technology and began selling it to law enforcement agencies and private investigators across the U.S. and Canada.[2] The technology works by uploading a picture of a suspected criminal to the software; a sophisticated algorithm then automatically compares the picture against Clearview’s database of over 3 billion photos scraped from publicly available pictures online (e.g., social media sites), trying to discover the person’s identity using unique biometric indicators such as the distance between the eyes or the shape of the chin.[3] If a match is found, the matching images are presented alongside the social media links where they were found.[4]
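At its core, the matching step described above is a nearest-neighbor search over numeric face "embeddings," vectors encoding measurements like eye spacing and chin shape. The sketch below is a drastic simplification of whatever Clearview actually runs; the three-dimensional vectors, profile names, and threshold are all invented for illustration (real systems use learned embeddings with hundreds of dimensions):

```python
import math

# Hypothetical database of face embeddings keyed by scraped profile.
database = {
    "profile_A": [0.21, 0.80, 0.43],
    "profile_B": [0.90, 0.12, 0.55],
    "profile_C": [0.20, 0.78, 0.46],
}

def best_match(probe, db, threshold=0.1):
    """Return the closest stored profile, but only if it is close enough."""
    def dist(a, b):
        # Euclidean distance between two embeddings
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    name, d = min(((n, dist(probe, v)) for n, v in db.items()),
                  key=lambda t: t[1])
    return name if d <= threshold else None

print(best_match([0.20, 0.79, 0.45], database))  # → profile_C
```

The threshold is the crux in practice: set it too loose and the system produces false matches; too strict and genuine matches are missed, which is where accuracy and bias disputes arise.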
So far, over 600 law enforcement agencies in North America have started using the Clearview software to help solve shoplifting, identity theft, credit card fraud, murder, and child sexual exploitation cases.[5] Law enforcement is only permitted to use the technology to generate leads and cannot yet use its results as evidence in court.[6] Ton-That claims the software is 99% accurate and does not produce higher error rates when searching for people of color, a common issue and concern with other facial recognition tools.[7]
Although Ton-That continues to remind the public that the tool is only used for investigative purposes to solve crimes, many people remain skeptical. New Jersey’s attorney general, Gurbir Grewal, said he was disturbed when he learned about Clearview and ordered law enforcement in the state to stop using the technology until a full review of the company’s data privacy and cybersecurity practices is completed.[8] Additional reports indicate that Clearview has given access to other clients, including commercial businesses and billionaires.[9] Ton-That denies any commercial authorization of Clearview; the fear, however, remains.
Such a controversial and unprecedented technology does not come without legal ramifications and investigation. Tech giants including Twitter, Google, YouTube, and Facebook have sent cease-and-desist letters to Clearview for scraping their data, echoing the 2018 Cambridge Analytica scandal.[10] Ton-That defends the collection of data, claiming that because the pictures are taken from the public domain, Clearview has a First Amendment right to the publicly available information.[11]
Tech giants aren’t the only ones challenging Clearview’s practices. In May of this year, the American Civil Liberties Union (ACLU) filed a class action lawsuit against Clearview in Illinois, one of the only states with a biometric privacy law.[12] The complaint alleges a violation of the Illinois Biometric Information Privacy Act (BIPA) for failing to obtain the informed written consent the act requires before collecting and using a person’s biometric data, including facial geometry.[13] The ACLU expressed its concern that such a powerful and unregulated technology might enable governmental tracking of vulnerable communities such as sexual assault victims and undocumented immigrants, which is the exact sort of behavior privacy legislation is intended to prevent.[14] The ACLU is seeking a court order forcing Clearview to delete all photos of Illinois residents gathered without consent and to halt any further gathering until the organization complies with BIPA.[15] Clearview would not be the first organization to have violated BIPA. This January, Facebook paid a $550 million class action settlement for a BIPA violation involving its “photo tagging” feature, after losing its appeal in the Ninth Circuit in 2019.[16]
The fears of the ACLU are not unfounded; law enforcement agencies across North America have started using Clearview to identify children as young as 13 who are victims of sexual assault, in order to locate them and attempt to get a statement.[17] Many supporters of the technology call it the biggest breakthrough in the last decade for child sexual abuse crimes, but others worry about the potential harms of amassing such sensitive data.[18] Privacy advocates remain reluctant to support the technology until it is tested and regulated. Liz O’Sullivan, the technology director at the Surveillance Technology Oversight Project, commented, “[t]he exchange of freedom and privacy for some early anecdotal evidence that it might help some people is wholly insufficient to trade away our civil liberties.”[19]
Beyond Clearview, facial recognition software has moved into commercial use, including airports, public venues, and most recently, public schools.[20] The school district in the small town of Lockport, New York, was one of the first known public school systems to adopt facial recognition in the U.S., despite pushback from the community.[21] The technology was installed to scan for weapons and monitor individuals entering the school, comparing faces to a curated database of prohibited individuals such as sex offenders and barred students and employees.[22] A few cities, including San Francisco, have banned the use of facial recognition tools in their communities, even by law enforcement agencies.[23] Although well intentioned, the technology presents many privacy concerns that are better discussed and reconciled before it becomes common practice.
With facial recognition in the spotlight and concern growing over its unintended repercussions, a few tech companies, including IBM, have announced that they will no longer sell facial recognition services, urging a national dialogue on whether the technology should be used at all.[24] Critics of this public statement note an additional motive: facial recognition software has not been profitable for IBM up to this point.[25] It also remains unclear whether IBM will continue to research and develop the technology after halting sales. Amazon likewise announced a one-year moratorium on police use of its facial recognition technology amid the current pushback from civil rights groups and police-reform advocates.[26] Microsoft followed suit in a statement the same week, saying it will no longer sell facial recognition software to police in the U.S. until there is a federal law to regulate the technology.[27]
Facial recognition technology has also gained attention from legislators, resulting in numerous state bills and proposed federal legislation.[28] Among the bills circulating at the state level, a controversial California bill that would have allowed businesses and government agencies to use facial recognition technology without consent for safety and security purposes, given probable cause, has stalled in the legislature.[29] The bill would have also followed the California Consumer Privacy Act (CCPA) by requiring state and local agencies to inform consumers about the facial recognition technology before using it for reasons unrelated to public safety.[30] Opponents of the bill, including the ACLU and the Electronic Frontier Foundation (EFF), claim that it would have set very minimal standards for the use of the technology and failed to address many of the privacy concerns related to face surveillance.[31]
As of March, Washington state has enacted the first U.S. state law limiting the use of facial recognition technology by law enforcement.[32] The law (SB 6280), backed by Microsoft, restricts facial recognition in several ways: (1) government agencies must obtain a warrant to run facial recognition scans (except in exigent circumstances); (2) the software must pass independent testing to ensure its accuracy; (3) any state or local government agency intending to use such technology must file with a legislative authority a notice of intent to develop, procure, or use a facial recognition service, specifying the purpose for which the technology will be used; and (4) such an agency must develop a comprehensive accountability report outlining the purpose of the use, the type of data the technology collects, and various other protocol details.[33]
Critics of the bill point out that it was sponsored by State Senator Joe Nguyen, who is currently employed at Microsoft, which may explain why the bill places far fewer restrictions on commercial development or sale of the technology.[34] The ACLU was also quick to rebut the bill, stating that although its safeguards are better than none, anything short of a facial recognition ban will not safeguard civil liberties.[35]
At the federal level, a bipartisan bill referred to as the “Commercial Facial Recognition Privacy Act” has been introduced in the Senate, designed to provide legislative oversight of commercial applications of facial recognition technology.[36] The bill would require companies to gain explicit user consent before collecting any facial recognition data and would limit companies from sharing the data with third parties.[37] The bill, also endorsed by Microsoft, addresses the commercial side of facial recognition that Washington state’s law fails to acknowledge. Its consent requirements mimic many other privacy laws: a company must obtain affirmative consent before using the technology, provide the user with concise notice of the technology’s capabilities and limitations, state the specific purpose for which the technology is being employed, and provide a brief description of the processor’s data retention and deidentification practices.[38] The company is thereby limited to the purpose disclosed to the user and must obtain additional affirmative consent if it wishes to share the data with a third party or repurpose the data.[39]
Regardless of whether the bill survives, the proposed legislation provides insight into the mind of Congress and shows the willingness of tech giants to help navigate a more informed and regulated route through new technological advances such as facial recognition. The macro and micro consequences of such an innovative yet frightening tool warrant skepticism, but perhaps we can find the middle ground between civil advocates and fast-paced tech executives to forge a more privacy-conscious future.
[1] See Kashmir Hill, The Secretive Company That Might End Privacy as We Know It, The New York Times (Jan. 18, 2020), https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.
[2] Id.
[3] See Donie O’Sullivan, This man says he’s stockpiling billions of our photos, CNN (Feb. 10, 2020), https://www.cnn.com/2020/02/10/tech/clearview-ai-ceo-hoan-ton-that/index.html.
[4] Id.
[5] Hill, supra note 1.
[6] O’Sullivan, supra note 3.
[7] Id.
[8] Id.
[9] See Kashmir Hill, Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich, The New York Times (Mar. 5, 2020), https://www.nytimes.com/2020/03/05/technology/clearview-investors.html. See also Ben Gilbert, Clearview AI scraped billions of photos from social media to build a facial recognition app that can ID anyone — here’s everything you need to know about the mysterious company, Business Insider (Mar. 6, 2020), https://www.businessinsider.com/what-is-clearview-ai-controversial-facial-recognition-startup-2020-3.
[10] Alfred Ng, Steven Musil, Clearview AI hit with cease-and-desist from Google, Facebook over facial recognition collection, cnet (Feb. 5, 2020), https://www.cnet.com/news/clearview-ai-hit-with-cease-and-desist-from-google-over-facial-recognition-collection/.
[11] Id.
[12] See Alfred Ng, Clearview AI faces lawsuit over gathering people’s images without consent, cnet (May 28, 2020), https://www.cnet.com/news/clearview-ai-faces-lawsuit-over-gathering-peoples-images-without-consent/. See also Angelique Carson, ACLU files class-action vs. Clearview AI under biometric privacy law, iapp (May 29, 2020), https://iapp.org/news/a/aclu-files-class-action-vs-clearview-ai-under-biometric-privacy-law/.
[13] Id.
[14] Ng, supra note 12.
[15] Id.
[16] See Patel v. Facebook, Inc., 932 F.3d 1264 (9th Cir. 2019). See also Corrine Reichert, Facebook pays $550M to settle facial recognition privacy lawsuit, cnet (Jan. 29, 2020), https://www.cnet.com/news/facebook-pays-up-550m-for-facial-recognition-privacy-lawsuit/.
[17] Kashmir Hill, Gabriel J.X. Dance, Clearview’s Facial Recognition App Is Identifying Child Victims of Abuse, The New York Times (Feb. 10, 2020), https://www.nytimes.com/2020/02/07/business/clearview-facial-recognition-child-sexual-abuse.html.
[18] Id.
[19] Id.
[20] Davey Alba, Facial Recognition Moves Into a New Front: Schools, The New York Times (Feb. 6, 2020), https://www.nytimes.com/2020/02/06/business/facial-recognition-schools.html.
[21] Id.
[22] Id.
[23] Id.
[24] Devin Coldewey, IBM ends all facial recognition business as CEO calls out bias and inequality, TechCrunch (June 8, 2020), https://techcrunch.com/2020/06/08/ibm-ends-all-facial-recognition-work-as-ceo-calls-out-bias-and-inequality/.
[25] Id.
[26] Bobby Allyn, Amazon Halts Police Use Of Its Facial Recognition Technology, npr (Jun. 10, 2020), https://www.npr.org/2020/06/10/874418013/amazon-halts-police-use-of-its-facial-recognition-technology.
[27] Brian Fung, Tech companies push for nationwide facial recognition law. Now comes the hard part, CNN (June 13, 2020), https://www.cnn.com/2020/06/13/tech/facial-recognition-policy/index.html.
[28] See Taylor Hatmaker, Bipartisan bill proposes oversight for commercial facial recognition, TechCrunch (Mar. 14, 2019), https://techcrunch.com/2019/03/14/facial-recognition-bill-commercial-facial-recognition-privacy-act/.
[29] Ryan Johnston, Facial recognition bill falls flat in California legislature, statescoop (Jun. 4, 2020), https://statescoop.com/facial-recognition-bill-falls-flat-in-california-legislature/.
[30] Id.
[31] Id.
[32] Paresh Dave, Jeffrey Dastin, Washington State signs facial recognition curbs into law; critics want ban, Reuters (Mar. 31, 2020), https://www.reuters.com/article/us-washington-tech/washington-state-signs-facial-recognition-curbs-into-law-critics-want-ban-idUSKBN21I3AS.
[33] See S.B. 6280, 66th Leg., Reg. Sess. (Wash. 2020).
[34] Dave Gershgorn, A Microsoft Employee Literally Wrote Washington’s Facial Recognition Law, OneZero (Apr. 2, 2020), https://onezero.medium.com/a-microsoft-employee-literally-wrote-washingtons-facial-recognition-legislation-aab950396927.
[35] Id.
[36] Hatmaker, supra note 28.
[37] Id.
[38] See S. 847, 116th Cong. (2019).
[39] Id.
by bhoyt | Jun 5, 2020 | Cybersecurity
Written By: Bryce Hoyt
In the wake of the massive changes brought by COVID-19, the IAPP (International Association of Privacy Professionals) partnered with EY (Ernst & Young) to launch a research initiative for more insight into the unique ways privacy and data protection practices have been affected by the pandemic, surveying a total of 933 privacy professionals between April 8 and 20.[1] Although working remotely was not entirely unfamiliar to many people, the survey found that 45% of organizations have adopted a new technology or contracted with a new vendor to enable remote work due to the pandemic.[2]
Given the severity and urgency of combating a pandemic that produced stay-at-home orders, around 60% of organizations rolling out new work-from-home (WFH) technology either skipped or expedited a privacy or security review.[3] On top of existing obligations, the pandemic demanded that privacy professionals add an array of new concerns to their agenda. When asked how organizations’ priorities have changed, about half (48%) said that safeguarding against attacks and threats has become more of a priority.[4] Understandably, many otherwise cautious citizens must now navigate most of their lives through a somewhat unfamiliar technological space, likely on a less secure home network.
Unsurprisingly, a recent study by the Information Systems Audit and Control Association found that many companies have seen an increase in cyberattacks since the pandemic began.[5] Additionally, since January 1 the FTC has received over 61,000 reports amounting to over $45 million in total fraud losses.[6] The top four categories of complaints are (1) travel and vacation reports about cancellations and refunds, (2) reports about online shopping issues, (3) mobile texting scams, and (4) government and business imposter scams.[7] Many of the phishing scams have targeted college students and international supply chain companies.[8] The scam often takes the form of an email claiming to provide important information and resources, such as details about the coronavirus relief fund (CARES Act) or fake health advice and vaccine information attributed to the Centers for Disease Control and Prevention (CDC).[9] These emails often have you “log in” through an unprotected link where they obtain your personal information, or have you download a document that installs malware on your computer, which can further harvest personal information and track your activity.[10]
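A common tell in these phishing emails is a link whose visible text names a trusted site while the underlying URL resolves somewhere else entirely. The check below is a minimal illustration of that idea; the allowlist and example URLs are made up, and a real mail filter would do far more than this:

```python
from urllib.parse import urlparse

TRUSTED = {"cdc.gov", "irs.gov"}  # illustrative allowlist of official domains

def looks_suspicious(url: str) -> bool:
    """Flag links whose actual hostname is not on (or under) a trusted domain.
    Phishing mail often shows 'cdc.gov' as text but links elsewhere."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED)

assert not looks_suspicious("https://www.cdc.gov/coronavirus")
assert looks_suspicious("http://cdc.gov.covid-relief-update.com/login")
```

Note the second URL: its hostname merely *begins* with "cdc.gov," a classic trick, which is why the check compares registered domain suffixes rather than searching for the trusted name anywhere in the string.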
Hacking has also been on the rise, now targeting organizations in the healthcare sector. Among those attacked, the University of California, San Francisco (UCSF), which has been instrumental in sampling and antibody testing for COVID-19, has confirmed that it was the target of a ransomware attack.[11] In a ransomware attack, hackers generally gain access to secured information and threaten to publish or delete the data unless a monetary payment is made.[12] Additionally, the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) issued a Public Service Announcement warning organizations researching COVID-19 that they may have been compromised by Chinese cyber threat actors.[13] It appears the race to find a cure has led to international intelligence gathering and potential intellectual property theft; however, most of these incidents are still under investigation.
Beyond the embarrassing unintended consequences of working behind a webcam at home, additional privacy concerns arise when otherwise protected and privileged conversations that once took place at work are now conducted at home through virtual programs. For example, therapy sessions, confidential business meetings, college courses and exams, and court hearings are all being held online, and the reliability of the protection of that data is being questioned.[14] There is a saying in Silicon Valley: "[i]f the product is free, you are the product."[15] Many videoconferencing companies have been scrambling to adjust to the rapid surge in demand for, and scrutiny of, their products, battling complaints and even lawsuits over allegedly faulty data protection.[16]
Zoom, the standout brand with which we have all become familiar, experienced a surge to 200 million users in March, compared to just 10 million the previous year.[17] Meanwhile, despite many companies seeking to push back the enforcement date of the California Consumer Privacy Act (CCPA), fearing that the coronavirus has left them unprepared to handle potential data requests, California Attorney General Xavier Becerra's office has made it clear that enforcement is still set to begin on July 1.[18] Furthermore, the European Data Protection Board (EDPB) released a statement regarding the processing of personal data in the context of the pandemic, clarifying the role of the General Data Protection Regulation (GDPR) during this emergency.[19] The statement emphasized the lawfulness of processing personal data in such an emergency, reiterating provisions such as Article 23, which allows competent public health authorities and employers to process otherwise protected health data for reasons of substantial public interest relating to public health.[20] This means companies are permitted to collect and share information about their employees' COVID-19 status to ensure public safety, so long as the collection is properly limited and the information is not communicated beyond what is necessary; the EDPB urges companies to aggregate and anonymize the data when possible.[21] According to the IAPP survey, about 19% of organizations have shared the names of staff diagnosed with COVID-19 with a third party.[22]
Moving forward, organizations and privacy professionals are working around the clock to ensure compliance with privacy legislation like the GDPR and the CCPA and to resolve the issues above as quickly as possible. For example, Google is working with the World Health Organization (WHO) to implement safeguards against the new phishing and malware threats.[23] The FTC is also increasing its efforts to raise awareness of these scams, creating new guides and resources to help the general public navigate the "new normal."[24] The FTC is likewise sending warning letters to companies falsely promoting a cure or treatment for COVID-19 and maintaining a list of all companies making false claims.[25] The Senate also announced that it intends to introduce federal privacy legislation that would preempt state privacy laws, dubbed the "COVID-19 Consumer Data Protection Act."[26] This act is intended to help regulate the collection and processing of personal information in connection with the pandemic.[27]
The balancing act between privacy and pandemic interests carries on, and only time will tell how reasonable the response has been. In the meantime, governments and privacy professionals are keeping an eye on newly implemented technologies such as thermal imaging, contact tracing, and video surveillance. Many of us remain hopeful: regardless of the efficacy of this emergency privacy legislation, there appears to be a growing societal and governmental concern for, and acknowledgment of, the importance of protecting privacy interests.
[1] Müge Fazlioglu, Privacy in the Wake of COVID-19: Remote Work, Employee Health Monitoring and Data Sharing, International Association of Privacy Professionals (May 2020), https://iapp.org/media/pdf/resource_center/iapp_ey_privacy_in_wake_of_covid_19_report.pdf.
[2] Id. at 5.
[3] Id.
[4] Id.
[5] ISACA, ISACA Survey: Cybersecurity Attacks Are Rising During COVID-19, But Only Half of Organizations Say Their Security Teams Are Prepared for Them, ISACA (April 2020), https://www.isaca.org/why-isaca/about-us/newsroom/press-releases/2020/isaca-survey-cybersecurity-attacks-are-rising-during-covid-19.
[6] Fed. Trade Comm’n, Coronavirus (COVID-19) Consumer Complaint Data (2020), https://www.ftc.gov/system/files/attachments/coronavirus-covid-19-consumer-complaint-data/covid-19-daily-public-complaints-060220.pdf.
[7] Id.
[8] See Sherrod Degrippo, Coronavirus-themed Attacks Target Global Shipping Concerns, Proofpoint (Feb. 10, 2020), https://www.proofpoint.com/us/threat-insight/post/coronavirus-themed-attacks-target-global-shipping-concerns. See also Ari Lazarus, COVID-19 scams targeting college students, Fed. Trade Comm’n (May 27, 2020), https://www.consumer.ftc.gov/blog/2020/05/covid-19-scams-targeting-college-students.
[9] See Lazarus, supra note 8. See also Steve Symanovich, Coronavirus phishing emails: How to protect against COVID-19 scams, NortonLifeLock (2020), https://us.norton.com/internetsecurity-online-scams-coronavirus-phishing-scams.html.
[10] Id.
[11] Kartikay Mehrotra, Hackers Target California University Leading Covid-19 Research, Bloomberg (June 3, 2020), https://www.bloomberg.com/news/articles/2020-06-04/hackers-target-california-university-leading-covid-19-research.
[12] Id.
[13] Chinese Malicious Cyber Activity, Cybersecurity & Infrastructure Security Agency (2020), https://www.us-cert.gov/china.
[14] The Editorial Board, Privacy Cannot Be a Casualty of the Coronavirus, The New York Times (Apr. 7, 2020), https://www.nytimes.com/2020/04/07/opinion/digital-privacy-coronavirus.html.
[15] Id.
[16] Hurvitz v. Zoom Video Communications, Inc., No. 2:20-cv-03400 (C.D. Cal. Apr. 12, 2020), https://loevy-content-uploads.s3.amazonaws.com/uploads/2020/04/Todd-Hurvitz-et-al-v.-Zoom-et-al.pdf.
[17] The Editorial Board, supra note 14.
[18] Dustin Gardiner, Coronavirus sparks new fight over California’s internet privacy law, San Francisco Chronicle (May 5, 2020), https://www.sfchronicle.com/politics/article/Coronavirus-sparks-new-fight-over-California-s-15246541.php.
[19] Andrea Jelinek, Statement on the processing of personal data in the context of the COVID-19 outbreak, European Data Protection Board (Mar. 19, 2020), https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_statement_2020_processingpersonaldataandcovid-19_en.pdf.
[20] Id.
[21] Id.
[22] Fazlioglu, supra note 1.
[23] Kim Lyons, Google saw more than 18 million daily malware and phishing emails related to COVID-19 last week, The Verge (Apr. 16, 2020), https://www.theverge.com/2020/4/16/21223800/google-malware-phishing-covid-19-coronavirus-scams.
[24] Fed. Trade Comm’n, Coronavirus Advice for Consumers, Fed. Trade Comm’n (2020), https://www.ftc.gov/coronavirus/scams-consumer-advice.
[25] Lesley Fair, 45 more companies get coronavirus warning letters, Fed. Trade Comm’n (May 7, 2020), https://www.ftc.gov/news-events/blogs/business-blog/2020/05/45-more-companies-get-coronavirus-warning-letters.
[26] Glenn Brown, Senate to Introduce “COVID-19 Consumer Data Protection Act”, The National Law Review (May 6, 2020), https://www.natlawreview.com/article/senate-to-introduce-covid-19-consumer-data-protection-act.
[27] Id.