How to build a temporary panopticon
Almost exactly one year ago, The New York Times opinion department launched The Privacy Project, an extensive series of articles investigating the myriad ways in which the surveillance apparatuses of states and private enterprises have eroded citizens' right to privacy. The newspaper published pieces on the risks inherent to Slack's encryption and message retention policies, Facebook's attempt to build a new currency, stores' use of Bluetooth beacons to identify customers' shopping preferences, and many others.
On December 19, as part of the continuing series, the Times published a thorough exposé on the ubiquity of mobile phone location tracking, revealing that even President Trump's whereabouts were being inadvertently exposed to shadowy location data brokers.
Less than two weeks later, the World Health Organization received reports of a new virus circulating in Wuhan, China.
To mangle an Ernest Hemingway line, building a surveillance state happens in two ways: gradually, then suddenly. While the coronavirus has upended the way of life of billions of people, it has had a similarly momentous effect on the endless debate over security versus privacy. Among the most notable about-faces in this arena is that of Maciej Cegłowski, a technologist whose speeches and articles have for years prominently beaten the drum on the threat of surveillance capitalism.
On March 23, Cegłowski wrote an essay titled "We Need a Massive Surveillance Program." Making the case for harnessing the decentralized array of private enterprises' location data assets into a single, massive database usable by the government to enable COVID-19 contact tracing (and even to enforce quarantines), Cegłowski argues: "Warning people about [the dangers of permanent, ubiquitous data collection] today is like being concerned about black mold growing in the basement when the house is on fire. Yes, in the long run the elevated humidity poses a structural risk that may make the house uninhabitable, or at least a place no one wants to live. But right now, the house is on fire. We need to pour water on it."
Indeed, some nations are already proverbially flooding the house. Israel's domestic security agency, Shin Bet (roughly its FBI equivalent), recently surprised many Israeli citizens by using its harvested location data to text people's phones and alert them that they may have been exposed to someone with COVID-19. China recently mandated the use of a new app in Hangzhou that opaquely classifies users into green (safe to move around), yellow (may require self-isolation), or red (two-week quarantine) groups, provoking "fear and bewilderment among those who are ordered to isolate themselves and have no idea why." The U.K. has announced plans to launch its own contact-tracing mobile app as well. This patchwork of national surveillance policies has moved the Overton window of acceptable data use far afield of its habitual position. (The Times' Privacy Project, incidentally, went silent around the same time the coronavirus became a national story: it hasn't published a new piece since February 18.)
So perhaps it should come as no surprise that, on April 10, Apple and Google announced an unprecedented collaboration to enable contact tracing at scale. At a high level, the concept is quite simple. Your mobile phone -- if you opt in -- would constantly be doing two things: 1) broadcasting an anonymous identifier to all nearby phones (using Bluetooth Low Energy) and 2) scanning nearby phones to save their broadcasted anonymous identifiers to your own phone. All of this data is kept locally on each device (not stored in, or even transmitted to, any centralized server), so there would be no centralized record anywhere of everyone's movements, location, and/or contacts with other individuals.
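For concreteness, here is a minimal sketch of that on-device bookkeeping in Python. Every name here is illustrative, not from the actual Apple/Google specification, and the real framework pins down exact Bluetooth payloads and rotation schedules that this toy version glosses over:

```python
import os
import time

RETENTION_SECONDS = 14 * 24 * 3600  # keep roughly 14 days of observations

def new_rolling_identifier() -> bytes:
    """Generate a fresh random identifier to broadcast over BLE.
    In practice identifiers rotate every few minutes to frustrate tracking."""
    return os.urandom(16)

class ContactStore:
    """Local-only log of identifiers overheard from nearby phones."""

    def __init__(self):
        self._seen = []  # (unix timestamp, identifier) pairs

    def record(self, identifier: bytes):
        """Called whenever a BLE scan picks up a nearby phone's broadcast."""
        self._seen.append((time.time(), identifier))

    def prune(self):
        """Discard anything older than the retention window."""
        cutoff = time.time() - RETENTION_SECONDS
        self._seen = [(t, i) for (t, i) in self._seen if t >= cutoff]

    def identifiers(self) -> set:
        return {i for (_, i) in self._seen}
```

Nothing in this store ever needs to leave the phone, which is the entire point of the design.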
Then, if down the road you test positive for COVID-19, you instruct your mobile app to upload your phone's rotating identifiers from the last 14 days to a central server. All phones that have opted in to this contact-tracing effort would contact that central server at regular intervals to obtain the updated list of identifiers belonging to infected individuals. If any of those identifiers match ones in your phone's local database of recently nearby devices, you'd receive an alert saying you were potentially exposed to the coronavirus.
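Continuing the hypothetical sketch above, the exposure check itself amounts to a set intersection performed entirely on the device:

```python
def check_exposure(store: ContactStore, infected_ids: set) -> bool:
    """Compare identifiers overheard locally against the list the server
    publishes for confirmed cases; any overlap means possible exposure."""
    store.prune()  # ignore anything outside the 14-day window
    return not store.identifiers().isdisjoint(infected_ids)
```

The server learns which identifiers belong to infected users, but never which phones overheard them.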
There are potentially many upsides to a system like this. Provided that Google and Apple can nudge a significant chunk of their respective mobile operating systems' users to opt in to broadcasting and scanning, this could automate a large part of the manual labor associated with conventional contact tracing: interviewing an infected individual about all of their recent social interactions, then individually contacting each of those people to ask them to self-isolate.
Of course, there are significant challenges around all of these steps, not least the final one: reporting a newly infected individual to the centralized server. Allowing any user to declare themselves positive for COVID-19 would immediately invite trolling en masse (i.e. falsely reporting a nonexistent infection), which would cause all sorts of pandemonium for anyone who was recently in the vicinity of the troll. Indeed, other trolling possibilities exist on the broadcasting side too. As security specialist Ross Anderson puts it: "The performance art people will tie a phone to a dog and let it run around the park; the Russians will use the app to run service-denial attacks and spread panic; and little Johnny will self-report symptoms to get the whole school sent home." So at the very least, it seems that a whitelist of accredited healthcare providers would need to be empowered to submit any positive infection report.
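To picture what such gatekeeping might look like, here is a toy server-side check using a shared-secret HMAC. A real deployment would presumably rely on public-key signatures and a formal registry of providers, so treat every name and value below as hypothetical:

```python
import hashlib
import hmac

# Hypothetical registry of accredited providers and credentials
# issued to them out of band.
ACCREDITED_PROVIDERS = {
    "clinic-42": b"secret-issued-to-provider-out-of-band",
}

def accept_upload(provider_id: str, report: bytes, tag: bytes) -> bool:
    """Server-side gate: accept a positive-test report only if it carries
    a valid authentication tag from a whitelisted healthcare provider."""
    secret = ACCREDITED_PROVIDERS.get(provider_id)
    if secret is None:
        return False  # unknown provider: reject outright
    expected = hmac.new(secret, report, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A scheme like this would stop little Johnny, though not a compromised or careless provider.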
This is to say nothing of the limitations of Bluetooth itself, which -- as Anderson notes -- "goes through plasterboard...A supervisor might sit in a teaching room with two or three students, all more than 2m apart and maybe wearing masks, and the window open. The bluetooth app will flag up not just the others in the room but people in the next room too." In necessarily crowded places (e.g. a grocery store) and in wide-open spaces (e.g. a public park with no obstacles obstructing Bluetooth signals), this could lead to many false positives. When all of those people receive alerts on their phone, will they overwhelm the healthcare system with calls and visits?
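To see why Bluetooth makes such a poor ruler: apps typically infer distance from received signal strength (RSSI) via a log-distance path-loss model, and anything that attenuates the signal shifts the estimate. A rough illustration, with illustrative (not measured) calibration constants:

```python
def estimated_distance_m(rssi_dbm: float,
                         rssi_at_1m_dbm: float = -59.0,
                         path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: a crude RSSI-to-distance estimate.
    Both constants vary by device, orientation, and environment."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exponent))

# -75 dBm reads as ~6.3 m in free space, but a plasterboard wall that
# absorbs ~10 dB makes a phone 2 m away in the next room read the same.
print(round(estimated_distance_m(-75.0), 1))
```

The receiver simply cannot distinguish "far away in open air" from "close by, behind a wall."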
False negatives are a related concern: contact tracing won't pick up on viral transmission via contaminated materials. If I'm infected and touch a metal railing and then you (an uninfected person) touch that same railing several hours later after I've already left, you would receive no alert even if I dutifully report myself as COVID-19-positive.
There are, unsurprisingly, numerous privacy implications as well. Ashkan Soltani, a cybersecurity expert, warns that "as you roll out these voluntary solutions and they gain adoption, it’s more likely that they are going to become compulsory." French cryptographer Serge Vaudenay, analyzing a somewhat analogous European contact-tracing proposal (called DP-3T), highlights the potential consequences of its privacy vulnerabilities: "sick and reported people may be deanonymized, private encounters may be revealed, and people may be coerced to reveal the private data they collect."
Ross Anderson declares himself conflicted, acknowledging "the overwhelming force of the public-health arguments for a centralised system, but I also have 25 years’ experience of the NHS being incompetent at developing systems and repeatedly breaking their privacy promises when they do manage to collect some data of value to somebody else." This concern may already have proved eerily prescient: an explosive Guardian article published just today reveals that an internal UK government memo on a proposed NHS contact-tracing mobile app "said ministers might be given the ability to order 'de-anonymisation' to identify people from their smartphones."
In theory, the Apple/Google proposal would mitigate some of these privacy risks via its cryptographic protocol, which includes several mechanisms designed to prevent persistent device identification and the centralized collection of movement and contact data. But it is not too difficult to imagine a hypothetical world several months down the line -- a world in which the virus has continued to spread and not enough users have opted into the contact-tracing app to make it useful -- where participation is suddenly switched to opt-out rather than opt-in, or perhaps no choice is offered anymore at all.
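The published draft is considerably more involved, but its core idea can be sketched as a key-derivation chain. The simplification below uses plain HMAC as a stand-in for the spec's actual derivation functions, so it illustrates the shape of the protocol rather than the protocol itself:

```python
import hashlib
import hmac
import os

def daily_key(tracing_key: bytes, day_number: int) -> bytes:
    """Derive a per-day key from the device's long-term tracing key
    (a simplified stand-in for the draft spec's HKDF step)."""
    return hmac.new(tracing_key, b"day" + day_number.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def rolling_identifier(day_key: bytes, interval: int) -> bytes:
    """Derive the short-lived identifier broadcast in one ~10-minute
    window; without day_key, successive identifiers look unrelated."""
    return hmac.new(day_key, b"rpi" + interval.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

# A diagnosed user reveals only their recent daily keys; other phones
# can then re-derive and match that user's broadcast identifiers
# locally, while everyone else's identifiers remain unlinkable.
tk = os.urandom(32)
print(rolling_identifier(daily_key(tk, 18365), 7).hex())
```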
Short of never updating your OS, leaving your phone at home, or reverting to a flip phone, what real choice would most users have to avoid this pandemic-era panopticon? And how could we prevent the underlying framework -- not simply the APIs but the proverbial Rubicon-crossing step of automatically sharing our device data with the world at large -- from metastasizing into a permanently narrowed right to personal privacy? COVID-19 has forced us all to grapple with previously ignored cracks in our healthcare systems, our social safety nets, and our gross economic inequities. It is perhaps a bitter inevitability of this era that, like so much else in the news, the coronavirus is also a story about privacy in the era of surveillance technology.