Since the advent of the COVID-19 pandemic in early 2020, a debate has raged in technology, privacy and legal quarters about balancing the upsides and downsides of contact tracing applications. Before COVID-19, contact tracing referred primarily to a non-technical process: the human detective work and notification protocols that helped stem recent outbreaks such as Ebola (2014), Middle East Respiratory Syndrome (MERS, 2012) and Severe Acute Respiratory Syndrome (SARS, 2002).
But the scope and virulence of COVID-19, and the realization that took hold quickly about its potential to shut down productive work in major economies, made it clear that battlefield innovation was needed. Apple and Google, among others, pledged to work together on a (since launched) smartphone contact tracing app. Microshare and others adapted previous sensor technology to produce less costly and more easily deployed wearable solutions. By April, when the Financial Times featured Microshare Universal Contact Tracing on its front page, “contact tracing” had grown beyond its previous gumshoe definition.
The intervening months have been instructive. By and large, contact tracing via human inquiry has continued, but with mixed results that often conform to the contours of a particular society. In East Asia and the Middle East, with their previous experience of SARS and MERS, human contact tracers had success in getting people to answer questions. Indeed, more authoritarian states made such answers and the adoption of smartphone apps mandatory, and the results from the perspective of containment were positive.
In the West, the experience ranged from mixed to complete failure. It’s a generalization, but for the most part the more libertarian and federalist the society, the less effective efforts to compel people to reveal human contacts or to download contact tracing apps have proven to be. In these societies, the wearable approach has proven more effective, less due to technical capacities than social ones. The simple fact is that people in democratic societies are loath to take on the privacy risk of downloading a tracing app on their smartphone, and nothing screams “privacy risk” at the start of the 21st Century quite like a smartphone.
Wearables, besides being less prone to battery failure, are designed to work in specific defined areas: an office, a construction site, a factory, a university campus. Unlike smartphones, they have no potential to collect or leak PII (Personally Identifiable Information). Putting a wearable on means you understand you’re going to be traced. Taking it off when you leave the site means you know that has ended.
Can wearable contact tracing technologies be abused? Absolutely. The privacy laws of governments and policies of individual corporations or property owners are highly relevant here. Should a contact tracing wearable be used to tell how many smoke breaks a worker takes? Maybe, maybe not. What if there’s an allegation of sexual abuse in the workplace? Should chief counsels be able to view contact tracing data to see if there’s evidence of stalking?
All this is technically possible, and, like all innovations, fraught with risk that must be mitigated. Karl Benz probably did not give much thought to the car bomb when he invented the first practical automobile in 1885, nor did the Wright Brothers contemplate 9/11. Rules of the road and airline safety regimes grew up around these new capabilities, sometimes in the breach, but generally they have successfully mitigated the risks involved. The same must happen for contact tracing.
A question of balance
Is the value of the automobile worth the deaths from automobile accidents, the damage combustion engines do to the environment, and the occasional car bomb? The answer to that question is constantly being recalibrated but since the start of the 20th Century it has broadly been, “Yes.”
So how do we do the cost/benefit analysis when it comes to contact tracing technologies? From our perspective, we see the primary benefit as providing a way to focus resources (infection testing) and target interventions (voluntary at-home isolation, for instance) to slow the spread of a virus like COVID-19, or any other infectious disease. It is about protecting friends and family.
We view the primary downsides as entirely related to privacy: creating data streams that provide granular data on an individual’s location (current and historic) and on their social associations with others (contacts and proximity).
The former offers benefits to society by saving lives through limiting the spread, optimizing resource uses by focusing them on the most likely needful, and maximizing the economic recovery by minimizing the impact on normal commercial activities. The promise is that we can balance the safety of individuals from disease while minimizing the economic downside of a lock-down.
The latter threatens to allow states, corporations, and hackers to abuse detailed information about the movement and association of people to target recriminations. In a time of social upheaval, this possibility is particularly frightening. Data that could be used to monitor the political activities of individuals raises the specter, if not the actuality, of suppression of expression. Indeed, in authoritarian China, the involuntary “social credit system” the country has imposed on its citizens is a terrifying manifestation of this fear.
And, of course, as the current crisis with COVID-19 abates due to the expanding availability of vaccines, does the scope of wearable and non-wearable sensing solutions abate until the next inevitable crisis? Or have we lost ground permanently on the individual right to privacy?
Happily, in mature democratic societies, a robust if imperfect debate over any such encroachment on personal privacy is an ongoing reality. Inventions with privacy implications are not merely launched into a placid marketplace so much as they’re dropped into a raging river of skepticism, regulation, liability and public scrutiny. This has important implications for all measurements of individual behavior in the name of “wellness.” Will a rush to deploy overtake some of these concerns during an emergency period? Perhaps. But history shows society’s demands will quickly provide a counterbalance to any perceived abuses.
Societies and governments are still struggling to get the cost/benefit balance right for social media, ecommerce and many other digital innovations. Regulation, in particular, has lagged – but we see that as an inevitable result of the fact that the generation in power since the advent of the Internet has similarly lagged in its understanding of the power, reach and potential costs and benefits of the digital revolution.
As the baton passes from Baby Boomers and Gen Xers to Millennials, we’re already seeing the public sphere – the global commons, if you will – strike back. Witness the GDPR in the EU, the California Consumer Privacy Act (CCPA) in the US, and the broader emphasis on Environmental, Social and Governance (ESG) criteria in the corporate and investment world. Sustainability, particularly in light of COVID-19, is no longer simply a matter of climate policy and carbon footprints. Transparency, data governance, engagement, and a host of diversity and wellness measurements are in the ascendant, too.
All of this leads us to the belief that what we need when it comes to data privacy is flexibility and focus. We believe that data collected beyond a certain scale and precision must be subject to the continuous review and approval of the individuals from whom that data is collected. At the risk of quoting Leon Trotsky, a “permanent revolution.” This simple statement is laden with intent and open to misinterpretation. Let us break down what we mean.
“Beyond a certain scale,” as we phrased it above, is not just a subordinate clause. At one level, we each have intuition about basic observations of one individual upon another. We are each both collector and subject of data collection — seeing is collecting, acting is providing. This basic exchange cannot be legislated against without violating both established rights and the boundaries of reason. However, if the unwelcome observation of one individual by another extends beyond the common, in detail or in duration, we consider it voyeurism and, in some cases, criminal stalking. Our intuition screams, “Creepy!” We should object, similarly, to that depth of observation of an individual by an organization, whether government or corporation.
Technology introduces a further dimension of scale for which we likely lack an accurate intuition. Observation in aggregate is really only possible with the advent of information technology. Data aggregators would like you to liken their business to an individual viewing a gathered crowd – a situation where a person’s grasp of individual details is eclipsed by the size of the crowd. But with information technology, the data gathered from a million people is no less intelligible at a “creepy” level than data focused on a single person. The scrutiny afforded by autonomous sensors, cloud storage, and big data analytics scales the intensity of the organizational gaze like nothing in our prior experience. So an additional dimension of scale to be considered is the sheer volume of individuals under detailed observation.
Privacy should be a human right. But at the same time, the presumption should be that data collection and analysis are allowable by governments and businesses for their stated and actual uses – presumably those that provide well-defined societal and individual good. The mitigation for this uncomfortable compromise is radical transparency and granular control. Transparency allows the public to hold data collectors and data consumers accountable for being honest about intent. Control allows individuals and like-minded groups to modify what can be done with their data in real time.
Naturally, this assumes some level of trust. It assumes tools that allow for transparency and control. It assumes that the people who run companies are not so different from the people who work for them. Transparency must cut both ways. Every marketplace has the potential for bad actors, but with the light of day shining into the affairs of governments and corporations alike, we believe that we can strike the balance.
Tim Panagos is Chief Technology Officer at Microshare, Inc., a global data intelligence firm, where his colleague Michael Moran is CMO and runs the Risk & Sustainability practice.