This article originally appeared in Zócalo Public Square.
In the summer of 2020, when most countries were cherishing the quiet before the second peak in COVID-19 cases, the nonprofit I was volunteering at was bustling with activity. It had developed an open-source digital contact tracing system—one of those smartphone apps that tracks one’s whereabouts and sends a notification if the user has been exposed to COVID-19.
“Do you want to join one of our meetings?” was the offer of a volunteer I met at a (virtual) university event. He knew I was researching public use of highly contentious technology. I could not turn down the opportunity to peek into the decision-making process of an organization providing one of those technologies to governments.
Four months later, I was still attending those meetings, and it was becoming clear how inadequately digital data and public health often intersect. Most of the public health officials we were meeting with did not seem prepared to include digital contact tracing apps in their operations, but governments wanted them anyway. So, their contractors were seeking the support of technology providers like the nonprofit to deliver apps that were promising to slow the spread of the infection and still preserve privacy. Even now, one year later, there is limited evidence that those apps can accomplish both objectives. But in the midst of the pandemic, when the spread of the coronavirus seemed uncontrollable, it was easy to be seduced by a pre-packaged solution.
Meanwhile, my own curiosity was piqued by the wealth of information that the nonprofit was able to catalyze. Public health officials and technology developers frequently came together to tackle this new challenge by sharing good and bad examples from across the country and around the world. The nonprofit was open to all, but I could only sneak into these conversations as part of its network.
This is how I came to know about the story of Hector Hugo, a 32-year-old urban planner, who used emergency call data to inform the COVID-19 response of Guayaquil, Ecuador. By the end of March 2020, the city of Guayaquil had become the epicenter of the COVID-19 pandemic in Latin America. The internet would show images of dead bodies in the streets of the city waiting to be collected. The political system was slow and unprepared.
Hugo came across the emergency call records of Guayaquil’s residents on the internet. He filtered those that seemed related to COVID-19 infections—calls about having trouble breathing or corpses to be collected—and then coded those as points on a map.
Access to data is typically a major roadblock in research, especially when the privacy of health-related information is in the picture. But apparently, these emergency call records had been mistakenly uploaded to the cloud, meaning Hugo could simply download them. With that data, he created heatmaps of cases—maps showing where the health crisis was most severe. Then, with the help of a Spanish data analyst, Carlos Bort, he cross-referenced those with demographic data and was able to project the likely spread of the virus in the city. Long story short, Guayaquil had a roadmap for allocating health care workers and resources to the most vulnerable neighborhoods, those with the highest current and projected levels of contagion.
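In data terms, what Hugo did is simple enough to sketch. The Python snippet below is a hypothetical illustration, not his actual code: the field names, keywords, and grid size are invented. It filters call records for COVID-like complaints, then counts them per grid cell to produce the raw material for a heatmap.

```python
# Hypothetical sketch of the pipeline described above: filter emergency-call
# records for COVID-like complaints, then bin them into a coarse grid to get
# a count of cases per neighborhood-sized cell. All data here is made up.
from collections import Counter

CALLS = [
    {"lat": -2.170, "lon": -79.920, "note": "trouble breathing"},
    {"lat": -2.171, "lon": -79.921, "note": "body collection needed"},
    {"lat": -2.100, "lon": -79.880, "note": "noise complaint"},
]

KEYWORDS = ("breathing", "body", "fever")  # assumed COVID-related terms

def covid_related(call):
    """Keep only calls whose free-text note mentions a COVID-like keyword."""
    return any(k in call["note"].lower() for k in KEYWORDS)

def heatmap(calls, cell=0.01):
    """Count filtered calls per (lat, lon) grid cell of `cell` degrees."""
    grid = Counter()
    for c in filter(covid_related, calls):
        key = (round(c["lat"] / cell) * cell, round(c["lon"] / cell) * cell)
        grid[key] += 1
    return grid

print(heatmap(CALLS))  # the noise complaint is filtered out
```

The two COVID-related calls above fall into the same grid cell, so the counter shows a single hot spot; overlaying such counts on a city map yields the kind of heatmap the article describes.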
The number of cases and deaths in Guayaquil dropped in the months following the introduction of this new system. Nobody knows whether it was thanks to those heatmaps or to the herd immunity that the city had acquired during the worst moments of the pandemic. Nevertheless, this is how a privacy breach will forever be remembered—as the lucky fluke that saved Guayaquil.
But data hacking is no public health strategy. In early August of 2020, data analysts from a development agency of a Mexican state were looking into digital contact tracing and exposure notification apps for their jurisdiction and reached out to our nonprofit to explore feasible options. They wanted to add data analytics to their pandemic response, believing that, as in Guayaquil, it would help allocate public health resources, and hoping that digital contact tracing could play a part.
“Bluetooth-based apps are quickly becoming the standard in this industry because they are highly privacy-preserving,” I explained. “The app collects encrypted identification codes from other app users who happen to be around you. If one of them tests positive for COVID-19 and uploads the test result, the app sends a notification out to those whose codes it had collected. Ideally, those people get tested and self-isolate to stop the further spread of the virus.”
“Does it work?” asked the inquisitive local official.
“Nobody really knows; there is not much evidence yet,” I replied. “A study released in April 2020 suggests that to be truly effective, roughly 60 percent of smartphone users need to download it.”
But 60 percent is a high adoption level. Most of the contact tracing and exposure notification apps launched so far don’t even reach double digits; the most popular hover around 20 percent adoption. And while an app might still be useful in conjunction with other measures, it cannot directly inform them: to preserve individual privacy, all the data the app records stays on the smartphone and never feeds a centralized dataset.
It’s a Catch-22. Tech companies don’t trust governments with personal information, so no data ever leaves the individual smartphone. But people don’t seem to trust tech companies either, so they don’t download the app in the first place. Governments don’t trust themselves to be able to approach the pandemic without technology as a comfortable safety blanket, so they ask tech companies for apps. It’s a cycle that’s difficult to break.
In a burst of optimism, by summer’s end most countries had decided that a contact tracing app was going to be in their future despite the uncertainties it carried. While digital contact tracing made sense in theory, there was not enough evidence that it actually slowed the spread of COVID-19. When two users meet, digital contact tracing apps record a contact. But they are ignorant of the context, and therefore cannot accurately predict the risk of infection. If the contact takes place in an open space rather than indoors, or if masks are worn, the chances of transmission are lower. Bluetooth signals travel across physical barriers, while the coronavirus does not: if two phones were close enough, one could receive an exposure notification even if the COVID-19-positive person were on the other side of a wall that would prevent transmission.
Such alerts might unnecessarily alarm people, while at the same time providing a false sense of security. With high rates of asymptomatic infection, apps can track the spread of the virus only if users carry their phones consistently with Bluetooth or GPS turned on, and if large-scale testing is available for those showing no symptoms. Virtually no country met those conditions at the time. Short of that, asymptomatic carriers might never receive an exposure notification and be led to feel confident about their health when, in fact, they were actively spreading COVID-19.
Whether privacy-preserving apps could consistently and accurately track contagion was still a mystery. Digital contact tracing apps were too novel and, being privacy-preserving, scant data was available to researchers. Countries like China had some success with it, but their solution was part of a large digital surveillance system, which was unrealistic in the privacy-sensitive West. The high levels of adoption required to make digital contact tracing work seemed attainable only with mandates requiring people to download the app—an obligation that many governments judged to be unethical and politically perilous.
The question was then how to nudge people into using the app. Nobody knew, so Belgium decided to ask them directly. In September 2020, it launched an open consultation. Anybody on the internet with enough digital literacy to upload a PDF into an online form could submit a comment on the design of the national exposure notification app and the policies that framed its use—such as at what age minors should be able to decide independently whether to use the app, or what privacy statements should look like. It also inquired about the structure and composition of an independent oversight committee that would monitor the use of the technology to ensure it was not abused, that its uses did not impinge on individual rights, and that it did not outlive the health crisis.
It was an innovative example of an open, transparent, and crowdsourced approach to policymaking that acknowledged the risks of misuses and that took steps to mitigate them. Our team had been seeking exactly such arrangements earlier in the pandemic, when a technology contractor in a country with somewhat dictatorial leadership (according to many political commentators) had reached out to receive support in developing and deploying a contact tracing app.
As always, the nonprofit enthusiastically accepted to help. But the project proved to be far more ambitious than a simple app. The contact tracing system would feed into a digital ID that acted as a pass. If one had tested positive or had been in contact with a COVID-19 positive person, the digital ID would deny access to public spaces. Ubiquitous digital readers would scan those passes and track individual movements. Cameras with recognition systems would match the identity of the pass owner to that of the smartphone holder.
It scared me. The technology of the nonprofit was privacy-preserving, but in this country, the whole ecosystem planned around it was not. After such a large investment, it was reasonable to fear that this infrastructure would outlive the pandemic. I could not shake off the picture of a Big Brother following every step with its intrusive eye.
None of us inside the nonprofit were ready to play a part in that process, to be its enablers. But the decision wasn’t easy. Digital contact tracing might have helped mitigate the rising number of cases in the country. If not us, then someone else would have done it, maybe some unscrupulous firm that was not concerned about the public’s well-being. There are no industry standards nor formal monitoring systems for this technology. Maybe the responsible thing to do was to engage in order to keep a foot in and an eye out.
Ultimately, though, the nonprofit did not feel equipped to bear that burden. It was a difficult decision, but one made easier because the financial health of the nonprofit was not at stake. None of the volunteers were risking their livelihood on a lost client. That is a privilege that few institutions enjoy.
For many companies, the COVID-19 health care crisis has been an opportunity: Technological solutions like digital contact tracing and telemedicine suddenly have a market that is both open and, in many places, underregulated. In this context, the nonprofit could have been the unicorn to set standards in an industry where technology providers had no formal obligations other than the judgment of history books. Whether it fulfilled or missed this responsibility still is an open question.
But that window did not stay open long. After a summer of glory, digital contact tracing and exposure notification lost their splendor in the fall, dried up as the days shortened, and fell before the winter was even over. Sustained criticism and lack of adoption had made digital contact tracing irrelevant. But the nonprofit is not losing its spirit. It has already set its sights on a sexy new gadget: digital vaccine passports.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.