You Shouldn’t Have to Give Google Your Data to Access a COVID-19 Test

Americans should be skeptical of tech company involvement.

Medical personnel take a sample from a person at a drive-thru COVID-19 testing station at a Kaiser Permanente facility on Thursday in San Francisco.
Justin Sullivan/Getty Images

On Friday, President Donald Trump declared the COVID-19 pandemic a national emergency. He then announced a surprising public-private partnership to make testing widely available. Federal agencies will collaborate with Target, Walmart, CVS, and Walgreens to offer drive-thru coronavirus testing in store parking lots.

Trump’s next announcement was even more unexpected: Google would offer nationwide COVID-19 screening through an online portal that records people’s symptoms, triages them to determine who requires drive-thru testing, and displays test results once they are available.

Immediately after the press conference, Google clarified the president’s characterization of the website. Its sister company Verily, the health and life sciences division of parent company Alphabet, developed the portal, which went live Sunday evening. It was initially available only in the San Francisco Bay Area, but Verily said that after piloting the program regionally, it plans to offer the portal throughout California and potentially nationwide. Some people who visited the site on Sunday and received sufficiently high COVID-19 risk scores were given appointments and were tested on Monday at one of two Bay Area test centers, currently located in Santa Clara and San Mateo counties. Others were turned away when the available slots were filled, and a waiting list was formed. According to Verily, it will call people in the queue when new appointments become available.

But early confusion over the website’s scope is not the only concern. Visitors must sign in with an existing Google account or create a new one, which requires providing a phone number. Once admitted, users are asked about their age, gender, travel history, health status, and contact with people who may have been exposed to the novel coronavirus. Based on those answers, the portal calculates a COVID-19 risk score.

Though accessible testing for the novel coronavirus is essential to slowing its spread, Americans should be suspicious of tech company involvement. When combined with drive-thru testing, Verily’s portal may constitute the largest acquisition of U.S. health data by private companies to date. It is part of a broader trend in which public health functions, typically performed by government agencies, are increasingly shifted to corporations. Facebook’s use of artificial intelligence to calculate suicide risk is another example. Developed with the Department of Veterans Affairs, the technology is now used primarily by Facebook and other private companies. (I discuss this and other examples in a new article, “Emergent Medical Data: Health Information Inferred by Artificial Intelligence,” that is forthcoming in the UC–Irvine Law Review.)

The privatization and automation of public health is concerning because it disrupts well-established flows of health data and circumvents legal safeguards around privacy. Through their public health projects, Facebook and Verily now serve as intermediaries between people and government agencies. Facebook mediates the flow of mental health information between its users and emergency responders, and now Verily controls the flow of COVID-19 screening data between some Americans and public health officials.

As COVID-19 spreads through our communities, stories abound of people exploiting the situation for financial gain. Unscrupulous pandemic profiteers hoard protective gear and cleaning supplies to sell at exorbitant prices. Like these commodities, data siphoned by tech companies from COVID-19 screening may be the next asset to be hoarded and exploited.

Neither Google nor Verily is a health care provider. They owe users none of the duties that doctors owe to patients, and federal privacy laws, such as the Health Insurance Portability and Accountability Act, do not apply. That’s a serious problem because companies can exploit people’s data in surreptitious ways that violate their expectations and jeopardize their rights. Data might be shared with advertisers, sold to insurance companies, or used to calculate consumer credit scores that control access to resources.

In China, such scores determine whether people can have specific jobs, live in certain apartments, or access transportation. One of the best-known scores, called Sesame Credit, is provided by Alibaba through its smartphone payment app Alipay. Because of the COVID-19 outbreak, Chinese officials now reportedly require people to download an app called Alipay Health Code, which assigns them a color code (green, yellow, or red) that reflects their health status. A green code means they are healthy and free to travel, but yellow or red codes require them to report to authorities, and people with those codes may be barred from travel or even quarantined. Because Alipay and the Chinese government have not explained how the app classifies people, users can be quarantined without understanding why. Though extreme, this example illustrates the loss of autonomy, predictability, and accountability that automating public health can entail.

Verily and Google declined to respond to Slate’s questions about the portal and how it uses people’s data. Though less coercive than Alipay’s app—its use is optional, for one thing—the portal is equally mysterious. Today it controls access to testing in and around Silicon Valley, but the pandemic is rapidly evolving. After expanding to other regions, the portal could be adapted to track and control individuals who undergo testing and receive positive results.

There are other risks. During the AIDS epidemic of the 1980s, HIV-positive individuals were heavily stigmatized. It took decades of dedicated advocacy to reduce that stigma, and it persists today. Even when the COVID-19 pandemic subsides, individuals may be stigmatized by peers and co-workers. Some groups, such as Asian Americans, seniors, and health care workers, have already been subjected to pandemic-related harassment. Minimizing the unnecessary dissemination of COVID-19 data puts people in control of their health information and reduces the risk of discrimination.

Should we be willing to give up some privacy for the sake of public health? Of course. It is expected that some liberties may be curtailed during national emergencies (for instance, when the government implements a curfew or quarantine). But we cannot allow companies to use our urgent need for testing, and the distraction provided by the pandemic, to extract more data than is necessary to promote public health. Unless we act now, such systems will become normalized. It is urgent that we set clear limitations on how Google, Verily, and participating retailers can use COVID-19 screening data.

Verily’s portal collects people’s demographics, medical histories, and current symptoms to determine whether they require testing. At this stage, Google may harvest the data to supplement its growing cache of health information obtained through partnerships with dozens of hospitals and health care systems. It may also collect data on people’s devices, locations, IP addresses, and other variables. Google could analyze that information to learn more about portal users and share insights with corporate partners such as Salesforce, a provider of business management software, and companies including DeepMind, a world leader in A.I. and data analytics.

If the screening portal directs people to undergo COVID-19 testing, they will visit a drive-thru location where their mouths and noses will be swabbed. Like Google and Facebook, the retail testing partners listed by Trump routinely collect data from consumers to profile them and influence their behavior.

According to Walmart’s privacy policy, the company collects biometric information “such as imagery of the iris, retina, and fingerprints,” along with geolocation information, audiovisual data, and “other sensory information.” Some Walmart locations use devices called “Lot Cops” to monitor parking areas. Manufactured by LiveView Technologies, they provide a 360-degree bird’s-eye view of the area. LiveView recently partnered with Rekor Systems to integrate license plate reading technology into its Lot Cop units. People visiting drive-thru testing sites may unwittingly be tracked by these technologies.

Walgreens also deploys invasive surveillance technologies. According to its privacy policy, the company uses Bluetooth beacons, Wi-Fi, and other technologies to track people’s movements. In 2019, it tested in-store “smart coolers” that scan people’s faces to infer their age and gender while tracking their eyes to see which products attract their gaze.

Target uses Bluetooth and other technology to track people’s movements, and it previously tested in-store biometrics including facial recognition. In 2012, the retailer made national news by predicting which customers were pregnant based on their shopping habits. It sent coupons for pregnancy and newborn-related products to customers’ homes.

Because these companies scan people’s bodies, observe their behavior, and use A.I. to infer information about their age, gender, sexuality, and health, collecting pandemic-related data is in their wheelhouse.

Representatives of Walmart, Walgreens, and Target declined to comment on how they will protect people’s privacy during COVID-19 testing. However, CVS representative Joseph Goode said his company will not collect data from individuals who visit its drive-thrus. When asked what CVS will do to actively protect people’s privacy, Goode said, “The [Trump] administration is working on those details.”

I offer the following recommendations. The companies facilitating COVID-19 testing must make their data use practices completely transparent. They should be developing safe and reliable public health infrastructure, not proprietary trade secrets.

Verily should not require people to log in to a Google account or provide a phone number in order to be screened. This information is not essential for testing.

The screening portal should not collect device, location, or IP address data, and the information it does collect must be used solely for screening and shared only with public health officials. Under no circumstances should it be joined with data collected by Google’s many other services or shared with business partners and sister companies.

Retailers offering drive-thru testing should suspend electronic surveillance at those sites, including video, Bluetooth, Wi-Fi, and other tracking technologies.

In response to these suggestions, some might say it’s rude to look a gift horse in the mouth. Google and Verily are volunteering time and resources to combat a pandemic. But it is not too much to ask for a government-run system. Canada’s Alberta Health Services offers a screening portal that requires no login or personal information.

In general, private companies should not perform functions that are best reserved for scientists and public health agencies. Since we are in dire need of COVID-19 screening, perhaps Google and Verily can play a role in providing it, but the loss of our privacy should not be the cost.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.