All it took was public outrage, a widespread campaign, and political condemnation for the IRS to reverse its plans to require facial recognition for access to certain online services. In abandoning its intention to require taxpayers to upload images of their government-issued IDs and video selfies to controversial third-party company ID.me, the IRS has acknowledged that Americans shouldn’t have to sacrifice their privacy for security.
But the controversy around ID.me has somewhat eclipsed the broader and more concerning context of biometric identification technologies. Coverage of the IRS’s announcement has in many cases not addressed the fact that millions of less advantaged individuals in the United States have already been forced to have their faces scanned by ID.me to access government services. ID.me has contracts with 10 federal agencies and has been verifying identities for the IRS’s Child Tax Credit Update Portal since last year. California’s Employment Development Department was the first to introduce ID.me facial verification services in November 2020, and over half of states have contracted with the company to verify the identities of applicants for unemployment benefits.
While the IRS has made a show of acknowledging the data protection concerns with using ID.me, biometric verification services also generate serious, life-threatening harms that go far beyond privacy. This is true of ID.me itself, whose biometrics-based verification services have already been impeding access to services. Many people have spent weeks trying to access unemployment benefits due to difficulties with the technology; some have been left without unemployment support entirely. Yet as the pandemic accelerated the migration of critical government services online, business has been booming at ID.me. And, though the recent controversy has forced the company to remove the requirement for video selfies, many have found the offline alternatives to be just as burdensome. Even without facial verification, a digital-by-default system still risks generating exclusion, forcing people in need to choose between what could be a week-long struggle to get an in-person meeting and the quick forfeiture of their personal information.
For decades, biometric technologies have been recklessly deployed by many states and federal agencies. States such as Connecticut, California, and New York began collecting welfare recipients’ fingerprints as early as the 1990s. The Department of Homeland Security has been deploying biometric technologies on immigrant populations since the agency’s inception in the early 2000s, and it has been quietly building one of the largest biometric databases in the world for several years.
The Biden administration has now finally acknowledged the serious rights impacts of this unfettered, unregulated usage. In January, we responded as part of a group of human rights experts to the White House Office of Science and Technology Policy’s Request for Information on biometric technologies, urging the federal government to impose an immediate moratorium on the linking of biometric verification to critical government services, including welfare payments.
This is hardly a uniquely American problem. Poor and marginalized communities around the world have already been subjected to biometric verification as a condition of accessing government services, and have grappled with the immensely harmful consequences. Our work on NYU Law’s Digital Welfare State and Human Rights Project shows that inevitable biometric failures can lead to the loss of life-saving food subsidies, health care, and welfare benefits. In Ireland, pensioners who failed to submit to a facial recognition process reported that their pension payments were abruptly stopped. In Uganda, hundreds of thousands of elderly persons have been left unable to access healthcare or cash transfers, in part because card readers cannot scan their fingerprints. This biometric exclusion can become life-threatening—in India, numerous people reportedly starved to death after biometric failures led to their food rations being cut off.
While the IRS’s facial recognition plans threatened to affect a large swath of Americans, it is marginalized groups who disproportionately bear the burden of biometric harms. People of color are consistently more likely to be misidentified by facial recognition technology. In the United Kingdom, legal action has been launched against Uber after its facial verification technology locked many Black drivers out of their accounts, resulting in loss of work and income (especially concerning since the company’s workforce is disproportionately composed of people of color). But race is only one axis of biometric exclusion. People with disabilities are more likely to struggle to verify their identity through biometric systems, which rigidly exclude those falling outside of predetermined physical norms. And for people living in poverty, who often do not have access to smartphones, or even regular electricity and internet, it becomes almost impossible to navigate remote biometric identification systems. This leaves many excluded from government services, denied access to fundamental rights, as the digital divide maps onto other axes of disadvantage, including the rural/urban divide, race, disability, age, and class.
None of this is coincidental. Our work shows that these groups are consistently targeted for biometric experimentation because they are all too often seen as having fewer rights and less political power. Biometric tools are often deployed first within welfare programs before being introduced for other services: In South Africa, fingerprint verification for welfare benefits was in widespread use as early as the 1990s, and in India, food rations for the poor were one of the first programs to require biometric authentication. Welfare benefit claimants in the United Kingdom were forced to use a complex digital identity verification system while their wealthier compatriots could file online taxes using a simple password system. These intrusive, exclusionary technological interventions targeted towards people living in poverty are, time and again, justified by the exaggerated specter of welfare fraud. We saw this in the IRS debacle: ID.me’s CEO made dubious claims that $400 billion in pandemic unemployment payments were stolen, to exploit anxieties about welfare fraud and hawk his products. Similar fearmongering echoes around the world. The IRS use of ID.me simply brought this experimentation to the masses, giving all American taxpayers a taste of what it might be like to live under the capricious, discomforting authority of biometric systems.
It is welcome news that the IRS will abandon its plans. But, in and of itself, this is not cause for celebration. International experience shows that this haphazard, Whac-A-Mole approach to oversight is doomed to fail. Other federal agencies and states will continue to pour millions of dollars into contracts with private vendors and foist increasingly invasive biometric verification on public service users. ID.me may now be disqualified due to its track record of deception, but other companies stand ready to step in, including tech titans like Google and Apple, who have already entered this lucrative field.
This is exactly why comprehensive federal legislation and regulation of biometric identification is vital. The United States already lags behind the European Union, in part because of fears that regulation will cripple America’s ability to “win the AI competition” with China. The ID.me controversy shows that it is not enough to rely on public backlash as a policing mechanism, as biometric verification has already been allowed to spiral out of control and now threatens to impede access to critical, life-saving resources. The U.S. government must, therefore, take this controversy as an opportunity to learn from the many years of biometric failures, at home and abroad, and take decisive action to regulate the use of biometrics in government services.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.