Future Tense

Maine Now Has the Toughest Facial Recognition Restrictions in the U.S.

A live demonstration uses artificial intelligence and facial recognition in a dense crowd at the Horizon Robotics exhibit in Las Vegas on Jan. 10, 2019. David McNew/Getty Images

Maine has just passed the nation’s toughest law restricting the use of facial recognition technology.

LD 1585 was unanimously approved by the Maine House and Senate on June 16 and 17, respectively, and became law without the signature of Gov. Janet Mills. The bill’s sponsor, Rep. Grayson Lookner, D-Portland, hopes that Maine’s new law—which goes into effect Oct. 1—will “provide an example to other states that want to rein in the government’s ability to use facial recognition and other invasive biometric technologies.”

The country’s only other statewide law regulating facial recognition was passed in Washington in 2020, and it authorized state police to use facial recognition technology for “mass surveillance of people’s public movements, habits, and associations.” The Washington law—written by state Sen. and Microsoft employee Joe Nguyen—was opposed by the ACLU.

In sharp contrast, the ACLU championed the Maine bill as a victory for privacy rights and civil liberties: “Maine is showing the rest of the country what it looks like when we the people are in control of our civil rights and civil liberties, not tech companies that stand to profit from widespread government use of face surveillance technology,” Michael Kebede, policy counsel at the ACLU of Maine, said in a press release.

Other states have also passed legislation to partially regulate the use of facial recognition as a surveillance tool. For instance, Virginia requires law enforcement to obtain authorization for the use of the technology from the state legislature, while Massachusetts mandates authorization from a court.

Maine’s legislation goes one—very significant—step further by prohibiting use of the technology across all levels of state, county, and municipal government. The law has strict parameters for the limited exceptions made for law enforcement purposes, and ensures that workarounds—like Washington police’s secret request networks—aren’t permitted.

The law states that law enforcement can use facial recognition technology only with “probable cause to believe an unidentified person in an image committed a serious crime,” or when seeking to identify a deceased or missing person. It stipulates that facial recognition data alone cannot establish probable cause for an arrest, and that the Maine State Police and Bureau of Motor Vehicles must maintain de-identified records of every requested and performed search, which will be designated as public records. Facial recognition data that is obtained in violation of the law will be inadmissible as evidence. The bill also provides a pathway for anyone wishing to bring legal action against the state if they believe that the technology was used in violation of the law.

Maine’s law stands alone in the limitations it places on law enforcement, especially when compared with the federal government’s regulation of facial recognition (or, more accurately, its lack of regulation). A report released Tuesday by the Government Accountability Office found that 14 of the 42 federal agencies surveyed had used privately built facial recognition systems for criminal investigations. Despite the widespread use of the technology, only one of those 14 agencies had any awareness of which systems its employees were using in criminal investigations. The GAO also found that six agencies had used facial recognition technology to help identify people suspected of breaking the law during the protests that followed the murder of George Floyd in May 2020, and that three agencies reported using the technology on images from the Jan. 6 attack on the U.S. Capitol. The report raises concerns about federal agencies’ ignorance of how their employees are using outside facial recognition systems, highlights a stark lack of privacy protections, and calls attention to the government’s apparent inability to assess the accuracy of the technology.

The GAO’s recommendations to federal agencies are twofold: start tracking which non-federal systems employees are using, then assess those systems’ risks. For Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (S.T.O.P.), the recommendations are insufficient to address the threat posed by government use of facial recognition technology: “It’s alarming that six federal agencies targeted facial recognition at BLM protesters. In a democracy, police should not be allowed to use surveillance to punish dissent.”* S.T.O.P.—citing the need for a ban on the technology rather than mere regulation—is calling on President Biden to issue a moratorium on federal use of facial recognition.

A central worry for opponents of facial recognition technology is its racial bias. Though the technology boasts classification accuracy of over 90 percent, a Harvard study notes that “these outcomes are not universal.” The technology has been found to be least accurate when analyzing images of people who are Black, female, and 18 to 30 years old. Another study, by the National Institute of Standards and Technology, found that the highest error rates came in identifying Native Americans, while Black and Asian faces were falsely identified 10 to 100 times more frequently than White faces.

The dangers of a racially biased technology being used by law enforcement don’t exist only in the theoretical realm of scientific studies: They’ve manifested in the wrongful arrests of at least three Black men.

In June 2020, Amazon, Microsoft, and IBM announced that they would halt sales of facial recognition to the federal government until Congress passed legislation to regulate the technology. In May, Amazon extended what was originally a one-year moratorium indefinitely, drawing attention to the fact that, despite a year of pressure, no federal legislation has been passed to properly regulate a technology that has already proved its capacity to infringe on civil liberties and privacy.

The threat posed by facial recognition likely won’t be solved solely by companies like Amazon calling for federal regulation, or by more diverse data sets—which are built in part from the nonconsenting faces of victims of child pornography, U.S. visa applicants, and the deceased. What’s needed is more comprehensive legislation that bans—rather than lightly regulates—this technology. Policies written by people “who understand the risks that marginalized populations in the U.S. face” from the technology are a good step toward achieving such bans, as Os Keyes, Nikki Stevens, and Jacqueline Wernimont wrote for Slate in 2019.

So yes, let’s celebrate Maine’s new law: It reins in law enforcement and allows citizens to sue if they’ve been wronged by facial recognition technology and the people who wield it. But the law comes alongside the terrifying findings of the GAO report. The GAO’s previous recommendations urging U.S. Customs and Border Protection to ensure accuracy and establish privacy protections for travelers’ data have gone unimplemented for almost a year—will the latest report’s recommendations suffer the same stagnant fate? As we wait—perhaps indefinitely—for federal action, other states should follow in Maine’s footsteps and place stricter limitations on their own government’s use of facial recognition technology.

Correction, July 2, 2021: This piece has been corrected to include the full name of Albert Fox Cahn. 

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.