Future Tense

The EU’s New Proposed Rules on A.I. Are Missing Something

Against a blue background with yellow stars, a robot hand reaches for a stack of papers.
Photo by Possessed Photography on Unsplash and Martin Barraud/iStock/Getty Images Plus. 

Thus far, most attempts at making policy for artificial intelligence have fallen into one of two camps: either outright bans on certain applications of machine learning—for instance, the facial recognition bans passed in a few cities in the United States—or very broad, high-level principles that offer no concrete guidance or specific rules, like the “Ethical Principles for Artificial Intelligence” that the Department of Defense adopted in 2020 for developing and implementing A.I. in a responsible, equitable, traceable, reliable, and governable manner. There are obvious drawbacks to both of these approaches: The former seems neither sustainable nor scalable, given the pace with which machine learning is advancing and the extent to which both public and private entities seem eager to adopt it, while the latter often amounts to little more than window-dressing and vague reassurances that policymakers are at least thinking about the big questions posed by automated decision-making.

That’s why it was such a big deal when the European Commission released its proposed rules for artificial intelligence last week. While the rules are not yet final, the 108-page document “laying down harmonised rules on artificial intelligence” is certainly the closest any regulatory body has come to developing detailed and nuanced rules for A.I., rather than just banning certain uses or promising to implement these systems in generally ethical ways. European Commission executive vice president Margrethe Vestager hailed the new document as a set of “landmark rules,” and others have since referred to them as “strict” or “ambitious,” or as an effort by the EU to become a “super-regulator.”

Ambitious, certainly, but it’s not clear how strict the new rules actually are or whether they will, in fact, position the EU to be the dominant voice in A.I. regulation worldwide. For all the detail in the EU draft rules, there’s still so much left undefined and unclear that it’s hard to feel they amount to much more than the Department of Defense’s broad principles. To be sure, the EU drills down into more specifics of what responsible A.I. might look like than the DOD has, but ultimately it still does not get very far toward defining what the proposed mechanisms for oversight and auditing will look like or how they will work.

The EU rules divide A.I. systems into a set of applications that are prohibited, a set that are designated as high-risk and must adhere to certain requirements, and a low-risk category of A.I. that is subject to less stringent oversight and regulation. So the A.I. system that drives an autonomous vehicle or analyzes a medical X-ray image might be high-risk—because any mistake or malfunction in that technology could lead to serious harm or injury—but the A.I. that filters spam messages out of my inbox or that translates websites for me from one language into another might be classified as low-risk because any error would be unlikely to have very high stakes. But the lines between these divisions are still quite blurry, as are the mechanisms intended to manage the high-risk A.I. systems.

The only truly concrete prohibition in the rules is a ban on “the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement”—in other words, real-time facial recognition analysis in public places—which would be forbidden “unless certain limited exceptions apply.” Beyond that, the proposed rules prohibit A.I. systems that violate “fundamental rights” or that have “significant potential to manipulate persons through subliminal techniques” or that “exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities.” But it’s not at all clear what kinds of A.I. these categories refer to. Is a machine learning algorithm that nudges me to shift back into my lane on the road manipulating me through subliminal techniques? What about autocompleting the sentence I’ve begun typing, or suggesting that I add something else to my online shopping cart? Slightly more specific is the proposed prohibition on “AI-based social scoring for general purposes done by public authorities.” The rules define social scoring as “evaluation or classification of the trustworthiness” of people “based on their social behaviour or known or predicted personal or personality characteristics,” though it’s not clear what would constitute a general purpose in this context. Would deciding whether to release someone on bail, or determining a prison sentence or a person’s eligibility for social welfare programs, be a specific or a general purpose?

The real complexity and uncertainty come in the proposal’s guidelines for high-risk A.I. systems. First, the very process of designating which A.I. systems are high-risk is complicated, and the EU rules take the approach of defining many application areas that could, potentially, fall under this designation depending on the risks they might pose. A lengthy annex to the proposed rules lists several application areas that have been deemed high-risk, including biometric identification, management and operation of critical infrastructure, educational assessment and admissions testing, recruiting and hiring systems for screening job applications as well as monitoring workers for promotion and performance evaluation, assessing people’s creditworthiness and eligibility for public assistance benefits, prioritizing the dispatch of first responders in emergency situations, assessing whether people are likely to commit crimes, identifying deep fakes, verifying travel documents, issuing travel visas and residence permits, and assisting courts in interpreting and applying the law. This is an extensive list (and that’s only part of it) of application areas that are deemed high-risk where they could “pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights.”

That means the guidance on high-risk A.I. systems could potentially apply to an enormous number of organizations that develop algorithms in these areas, but what exactly they would have to do to comply with these rules remains quite unclear. According to the proposal, they would be required to establish a risk management system with “regular systematic updating” to assess possible risks in their technology and adopt “suitable risk management measures.” These systems will also be required to be tested against “preliminarily defined metrics and probabilistic thresholds that are appropriate to [their] intended purposes” and developed using data sets that are “relevant, representative, free of errors and complete.” A.I. developers will be required to conduct conformity assessments to make sure their products meet these standards and also to produce technical documentation explaining a variety of details about the high-risk systems, including their purpose, how they were designed and developed, and their validation and testing procedures.

High-risk A.I. systems will also need to be designed so “that they can be effectively overseen by natural persons during the period in which the AI system is in use,” but what that will look like remains to be seen. Beyond providing people with the means of stopping the operation of an A.I. algorithm or deciding to disregard its output, the rules further specify that the oversight process should enable people to “fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible.” But it’s hard to imagine how that would work for complicated machine learning tools built on enormous training data sets.

The proposed EU rules are hugely more ambitious than any previous A.I. regulation, but, at the same time, they still offer very little concrete guidance to technology designers because all the challenge and significance of those rules lie in the details of their implementation. What would it mean for a human operator to be able to “fully understand” a high-risk A.I. system? What will conformity assessments look like, and who will perform them? What does it mean for a data set to be representative, complete, and error-free?

Applications of machine learning are so varied and, in some cases, so complicated that it’s understandably difficult for the EU—or anyone—to define a uniform set of specific rules that apply to all of them. Perhaps this first, general set of rules will eventually be specified in a way that recognizes the nuance, the diversity, and the technical sophistication of different artificial intelligence systems, but the EU is still a long way from reaching that end goal. The new rules are a first step, but nowhere near as clear or concrete as they will need to be to have a meaningful impact. Next, the proposed rules will need to be adopted by the European Parliament and EU member states—a legislative process that, with luck, will give regulators some much-needed time to figure out how to implement these guidelines and what they really mean.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
