Countries around the world are quickly developing approaches to social, political, and economic changes driven by big data and artificial intelligence. What seems fine in one place is often horrifying in another. China has embraced pervasive facial recognition technology, which gives many Americans the heebie-jeebies. While people in the U.S. are often pretty blasé about handing over other kinds of data for companies to feed machine learning algorithms, Europeans have been more cautious. They’ve embraced a “right to be forgotten” and the General Data Protection Regulation, which puts the burden on companies to protect the data of EU citizens and has already resulted in more than $50 million in fines for Google.
The EU has taken a more skeptical view on the analytics side as well. It’s developed a set of guiding principles for ethical considerations, such as transparency and nondiscrimination, in A.I. research and development. This effort appears to be a first step toward regulating where, and how deeply, A.I. can be deployed.
Now a new law in France makes it illegal to engage in what is often called “judicial analytics,” which is (roughly) the use of statistics and machine learning to understand or predict judicial behavior. In one recent example of this type of analysis, U.S. researchers gathered court data to empirically study how pretrial detention reduces criminal defendants’ bargaining power in plea negotiations. If similar work were carried out in France, the researchers might find themselves on the other side of their study: Breaking the new law carries a penalty of up to five years in prison.
This legislation is in some ways an extension of the European concerns around privacy, and it also requires that the names of parties to a case be redacted. Supporters of the law say that analytics can still be done without judge information. But engaging in serious analysis or prediction of judicial behavior requires accounting for individual differences among judges, which can be substantial. The ban, and the threat of a stiff criminal penalty, is also likely to exert a broader chilling effect on the field as startups and researchers turn to areas where they don’t have to worry about accidentally carrying out an analysis that might land them in jail.
In recent years, A.I. has made extraordinary inroads into the practice of law. Efforts to digitize legal texts, from federal regulations to courtroom transcripts, have created a nascent global industry in legal analytics. France is attempting to turn off the data spigot by banning the use of public information to “assess, analyze, compare or predict” how judges make decisions. The result is that the French will have less information about how their judicial system works and fewer tools to help them navigate it.
On the other end of the spectrum, China recently data-dumped millions of legal texts into the public domain as training data for future A.I.-based automation. The U.S. manages a worst-of-both-worlds approach, making legal filings available but locking them up behind an expensive government paywall.
By some accounts, the French fear is an economic one: More public access to raw or processed legal data may reduce the need for lawyers. But there are plenty of other significant reasons to be concerned about the role of big data in law. One is the problem of bias. The human-generated data used to train machine learning algorithms can easily be tainted by racism, sexism, or other biases. Machine predictions will “learn” what a human would do in a similar situation, which, given poorly prepared training data, all too often means a discriminatory result. For example, data analytic tools used in bail proceedings have been criticized for assigning higher risk scores to black defendants, which could lead to their being denied bail and spending more time in jail while they await trial.
A.I. tools may also exacerbate wealth inequalities in the legal system. Already, access to legal services is doled out according to ability to pay, with money buying higher-quality representation. A.I. could supercharge this phenomenon, with only the rich able to buy the latest software, while the rest of us are stuck with wetware humans with their limited memory and processing speed. Alternatively, government cutbacks in legal services for the poor might eventually result in over-reliance on subpar A.I. tools, with (possibly) more nimble human lawyers and customized software reserved for the well-heeled.
But a ban is a misguided approach that throws out the good with the bad. First of all, it may be illegal: Both the French Constitution and the European Convention on Human Rights protect freedom of speech, which this law restricts. From a practical perspective, A.I. tools can help expand access to courts and legal advice, which too often are luxuries available only to the well-off. Already, companies are springing up to help courts create online platforms that do not require people to take time off work or travel long distances to address legal problems. Machine learning has substantial potential to help streamline and improve government decision-making in a range of contexts, especially where similar matters must be resolved in large volumes.
Banning analysis of judicial opinions doesn’t just prevent lawyer bots or hamstring legal tech innovation. It also bars research. Judges might prefer to be insulated from the kind of outside scrutiny that, for example, has found shocking amounts of arbitrariness in U.S. immigration proceedings, but the public and policymakers deserve access to this information. The digitization of legal texts has been a boon for researchers who study courts and other legal institutions. In our recent book Law as Data, we look at how researchers are using this data to study everything from the U.S. Supreme Court and California parole boards to state lawmaking and the structure of European statutory law. If we had included a chapter on French courts, the book might now be banned in that country, an absurd effect that would chill research and degrade the quality of public discourse about legal decision-making.
Banning statistical analysis of legal opinions is silly and perhaps futile, but that doesn’t mean nothing can be done to anticipate and address the risks posed by a more technologically driven legal system. What we need is more accessible and transparent legal data, along with the right policies and incentives. The United States could take a leading role by opening up access to its PACER database of legal documents, which is currently behind a paywall. Doing so would give researchers and tech companies a treasure trove of data that could be used both to study the legal system and to develop new technologies that improve access to justice. Other legal materials, such as the decisions of parole boards or disability benefits adjudications, could similarly be opened up. Courts at every level should be more aggressive about using technology to reduce the burdens of the legal system, especially on poor people. At the same time, decision-makers need to be educated about the risks that accompany technology and data analytics and should hold any new technology to a high standard before it is deployed. Funding for research that specifically ferrets out and addresses bias and discrimination in machine-enhanced legal systems should also be a priority.
Full-on lawyer bots may be a long way off, but other transformations are already underway as tech companies and researchers seize on new data and analytic techniques to inform and improve the legal system. France’s attempt to ban analysis will not turn back this tide, although it might hamper progress locally. But that does not mean that policymakers can take a laissez-faire approach. Instead, they must create open and transparent channels of information while also closely attending to both the benefits and the risks of any new technologies that are incorporated into the legal system. Access to justice remains one of the most important commitments that a society makes to its people. New technologies can help us keep that promise, but only if we keep a watchful eye on the machine.
Update, June 25, 2019: This article was updated to clarify that the legislation specifically bans the naming of individual judges.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.