Researchers from University College London have developed an artificial intelligence algorithm that predicted the outcome of cases that came before the European Court of Human Rights with 79 percent accuracy.
Let’s not get ahead of ourselves: though the researchers say the process could help speed up the review of cases submitted to the court, lead researcher Nikolaos Aletras said, according to UCL’s website, that the team doesn’t “see AI replacing judges or lawyers.” But it’s nevertheless fascinating research.
By inputting the information from 584 human rights cases—facts, circumstances, laws, topics, details, language, and rulings—into the computer program, the researchers “taught” it which complaints were found to be violations of human rights and which were not. Then, the researchers presented the algorithm with different human rights cases and asked it to guess judges’ rulings. When asked to rule on each part of the decision, the AI was correct about 73 percent of the time. But when the algorithm looked at the topic and circumstances of a case together, the researchers found the predictive accuracy was higher—79 percent.
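The general idea behind this kind of system is standard text classification: represent each case’s written facts as word counts, “teach” a model which past cases were ruled violations, then score unseen cases. The sketch below is a toy illustration of that idea, not the researchers’ actual model (their published work used different features and methods), and the training sentences are invented stand-ins for the 584 real cases.

```python
# Toy sketch of learning "violation" vs. "no-violation" from case text.
# This is a simple Naive Bayes bag-of-words classifier for illustration only;
# it is NOT the model used in the UCL study.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}      # label -> Counter of word frequencies
        self.label_counts = Counter()

    def train(self, cases):
        # cases: list of (case_text, label) pairs
        for text, label in cases:
            self.label_counts[label] += 1
            self.word_counts.setdefault(label, Counter()).update(tokenize(text))

    def predict(self, text):
        # Pick the label with the highest (log) posterior probability,
        # using add-one smoothing for unseen words.
        vocab = len({w for c in self.word_counts.values() for w in c})
        total_cases = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            total_words = sum(counts.values())
            score = math.log(self.label_counts[label] / total_cases)
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / (total_words + vocab))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training data standing in for real case facts:
training = [
    ("applicant detained without trial for months", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint reviewed promptly fair hearing held", "no-violation"),
    ("hearing held within weeks counsel present", "no-violation"),
]

clf = NaiveBayes()
clf.train(training)
print(clf.predict("detained for months without a lawyer"))  # → violation
```

Even this crude version shows why non-legal facts can dominate: the classifier never sees the legal arguments at all, only the circumstances described in the case text.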
Researchers say the increase in accuracy when the program takes the topic and circumstances of cases into account is telling. The UCL website said the researchers found the court’s judgments to be “highly correlated to non-legal facts rather than directly legal arguments, suggesting that judges of the Court are, in the jargon of legal theory, ‘realists’ rather than ‘formalists.’ ” That is, the consideration of the real-life facts of a case may carry more weight in judges’ decision-making than the letter of the law. While this finding seems to bolster the need for human judgment in deciding these cases, another researcher on the team, Vasileios Lampos, told Motherboard that the AI could still be useful.
“The court has a huge queue of cases that have not been processed and it’s quite easy to say if some of them have a high probability of violation, and others have a low probability of violation,” he said. “If a tool could discriminate between the classes and prioritize the cases with a high probability, then those people will get justice sooner.”