Future Tense

Arizona Now Has a Task Force Focused on Countering Disinformation

It’s an admirable goal. But there may be First Amendment implications.

Photo illustration by Slate (a collage of the Arizona state flag and a gavel). Photos by Nastco/iStock/Getty Images Plus and Popartic/iStock/Getty Images Plus.

On a rainy Tuesday afternoon in November, members of the newly formed Arizona Task Force on Countering Disinformation met in a conference room in downtown Phoenix’s Arizona State Courts Building to discuss ways to counter disinformation directed at the state’s judicial system.

The task force, which includes lawyers, judges, academics, and public information officers, is believed to be the first such entity in the country. It is currently holding public and private meetings on the subject, and by October 2020 it will prepare a report with final recommendations. Those could include finding ways to identify and counter disinformation; creating public education campaigns; preparing responses to backlash that might follow controversial cases; finding technology that can spot disinformation campaigns early enough to mount a response; expanding the court’s ability to get accurate information to a larger audience; and coordinating with influencers who would agree to post accurate information to their followers during a crisis. But also in play is the idea of getting platforms to remove what the court considers disinformation, an approach that has the potential to violate the First Amendment.

The task force, which launched in mid-September, has its roots in the National Center for State Courts, a nonprofit organization headquartered in Virginia that monitors issues affecting judicial administration around the country. The center has warned about courts becoming targets of disinformation, pointing to attempts by Russia and other nations to undermine the appeal of democracy and to weaken the West.

After hearing about this risk, Dave Byers, director of the Arizona Administrative Office of the Courts, shared with the bar a PowerPoint presentation titled “From Russia With Love: Countering Disinformation and Attacks on America’s Institutions.” Byers also worked with Chief Justice Robert Brutinel to create the task force.*

Although Aaron Nash, the Arizona Supreme Court’s communications director and a member of the task force, said that the Arizona judicial system had not yet been targeted with disinformation campaigns, the goal is to be prepared if it does happen. The possibility isn’t unheard-of. “Beyond the Ballot: How the Kremlin Works to Undermine the U.S. Justice System,” a report created for the Center for Strategic & International Studies, focused on Russia’s disinformation operations, some of which targeted democratic institutions of justice. For example, after Kate Steinle was shot by an undocumented immigrant in San Francisco, a sanctuary city, and a California jury determined that the shooting was an accident, Russian-linked Twitter accounts responded with anti-immigrant tweets and pointed to the “corruption of the criminal justice system, with jurors kept ignorant by activist judges and complicit liberal lawyers.” And in early 2017, Russian trolls pushed out tweets portraying both federal courts and James Robart, a district court judge in the Western District of Washington, as obstructionist after Robart issued a preliminary injunction blocking Trump’s refugee ban.

At November’s disinformation task force meeting, members examined those threats, starting with Nash presenting and narrating Byers’ PowerPoint. The presentation explained the importance of public trust and confidence in the courts—critical, it stated, to ensuring compliance with court orders. “Controversy can blow up any court case,” the presentation warned, regardless of its subject matter. Examples of conspiracy theories that could affect the courts included messages that the justice system “tolerates, protects and covers up crimes committed by immigrants,” “operationalizes the institutionally racist and corrupt police state,” “supports and enables corporate corruption,” or is “a tool of the political elite.”

The presentation noted that disinformation is often presented with enough credibility to be picked up by influencers and incorporated into legitimate news sites. But Nash seemed particularly concerned with deepfakes: very convincing fake images, audio, and video created with the help of artificial intelligence. The presentation warned that by 2021 it will be nearly impossible to tell deepfakes from the real thing; Nash said that point would come even sooner. The presentation pointed to a doctored video of Nancy Pelosi and a video that purports to be about the Department of Justice investigating the State Bar of Arizona. Concerns include not just the wide distribution of altered versions of real documents, but also deepfakes containing false or misleading information about sentencings, court opinions, reports, or findings.

The task force has a clear idea, then, of the challenges it faces. But deciding exactly what to do about them is much harder, and that is what members plan to work out in the coming months. For instance, in its paperwork, the task force says it will look into taking down information “while respecting individual opinions and First Amendment rights.” Nash admitted that getting content taken down might be unworkable, especially if the content could be identified as misinformation (information that’s incorrect, but not deliberately so) or opinion. “If it’s a call between the First Amendment and something else, the First Amendment’s going to win,” he told me. Still, he said, there may be blatant disinformation that needs to come down. When asked how potential takedowns might be enforced, Nash wasn’t quite sure. “If it ended up being one of our recommendations, that would be something along the way we would find out,” he said. “If it’s impractical, we would just note that in our report.”

Though it is all hypothetical at this point, I asked a couple of experts about possible issues the task force should be thinking about. “The natural inclination of a First Amendment lawyer is to be concerned … because it’s easy for disinformation policing to be a pretext for anything we don’t like that’s said about us,” said Neil Richards, professor of law at Washington University in St. Louis. But as long as efforts to protect the integrity of government operations against intentional interference are not used to chill legitimate dissent, he said, this is a step in the right direction.

Dipayan Ghosh, co-director of the Platform Accountability Project at the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School, isn’t a law professor, but he is a former Facebook employee who thinks a lot about how to tackle disinformation. He was impressed with the project: “I think we need to see more of this, honestly. To the extent that state authorities can stunt the effects of disinformation for their constituents, I think that’s wonderful. And it’s refreshing to see that the task force has considered the First Amendment concerns and is going to be respectful to them.”

Whatever the final recommendations of the task force might be, they all need to answer the question of what disinformation is, exactly. During the meeting, Arizona Supreme Court staff attorney Patience Huntwork presented a working definition: “false, inaccurate or misleading information that is deliberately spread to the public with the intent to undermine democratic process, sow discord, profit financially, or create distrust of government institutions.” And, she added, “disinformation should not be confused with misinformation, which is false information shared by those who do not recognize it as such, or with legitimate criticism, protest, or censure of government actions, institutions, or processes.”

There was some discussion of the term “profit financially,” since some actors may profit politically instead, and of whether to add “influence public opinion.” One member, data security and privacy attorney Fredric Bellamy, said he’d put Sean Hannity in the category of disinformation (something Huntwork did not agree with). “We’re going to have real problems with subjective intent,” Bellamy said. David Bodney, an attorney who focuses on media and constitutional law, asked whether the working definition of disinformation could end up sweeping in hyperbole or satire, like a John Oliver segment or a political cartoon.

The task force is also looking into whether it might propose state or local legislation requiring foreign agents to identify their content to the public. It’s unclear how this would differ from the Foreign Agents Registration Act, a federal law that already requires certain foreign agents engaged in political activity to periodically disclose their relationship with the foreign government they represent. (For instance, the Russian-backed media organization RT America registered as a foreign agent in 2017.) Nash says the committee has not yet settled on what exactly counts as a foreign agent, but he thinks it’s worth discussing whether people here or abroad who represent a government should be required to disclose that their content is sponsored by that government, or whether Twitter accounts (both individuals and bots) created by foreign agents should identify themselves as such.

Given the questions of the First Amendment, foreign agent identification, and more, it’s clear that the Arizona Task Force on Countering Disinformation has many details to iron out. What happens when the committee submits its recommendations in fall 2020—when the run-up to the presidential election will presumably have us all facing disinformation—will be critical. This could serve as a model of best practices for other state courts to follow. But—as is so often the case with attempts to protect people from propaganda, whether it comes from platforms or government—there is potential to deter controversial but legally protected speech.

In our conversation, Richards borrowed language from Justice Oliver Wendell Holmes, a dissenting voice in the 1919 Supreme Court decision to uphold an Espionage Act amendment making it a criminal offense to urge the curtailment of the production of materials necessary in the war against Germany. He said we need to be “eternally vigilant” to assure that “efforts to police this information don’t themselves become tools of political oppression, or the exertion of political power to chill.”*

Correction, Dec. 18, 2019: This article originally misquoted Supreme Court Justice Oliver Wendell Holmes. Holmes famously said, “We should be eternally vigilant against attempts to check the expression of opinions that we loathe,” not “utterly vigilant.”

Correction, Dec. 26, 2019: This article originally misidentified Dave Byers as the director of the State Bar of Arizona. He does not hold that position. It also suggested that the State Bar of Arizona was involved with the creation of the disinformation task force. It was not.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
