As I watch the videos and images pouring out of Ukraine, I am reminded of a conversation I had once with a colleague while I worked at YouTube. My job entailed writing the platform’s policies for political extremism and graphic violence, and during high-profile conflicts in war zones, terrorist attacks, and other sensitive moments, I had to help decide what content would stay up and what would not. After one particularly tense day involving state violence against separatist fighters, I turned, exhausted, to a colleague and asked, “What would we have done if YouTube existed at the start of the Iraq war?” We paused, considering the gravity of the question, and then turned back to the mountain of work in front of us. I knew it was only a matter of time before a full-scale ground conflict would erupt and be recorded from start to finish for us all to consume. (Of course, since its founding two years after the war began, YouTube has had countless videos documenting the American-induced tragedy in Iraq.)
Now, every second of Russia’s invasion of Ukraine has been documented by Ukrainian citizens (not to mention those in diaspora).
Social media platforms have become tools of this war, evidenced not only by the thousands of posts, videos, tweets, and images uploaded but also by Facebook’s decision to ban Russian state media from running advertisements and monetizing their content, out of concern about spreading propaganda. YouTube also announced similar restrictions and went further by announcing it would block access to Russian state media operator RT in Ukraine (based on a request from the Ukrainian government) and would evaluate “what new sanctions and export controls [on Russia] might mean for YouTube.”
Employees at YouTube, Facebook, Twitter, and other platforms, and the army of content moderators buttressing their Trust & Safety work, have unwittingly become historians and archivists of this conflict. YouTube’s policies on graphic violence can dictate, for example, whether a video of a Ukrainian civilian’s murder by Russian forces stays up or is removed. The video platform also said it has already removed hundreds of thousands of channels for violating its community guidelines on coordinated deceptive practices—but what if in the sea of genuine violations, critical evidence of a war crime is removed unintentionally? With the sheer volume of content, do social media platforms truly have the capacity to review appeals accurately and thoroughly? Do they even have sufficient numbers of Russian- and Ukrainian-speaking content moderators? Are companies relying on automated detection of potentially graphic violent content uploaded from Ukraine—and if so, are these tools deployed to remove content automatically or only to send material for human review? Removals may stem from genuine violations of, say, a platform’s violence or coordinated inauthentic behavior policies, from government pressure, or from company error.
These questions, tensions, and concerns, while amplified during this recent outbreak of war, are not new. During times of war and occupation, companies’ actions have demonstrated that errors have grave consequences. YouTube and others, while responding to company emergencies related to the proliferation of violent, extremist content on their platforms, erased thousands of pieces of evidence of human rights violations in Syria. Facebook has faced claims of censorship and erasing documentation related to Israeli human rights violations in the occupied Palestinian territories. And after rightful public outcry, companies quickly correct course, touting their oft-repeated apologies and reminders to us all that they sometimes make mistakes.
Whether this war ends in a week, a year, or 10, what do we do with this content? And what do we do with the content that is already circulating from other conflict zones and sites of tragedy on social media? The answer is that we need something new, for both the war in Ukraine and other conflicts to come: an independent database and institution dedicated to preserving this type of content and centering the lived experience and expertise of the people living through this violence.
This approach is not meant to suggest that user-generated videos and photos from sites of violence have no place on social media. The content serves an important role on YouTube, Facebook, Twitter, and elsewhere: a battle for public opinion is unfolding in real time, and the platforms’ sheer numbers of users and the product features designed to facilitate communication, interaction, and dissemination are powerful tools to educate and engage with viewers. Complicating this reality is the fact that companies have their own pressures and policies that may prevent the full extent of uploaded material from remaining on their platforms.
At the same time, we find ourselves in a precarious, highly dependent dance with the tech titans: a policy change, an enforcement modification, a poorly trained moderator, an imprecise detection algorithm, or an inadequate appeals mechanism can all lead to the erasure of material. Even without these hiccups, depending on the type of footage, companies may not be able to justify leaving up certain pieces of material—for example, an ultra-graphic piece of content—no matter the newsworthiness that accompanies the horrific scene of violence. Content that does not violate community guidelines is not spared, either. A user’s decision to delete their account brings with it the removal of all their videos, comments, and posts. Factors outside of the platform can also complicate our ability to consume, share, and document this material: government requests for removal—or even more heavy-handed acts such as blocking access to social media—will surely be at play. (Russia’s extensive censorship apparatus aimed at social media networks has already been well documented.) Russia’s mass media regulator Roskomnadzor could, for example, begin filing requests for Twitter, Facebook, and YouTube to block content locally in Russia or argue that the content should be removed globally.
The other role this content plays is evidentiary, a fact open-source intelligence experts know all too well. While content removal has deleterious effects in the short term for Ukrainians fighting to obtain information and for the international community struggling to comprehend the escalating violence, there are also grave questions about any potential erasure’s impact on future accountability efforts. Each video, tweet, or post from Ukraine is not only a broadcast to the world, but also evidence for when the international community is ready, hopefully, to make sense of the illegality that is unfolding before us all. In other words, there’s a battle for evidence as well.
Independent databases are not new in the world of social media. The National Center for Missing and Exploited Children, a private nonprofit, has long operated its CyberTipline, which collects images and URLs of child sexual abuse material and other heinous content from technology companies. (Companies are legally required to provide content to NCMEC.) NCMEC then facilitates referrals to law enforcement agencies for further investigation. In 2017, the largest social media platforms launched the Global Internet Forum to Counter Terrorism, an industry initiative that maintains a database of removed terrorist material that companies contribute to and also use to find similar uploads on their own platforms that may have escaped detection. GIFCT also runs a “content incident protocol” to deal with breaking events and livestreamed violence, introduced after the horrific Christchurch, New Zealand, terrorist attack in which a white supremacist took the lives of more than 50 Muslims during Friday prayers. The GIFCT eventually became an independent 501(c)(3) organization with its own staff not only to oversee the database but also to facilitate knowledge sharing between tech platforms and research in the space.
Other efforts in this space include the International, Impartial and Independent Mechanism for Syria, established in December 2016 following a resolution from the United Nations General Assembly. The IIIM’s core responsibility is to “collect, consolidate, preserve and analyse evidence of violations of international humanitarian law and human rights violations and abuses” while also preparing materials to aid in future criminal proceedings. The IIIM collaborates with a number of stakeholders, including nearly 30 Syrian nongovernmental organizations, to obtain information that may be in these stakeholders’ possession. The IIIM, similar to the GIFCT database, is not open for public viewing or access.
Though the IIIM has helped usher in novel governance approaches to the collection of potential evidence, the body is not without its problems. The IIIM has struggled to secure adequate funding in the past. It is also unclear to what extent Syrian nationals—who may have the best knowledge of the material the IIIM seeks to catalog—can participate as IIIM staff. While the IIIM cooperates with many Syrian civil society organizations, this centralized data aggregation method may, as Beth Van Schaack notes in her book Imagining Justice for Syria, “threaten smaller documentation efforts whose holdings are akin to their intellectual property.” Finally, the IIIM approach is slow and perhaps not easily scalable enough to begin critical work immediately. After all, the IIIM was created through the U.N. apparatus, is subject to the constraints common to bureaucratic bodies, and is tasked to focus on one conflict (Syria) during a specified time range.
The Human Rights Center at the University of California, Berkeley, School of Law has described the IIIM approach as a “hybrid model,” where NGOs, social media companies, and others provide content to a mechanism. The GIFCT structure is more akin to what the center refers to as a “voluntary partnership model,” where content is voluntarily shared “with an external repository.” The NCMEC approach is unique because of the “legal compulsion model” that requires entities to share material with the organization and provides statutory protection to NCMEC to store this content indefinitely.
With the range of database and archive initiatives explored above, it may seem that something is already in place to collect the evidence millions of Ukrainians are uploading onto social media in real time. The truth is, there isn’t. A “legal compulsion model” would be too slow for a fast-moving conflict; it is better suited to a pernicious, ongoing problem like child sexual abuse material, which does not emerge in specific periods of time and for which platforms can reasonably be legally required to provide material. Additionally, any legal compulsion method would raise politically fraught questions, as lawmakers would balance geopolitical interests in deciding which conflicts they care enough about to mandate platform cooperation. Companies themselves should not be encouraged to create their own databases on their platforms, because doing so would take the real concern of platform power to frightening new heights: their decisions would influence not only evidence gathering but also the entire viability of prosecuting suspected perpetrators of serious international crimes.
Human Rights Watch has proposed a “mechanism to preserve publicly posted content that is potential evidence of serious crimes” that could be “established through collaboration with an independent organization that would be responsible for storing the material and sharing it with relevant actors.” What is missing in these discussions—real or theoretical—of evidentiary databases and archives of violations is the question of the people who experience a particular conflict, and how they can be better engaged in documentation, verification, and categorization efforts. In other words, those people have just as important a role to play, but we overlook this fact in our reliance on institutions to do this work for affected communities, often too little, too late.
Right now with Ukraine, and in other situations of bloodshed, we tend to reduce actors to archetypes: the attacker and the attacked, the perpetrator and the victim. While these labels are not inaccurate, our collective reliance on them obstructs our ability to think of victims in multidimensional ways. The Ukrainian citizen-users experiencing the invasion, those surviving or at minimum trying to, can contribute to vital documentation efforts. While civil society groups play important roles in coordinating testimonial evidence, these efforts often take place only after a conflict has unfolded. Additionally, with millions of citizens armed with smartphones and recording footage, we could finally start seeing victims of serious crimes as people with agency as well.
What we need is a Unified Public Trust for evidence of human rights violations, created for the people experiencing these crimes and powered by those same voices, a trust that is available for all situations of conflict in the world, known and yet to come. For example, the UPT can issue an alert notifying Ukrainian users that they can begin uploading material to the trust. Ukrainian citizens could then upload content directly to the UPT, where Ukrainian-speaking volunteers can be trained on critical steps in data verification and open-source intelligence to ensure that the material being preserved is authentic and accurate. The UPT can then give access to those most affected: families who had loved ones killed in the unfolding violence, researchers and investigators, academics, and beyond. This decentralized approach harnesses the power of the real experts, the real historians: the people living through the tragedy. The UPT can also work with existing archival efforts from NGOs (like the Syrian Archive) and the United Nations directly. And this all can be funded through a variety of sources: individual donations, government grants, and contributions from social media platforms as well.
This model does not suggest replacing social media platforms with the UPT. After all, YouTube, Facebook, and Twitter serve particular needs in times of crisis, most importantly to disseminate knowledge and influence others rapidly. The UPT is not intended to serve this role—its mandate is to account for the fickle nature of content moderation, changing policies, and inconsistent enforcement, and to center people’s lived experiences as a catalyst for accountability.
We’ve seen inspiring scenes of Ukrainians defending themselves against Russian aggression, whether through taking to the streets, protecting their cities, or disseminating calls for aid through social media. The UPT is a chance to give Ukrainians, and countless others living under siege around the world, the opportunity to make history for themselves, and all of us.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.