I had heard this story before: the one where someone’s tweet goes viral and is rapidly embedded in media stories around the globe. In fact, it’s a story that had been at the center of a court case I’d been handling for the past few years.
But this time, it was my tweet, and my video. And the view from the other side, as it turns out, was pretty disorienting.
On Monday afternoon, at around 1:40 p.m., I was sitting at my desk when I heard the sound of a helicopter flying very low. Soon, it practically sounded like it was landing right outside my office window, on the 21st floor of a midtown Manhattan high-rise.
The noise ended as quickly as it had begun. I stood up and went to the window—conditioned to do so, as are many of us who lived through September 2001—and looked up. At the top of a building a block north, I saw a flash of fire and a plume of smoke.
I’m really not certain why I took out my phone and started filming. Partly, you think, someone might need to see this at some point. But there’s also a feeling of helplessness in watching something unfold in the distance, and you just want to do something. I recorded for a little more than a minute, describing what I had heard and seen moments earlier. Then I stopped filming and called 911, annoyed with myself that I had not done that first.
Then I sat back down and did what most of us now do when we get word of an incident—I began Googling. Not finding much, I decided to post my video to Twitter.
What I had witnessed was the crash of a helicopter flown by Tim McCormack, an experienced pilot, longtime firefighter, and the sole victim of the accident. By all initial indications, McCormack prevented what could easily have been an even more tragic event by setting his helicopter down hard on the roof of 787 Seventh Ave. If so, McCormack is a hero, and his last act was one of selflessness.
My own place in this story is far less significant—deciding to make that Twitter post hardly constituted an act of bravery. Nonetheless, the consequence of that decision highlights how, as bystanders with powerful recording and broadcasting machines that fit in the palms of our hands, our rights, responsibilities, and expectations have changed. For a lawyer whose main area of practice has been at the intersection of technology and media—in that uncomfortable, often legally gray zone where innovation collides with tradition—it was especially illuminating.
The news business has been occupying that gray zone for a long time, a place where the rules, both legal and market-based, shift constantly. One of the issues for news organizations like Slate—and, frankly, all media companies—is when and how to use online content created by third parties.
Over the years I’ve seen the power of viral media. I’ve represented individuals whose creative content was turned into memes and became so widespread that they were co-opted by advertisers. I’ve represented authors whose entire works were uploaded onto sites where they could be read for free. But I’ve also defended media companies that used newsworthy content created by others against claims of copyright infringement. And, yes, I’ve represented technology companies that enable all of this, whose platforms make the exponential virality of content possible.
This time, it was my content in the middle of it all. Within minutes of posting the video, my phone started buzzing with Twitter notifications. Nine minutes later, I received my first inquiry from a news organization, asking for permission to use the video. Then came the interview requests.
Those requests—many, if not most, from clients—came in so rapidly that I had to stop responding, so I posted a new message saying as much.
By day’s end, the video had 1 million views on Twitter. I had been contacted by dozens of reporters, and because I freely gave usage permission, it had appeared on most major U.S. news sites and some abroad. I only did a handful of interviews, because at a certain point the attention began to feel ghoulish.
In the meantime, something else had emerged on my Twitter feed: a discussion about my rights as a creator. Soon after I uploaded the video, someone posted a warning that I should not give permission to news organizations to use it, because they would make money from it and not me. This sparked quite a debate.
One of the most radical shifts in media over the past 20 years is the rise of user-generated content. We as humans have been creating content since we first began painting pictographs on cave walls. But when a single smartphone video can receive more views than a carefully crafted advertising campaign, perhaps the rules of the game around content publication must change. But how? The questions are important—and difficult. Who gets to control the use of my content, especially if I publish it openly on social media? What if it relates to a newsworthy event? Who, if anyone, gets paid for its use, and under what circumstances? Can I put the content genie back in the bottle once I’ve released it to the world?
These are questions that news organizations and content companies now grapple with every day, questions I’m increasingly grappling with as a lawyer. But they are also questions that individuals are being forced to confront every day, and that is something profoundly new.
In any event, it’s one thing to see the impact of viral dissemination happening to others. It’s another thing entirely to be swept up in it yourself and to see how quickly something you’ve created can be blown about the world like a feather in a hurricane.
In fact, over the past two years I’ve been involved in a dispute that deals with the very issue debated in my Twitter feed: the use of an individual’s tweet, with a photo, that was subsequently embedded in multiple news stories.
In that case, Justin Goldman took a photo of Patriots quarterback Tom Brady on a sidewalk in East Hampton, apparently attempting to help recruit Kevin Durant to the Celtics. Goldman posted the photo to Snapchat, and then others tweeted it out. A number of sports news outlets, including a few I represented, published articles about Brady and embedded the tweet in their articles. Goldman sued some of them for copyright infringement. We argued there was sufficient legal authority indicating news organizations could embed third-party content without incurring liability. However, the judge ruled against us, allowing Goldman to bring an infringement claim.
It won’t be the last case in which these issues play out. But the upshot was that many media companies had to change their practices, doing their best to determine who owns social media content. Hence, all the requests I received. In my case, the answer was easy: I shot the video, so I owned it and could give permission for its use. If I had posted someone else’s video, on the other hand, it might not have been so easy to find the owner and clear the content for use, especially against a ticking news clock.
What about my choice to freely license my video? For me, it was also pretty simple. I firmly believe that it’s important for the press to be able to report on newsworthy content—though it was gratifying, and appropriate, that in most instances I was credited when my video was used. Still, I only gave permission for news reporting purposes so that in theory I could retain some control over future uses.
In a fast-moving, fluid situation, that was the best I could do. But I’m a lawyer with two decades of experience in such matters. The nuances of such rule setting are going to be very different for different people with different content in different circumstances.
The bottom line is that the use of content created by others needs to be fair, in the circumstances. If a news organization reports on a breaking story, “fair” likely means using whatever portion of the content is needed to tell the story, crediting the owner. If the use is for some commercial purpose that doesn’t directly serve an important societal interest, “fair” may mean payment. This is why the “fair use” concept in copyright law, which incorporates First Amendment concerns, is so important, messy as it often is in its application.
We all also need better clarity around how and where content is used, and by whom, as well as clearer data on who created it. We need to better equip our courts and governments to address technological transitions more rapidly so that individuals and companies have guardrails in place to help them make decisions. Revamping our institutions takes significant political and social will, and it won’t be easy. But if we don’t do so, the pace of change is going to leave them behind.

Part of the solution will be technologies that can help us better manage the fairness in all this. There are solutions under development—some using blockchain and artificial intelligence—that may allow us to better know the provenance of content and follow it throughout its life cycle. If we can do that, then perhaps we can credit people properly and pay people effectively, depending on the use. And reduce some of the disorientation.
Now that would be a media ecosystem worth striving for.
Some of the author’s colleagues do legal work for Slate.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.