Social Networks Need Clearer Terms of Service

Research shows how hard it is for users to know how their information is shared.

Photo illustration by Slate. Photo by Carl Court/Getty Images

Earlier this month, Facebook added language to its Platform Policy for developers that specifically instructed, “don’t use data obtained from us to provide tools that are used for surveillance.”

This clarification followed an American Civil Liberties Union report late last year saying that a company called Geofeedia was marketing its social media–monitoring product to U.S. law enforcement as a tool to keep an eye on protests. In an email from Geofeedia to a potential police department client, which the ACLU obtained, the company boasts about how its special access to Facebook, Instagram, and Twitter data could be used to monitor protests. Geofeedia said that its system allowed it to have “covered Ferguson/Mike Brown nationally with great success.” It could access a vast amount of public posts, potentially in real time, allowing the company to isolate posts and users in specific protest locations. In a case study document, the company also states that during the 2015 protests in Baltimore following the death of Freddie Gray, police officers were able to run facial recognition technology on social media photos to identify individuals with outstanding warrants and “arrest them directly from the crowds.”

It’s a terrifying idea. Black Lives Matter movement activists prominently used these services to document and publicize protests. Additionally, people of color, particularly activists, have historically been disproportionately targeted for surveillance by U.S. law enforcement. But Facebook and Twitter shouldn’t have been surprised. Law enforcement monitoring of social media during protests is not new, and the Snowden NSA surveillance revelations included evidence of private companies offering social media–monitoring tools to law enforcement, in some cases with specific examples of how those tools could be used to monitor activists and protests.

Facebook and Twitter ended their relationship with Geofeedia after these allegations came to light, and Facebook’s recent clarification of its terms of service better cements this position as a formal policy. But the incident highlights a larger issue. Geofeedia is hardly the only company feeding social media data into its surveillance tools, selling them to various law enforcement and government authorities, or marketing them specifically for deployment during protests. Incomplete or vague policies governing the use of user data leave room for third parties to abuse their access. And without company disclosure about the steps taken to detect and prevent such abuse, users are left in the dark about how—or whether—their privacy rights are respected.

On Thursday, Ranking Digital Rights released its 2017 Corporate Accountability Index, which examines public commitments and policy disclosures relating to freedom of expression and privacy from 22 of the world’s leading internet, mobile, and telecommunications companies. (Disclosure: I work with RDR and worked on this report. RDR is a project of New America, which is a partner with Slate and Arizona State University in Future Tense.) The 2017 index found that companies don’t give users enough information about company policies and practices to make informed choices about which services to trust with their personal information. Companies often have vague or incomplete disclosures about the types of information they collect from users, why and how this information is being collected, and with whom it is shared.

And even if users are able to discern some information about how a company is collecting and using their data, it’s often unclear what they can actually do about it. For example, Facebook’s data policy details the types of user information it collects, how it uses the information, and how the information is shared. However, the description of how users may manage or delete information about them is limited: it covers only content users actively share, not information the company collects about them, such as the types of content users view and interact with, the people and groups a user is connected with, and user information Facebook collects from third-party sites.

We also found that with many companies, there’s limited information about what happens if you decide to quit a service and delete your account. It’s not always clear whether the company will actually delete your data, or how long you need to wait for it to do so. For example, Google’s help page for recovering a deleted account says that users have “about 2-3 weeks to recover” their accounts, but it doesn’t indicate whether this is because account information is permanently deleted after this time.

In an emailed statement following the public outcry and revocation of its commercial access to social network data, Geofeedia CEO Phil Harris stated that the company wasn’t “created to impact civil liberties” and that it was changing direction. But that’s beside the point. Regardless of intent, the way companies handle the user data they are entrusted with or given access to can have significant implications for users’ human rights, and their terms of service and other policies need to take that into account.

The Ranking Digital Rights index shows that companies need to do more to prevent these kinds of privacy abuses. One solution is to conduct regular human rights impact assessments evaluating how all aspects of a company’s business—including its relationships with private third parties and their access to user data—affect freedom of expression and privacy. Human rights impact assessments should be conducted not only for existing services, but also before launching new ones or entering new markets. Before entering business relationships with third-party companies like Geofeedia, social network providers should assess the privacy risks involved in granting them access to user data, and evaluate whether such access is warranted in the first place.

As Nicole Ozer, technology and civil liberties policy director for the ACLU of California, put it, “The ACLU shouldn’t have to do a public records act request to tell Facebook and Twitter what their developers are doing.” Though Facebook’s policy clarification is a step in the right direction, Facebook, Twitter, and other companies should back up such policies with enforcement. Social media companies must be more transparent with their users about the steps they are taking to crack down on surveillance tool developers like Geofeedia, and provide evidence that these commitments are being implemented.