Future Tense

The Lunacy of Banning TikTok From University Networks

[Photo: A hand holds a smartphone displaying the TikTok logo. Mourizal Zativa/Unsplash]

My students simply rolled their eyes when I brought up Texas public universities’ decisions to ban TikTok from their networks this week.

I’d seen these eyerolls before. They appear every semester during the privacy portion of my communication law course—the part when I tell them TikTok’s data-gathering and powerful algorithm are a privacy concern, and they should think twice about using the app.

They immediately dismissed the government’s effort to block TikTok from university networks—just as they do my privacy advice. (I have teenaged sons. I’m used to being ignored.) Their peers will just access TikTok using their data, the students told me.

My students aren’t the only ones rolling their eyes about the TikTok bans.

The bans are being justified on national security grounds because TikTok is owned by ByteDance, a Chinese firm. But blocking TikTok harms public universities’ missions of education, research, and free expression and inquiry, while doing little to address the stated problem. This kind of ineffective political theater creates a lose-lose situation for everyone.

Several Texas universities, including the University of Texas, joined the lose-lose crowd when they banned TikTok earlier this week. Oklahoma, Auburn, and Alabama did so late last year.

The bans have come in states where governors, like Texas’ Greg Abbott, have blocked TikTok from state-issued computers and phones. Employers can generally exercise control over how employees use the equipment they issue to them. The move to block TikTok on public university networks, however, crosses a line. It represents a different type of government regulation, one that hinders these institutions’ missions.

The bans limit university researchers’ abilities to learn more about TikTok’s powerful algorithm and data-collection efforts, the very problems officials have cited. Professors will struggle to find ways to educate students about the app, as well.

Many, as my students suggested, will simply switch from the campus Wi-Fi to their data plans and resume using TikTok on campus. In this regard, the network bans create inequality, granting greater free-expression protections to those who can afford better data plans while failing to address the original problem.

Crucially, TikTok isn’t just a place to learn how to do the griddy. It has more than 200 million users in the U.S., and many of them are exercising free-speech rights to protest and communicate ideas about matters of public concern. When the government singles out one app and blocks it on public university networks, it is picking and choosing who can speak and how they do so. The esteem and perceived value of the speech tool should not factor into whether the government can limit access to it.

The Supreme Court has generally found these types of restrictions unconstitutional. In Packingham v. North Carolina in 2017, justices struck down a state law that banned registered sex offenders from using social media, reasoning, “The Court must exercise extreme caution before suggesting that the First Amendment provides scant protection for access to vast networks in that medium.” Years earlier, in Ashcroft v. Free Speech Coalition, the court struck down a law that criminalized virtual child pornography, reasoning that lawmakers “may not suppress lawful speech as the means to suppress unlawful speech.”

Nearly a century ago, the first instance in which the Supreme Court struck down a law for conflicting with the First Amendment came in Near v. Minnesota, a case that involved government officials’ blanket ban on a single newspaper. The newspaper was a scourge to its community: It printed falsehoods and damaged people’s reputations. Still, justices reasoned the First Amendment generally does not allow the government to block an information outlet because it threatens the “morals, peace, and good order” of the community.

Each of these laws, while put in place by well-meaning government officials, limited protected expression in its effort to halt dangerous content. The First Amendment, however, generally doesn’t allow government officials to throw the baby out with the bathwater. Any limitation on expression must be narrowly tailored to a clearly stated government interest and nothing more.

So, what is the government interest in blocking TikTok? Perhaps the most coherent statement of TikTok’s perceived national security threat came from FBI Director Chris Wray in December. Because China maintains influence over private firms that do business in the country, he emphasized, Chinese officials could manipulate the app’s powerful recommendation algorithm in ways that distort the ideas Americans encounter. American TikTok users might see pro-China messages, for example, while negative information is suppressed. He also pointed to TikTok’s ability to collect data on users and to access other information on users’ phones.

The University of Texas’ news release from earlier this week parroted these concerns, noting, “TikTok harvests vast amounts of data from its users’ devices—including when, where and how they conduct internet activity—and offers this trove of potentially sensitive information to the Chinese government.”

These are valid concerns, but apps such as Instagram, Twitter, Snapchat, and YouTube also harvest vast amounts of data about users. Their algorithms do far more than simply supply information. Facebook’s and YouTube’s algorithms, for example, have both been found to encourage right-wing extremism. They are, as Wray and Texas’ news release lamented regarding TikTok, distorting the ideas Americans encounter. Why aren’t we blocking them, too? The obvious answer is that none of these companies is owned by a Chinese firm. But can’t firms such as Meta, Twitter, and Google inflict the same harms officials have listed, from within the U.S.?

Facebook did little to stop Cambridge Analytica, a British political consulting firm, from harvesting thousands of pieces of information about 50 million Americans to help Donald Trump’s presidential campaign in 2016. The firm used that data to target Americans with extremely specific, and at times false and misleading, political messages. It’s debatable whether the data ended up helping Trump’s campaign. Either way, this American-based app didn’t just pose a potential threat to democracy; it was one. It has never been blocked on public university networks.

Let’s consider Google. Is there a company that knows more about us? Google records browser and YouTube search histories, owns Fitbit—which has stored more than 30 million Americans’ biometric data—and often has access to our locations. Google’s in-home products know everything from what temperature we keep our homes to who’s ringing our doorbells. While there is no evidence Google services are purposely leaking or manipulating our information to benefit those who might harm democracy, the tech giant admitted to a data breach in 2018 that exposed half a million users’ personal data. Nearly every tech firm, including personal identity protection firm LifeLock, has been breached. Who has access to all the stolen personal information resulting from these breaches?

Ultimately, American-based tech firms collect, track, and share massive amounts of Americans’ personal data and have powerful algorithms that can distort the flow of information in ways that endanger democracy—just like TikTok. While TikTok is unique in its Chinese ownership, we have ample evidence American-based firms meet the criteria on government officials’ lists of concerns about the popular app.

There are ways to address the real threats TikTok might pose without banning it altogether. Universities could, for example, identify researchers on their campuses whose grants require certain security clearances and focus protections there; that would do far more to address the problem without undermining free expression and other institutional values. Those working on such projects could be placed on a separate network or face more stringent restrictions on their access to apps associated with national security concerns. Whatever the solution, the goal is to craft narrow remedies for a specific problem.

More nuanced solutions to these national security concerns might not save us from eyerolls from college students, but they could actually address the problem—and do so without undermining public universities’ missions and free-expression protections.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
