It was hard to walk away from last week’s TikTok hearing with much optimism. Over the course of four hours, lawmakers grilled CEO Shou Zi Chew about content moderation, the mental health impacts of social media, data privacy, cybersecurity, and the relationship between the app, its parent company ByteDance, and the Chinese government. Near the start, Rep. Frank Pallone of New Jersey said that he wasn’t convinced that “the benefits outweigh the risks that [TikTok] poses to Americans in its current form.” Indeed, during those four hours, it would have been easy to forget that TikTok provides any benefits at all.
As I watched, I kept coming back to a simple truth about our lives online: Social media is bad for us. Social media is also good for us—both of these things can be true at the same time.
This may seem obvious, but our conversations about tech regulation rarely accommodate that truth. I’m a tech ethics and policy professor, and a lot of my research focuses on this tension. I also have more than 100,000 followers on TikTok, where I teach people about the same kinds of issues that Chew was grilled about last week.
The problem I saw in a lot of these lines of questioning—and much of the discourse about TikTok in general—was a fundamental misunderstanding of how TikTok is used by many, many people. Focusing on the risks of the app without contextualizing those risks within the benefits leads to flashy, absolutist measures like outright bans, at the expense of middle-ground proposals (for example, bans on government devices, stronger privacy protections, or design changes). When it comes to social media in general, it’s critical to identify and deeply understand harms so that we can do everything we can to mitigate them—while also not undermining the unquestionable good that can be found on social media.
I suppose before I start defending TikTok I should disclose that I have made a total of $468.72 from the app’s creator fund in two and a half years. The content I’ve created and shared represents, for example, 130 videos about A.I. ethics, 54 videos about data privacy, 44 videos with advice for Ph.D. applicants, 40 videos about life with Type 1 diabetes, and 38 videos about content moderation. There is exactly one video of me doing a TikTok dance.
I started on TikTok in 2020 because, as with my YouTube channel, I wanted to reach potential STEM Ph.D. applicants to help level the playing field with my advice. I didn’t think it would work—after all, who would go on TikTok and search “Ph.D. statement of purpose”? But my third video ever received nearly 50,000 views overnight, and I woke to dozens of comments from Ph.D. applicants thanking me for the help. I’ve never been more impressed by a recommendation algorithm; I helped more people in one day on TikTok than I had on YouTube in four months.
For several months after that, my content focused on grad school advice and academic humor, and then one day a video about possible racism in technology crossed my feed. I decided to “stitch” the video to explain how a lack of diversity in training data can result in racial biases in machine learning. Very quickly, this became my most viewed video, and I suddenly realized the power of TikTok as an education platform. My content shifted in that direction, and I even decided to teach my university-level tech ethics and policy class on TikTok, 60 seconds at a time. I maintain an online syllabus with readings and links to videos on the same topics that I teach in my class—and I reach a much wider audience than the 40 students who attend my college class in person at any given time.
Counter to what its fiercest critics seem to believe, even if TikTok was designed as an entertainment platform, it has become much, much more than dancing and dog videos. The reality is that everyone is learning on TikTok, whether they like it or not. One of my favorite comments I’ve ever received on a video was “I didn’t come on this app to learn things today, but thanks.” From STEM to books to anti-racism to life hacks, the app has created an expansive definition of learning. Its recommendations not only pinpoint what we might want to learn, but also what type of content might engage us. I follow an allergist who hula-hoops and two different creators who teach math in drag. In December, I decided to throw on a Santa hat and teach about A.I. bias and predictive policing with a Night Before Christmas parody.
TikTok also helps people find community, sometimes in critical ways. A growing body of research shows how TikTok can be an important source of social support, including for young people. Part of the magic of the TikTok recommendation algorithm is its ability to bring people together because it really can push people toward what is “for you.” This push is also what sets TikTok apart from other social media platforms in its ability to help creators find an audience.
Platforms like Instagram, YouTube, and even Twitter are also trying to surface the content that is most “for you,” but TikTok unquestionably has a secret sauce that other platforms don’t (this may be due as much to early design innovations as to the algorithm itself). Regardless, there is a reason that I have four times as many followers on TikTok as on Twitter, despite having been on Twitter for more than a decade. And TikTok allows me to reach a greater diversity of people, too—much of my TikTok audience consists of curious scrollers, not necessarily the sort who would normally seek out scientists on Twitter or take a college-level tech ethics class. These benefits are also magnified for people who are in less privileged positions than I am, who otherwise might not have a platform. TikTok has become a hub for racial justice movements for a reason.
Of course, the same features that can help people find an audience and community can also send them down more nefarious rabbit holes. Also a significant problem on other platforms such as YouTube, the issue of TikTok recommending toxic, inappropriate, or false content was a constant refrain during the congressional hearing, as was the challenge of policing that content. Some of lawmakers’ questions implied fundamental misunderstandings of how content moderation works (and what the challenges are) on all social media platforms. But other related questions at the hearing were thoughtful: Rep. Tony Cárdenas of California asked how many Spanish-speaking moderators are on the platform, and Rep. Yvette Clarke of New York asked about content moderation bias against Black TikTok creators.
These are real concerns. In fact, most of my content about TikTok is heavily critical, on exactly these topics. Recommendation algorithms pushing harmful content, ineffective moderation, lack of privacy controls, targeted advertising, children’s access to inappropriate material, harmful mental health impacts on teenagers, the potential for algorithmic manipulation, data security … we’ve heard about these before, in similar hearings with Mark Zuckerberg or Jack Dorsey in the chair. Most of these issues can and should be addressed with design changes, regulation, or both. But for the government to actually tackle these challenges, we need thoughtful legislation, such as federal data privacy laws—which if properly executed would decrease the potential for data misuse by not only foreign entities but also bad actors within this country and big tech companies themselves. Such laws would help all platforms do better, not just TikTok.
Of course, its relationship to China does set TikTok apart from other platforms and compound perceptions of risk to user privacy. I am not an expert on national security, and though I am concerned about government overreach in the RESTRICT Act even beyond TikTok, if there are actual risks they shouldn’t be ignored. But I hope that as our lawmakers continue to consider those risks, they take the same care to understand the benefits of the app for the individuals and communities who have found one another there. And I hope that this conversation leads us toward meaningful regulation that will get us to more of the good and less of the bad, far beyond just TikTok.
In the meantime, I’ll keep making videos that critique TikTok as well—because as someone who understands the value the app brings to millions of Americans, I would rather see it better than gone.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.