My job title is content creator. I get paid per post—on TikTok, Instagram, or YouTube. Viewers watch my content for a few seconds, perhaps are briefly entertained, and then swipe next. I am called an “influencer,” which means I am paid to showcase products in my content, thereby influencing people to buy them. I am referred to as “the talent” in contracts in which my manager negotiates my rates and duties. I have 1.3 million followers across all my platforms.
For each item, I get a package in the mail full of product and a campaign brief in my email inbox. I am usually compensated $2,000 to $8,000 for executing the brief. (One time I was paid $18,000 for a single TikTok, but that was a shocking anomaly.) The amount changes depending on the platform. The videos are 15 to 60 seconds long. I am instructed to mention specific details about the product and when to add text overlay. Absolutely no profanity ever. The campaigns require that I dig deep into why I love the product: It transformed my skin, I have more energy than ever before, it’s the most comfortable shoe I’ve ever worn. Always hyperbolic.
They tell me to be natural and not to stray too far from my authentic self. I edit and splice the video so that there are no pauses in my speech. I have zest and zeal that teeters on mania. My mouth is clenched into a smile for the entirety of the clip, jaw flexed and bulging. People enjoy this. A smiling girl effusing about her love for exercise on the internet is not only palatable but highly lucrative, I’ve learned.
I decide to be vulnerable—often celebrated as “real” on the internet—and make a video about my struggles with clinical depression. I post it and watch its upload progress, jumping from 17 to 83 percent in a matter of seconds. One hundred percent. It’s live. Within five minutes, someone comments that they don’t like the video because it makes them sad; I delete it immediately. What was I thinking? That’s not my brand. The next day I post a tutorial on how to make my favorite kale smoothie. People demand the name of the pea protein powder I used. Someone else comments that they love me so much.
I’ve been on the other side of influencing, of course—a victim to its insidious powers. I’m mindlessly scrolling, and then a bright color, beautiful face, or particular texture catches my eye. I tap on the photo and am pleased to discover the product is tagged. Suddenly I have a digital shopping cart and hopeful visions of my future self. I’m not buying a $70 shirt. I am buying poreless skin, a flat stomach, long legs. I’m buying a type of beauty that I will never have. I recognize (and worry) that when I post images of my own body on social media—advertising clothes or powders or lotions—I may be the very source of toxic thought spirals of body comparison for other women.
The shirt arrives the following week. The sleeves are too narrow, and the fabric is itchy. I don’t bother returning it; a trip to the post office feels wildly inconvenient. So I throw the garment into my closet. I will try it on occasionally and always take it off, choosing another top. The shirt will come to represent my eternal dissatisfaction with my body. I will take it to Goodwill in a few years when I have finally accepted that I will never lose the 5 to 7 pounds that would make it fit.
A few months ago, I was a guest on a podcast. The host introduced me as a “rising TikTok star” and a “symbol of body positivity.” The latter was news to me and I know she meant it as a compliment, but the remark stung. She told me I inspire other women to be confident in their normal, healthy-looking bodies, that I demonstrate how exposed ribs are not a prerequisite to feeling beautiful. On a conscious level, I know this is a good thing. But I think there will always be a part of me that wishes I looked like the women who monopolize my own Instagram and TikTok feeds—sinewy, svelte, smooth, perfect.
I listen to NPR on my drives from Rhode Island to Boston, where I attend classes for my creative writing MFA program. I hear rhythmic, trustworthy correspondents compare Big Tech to Big Tobacco. They tell me that members of Congress sit in wood-paneled rooms and argue that the industry needs to be legislated. Teens are being infected. America needs to protect her children.
I turn my blinker on and shift into the passing lane. Free will, agency, autonomy, freedom—these ideas float around in my head while senators argue that Facebook ravages our democracy because it allows fake news to spread, multiply, and ultimately gain power.
Recently, however, the conversation has shifted away from how these technologies affect our politics to, more importantly, our children. “What data do they collect on these kids? What do they know? How much money are they making off of them?” I heard Sen. Amy Klobuchar ask in an interview. She then shared an anecdote about a teenage boy who fell and broke his tooth. He used Snapchat to get painkillers, took one pill, and died. He thought he had bought prescription Percocet, but the pills were laced with fentanyl. “No parent should have to bury their kid,” Klobuchar said in her concluding remarks calling for Snapchat to be regulated.
Klobuchar didn’t mention America’s opioid crisis or the lapses in the American health care system that also contributed to this tragedy. I am not a staunch defender of Big Tech. I just think if we are going to critique coercive platforms, we should at least do so without coercive storytelling.
Recently my mom found out via Facebook that her childhood friend’s daughter died. The girl’s father—my mom’s friend—changed his Facebook profile picture to one of his late daughter, smiling, tanned, beautiful, only 19 years old. People commented on the photo that they were praying for him and the rest of his family. Some users included a 😢 emoji in their Facebook condolences, which felt cruel and inappropriate even though they were just well-meaning boomers bumbling on the internet. My mom reached out to their mutual friend, asking what happened. He responded, “Jenna killed herself.” (I’m using a pseudonym here.)
“It’s just horrible,” my mom said, tearing up.
“I mean. What makes a 19-year-old do something like that?” my dad asked in the kitchen.
No one answered. No one knows.
Later that night, I lay in bed and looked Jenna up on Instagram. Her account was public. I scrolled through the last pictures Jenna posted to the internet before she died. Her profile was carefully curated. It was clear she had worked tirelessly to cultivate a brand—each photo was edited with the same bright, iridescent filter, giving her grid an eerily cohesive aesthetic. This didn’t surprise me. Most Gen Zers I know have a distilled style on the internet. I zoomed in on Jenna’s face—her perfect skin, her plush lips stretched wide into a dazzling smile that exposed her pristine, straight, white teeth—and I tried to see anything in her eyes that would indicate she was unhappy. But I just saw what seemed to be a joyful college freshman.
Frances Haugen, the most recent Facebook whistleblower, testified before a Senate committee and brought forth thousands of pages of confidential documents that suggest Facebook is acutely aware that its platforms (which include Instagram) harm children and teens—people like Jenna. According to Haugen, Facebook purposefully ignores this information because, ultimately, it profits from this harm.
One of the internal studies that Haugen leaked specifically focused on teen girls. It suggested that teen girls experience an increase in suicidal thoughts after using Instagram. Other studies focused on Instagram’s detrimental effect on eating disorders and body image issues. Seventeen percent of teen girls said that their eating disorders worsened after Instagram use, and 32 percent reported that the app made them feel worse about their bodies. Again, I think of free will, agency, autonomy, freedom. I also think of Jenna. Did she feel free? When I looked at Jenna’s Instagram account, it was impossible not to notice how often she showed off her small waist, large breasts, and long limbs. I couldn’t help but think about how I have similarly put my body on display on the internet. This act of apparent confidence always comes with an emotional cost. Before posting, I overanalyze and scrutinize every inch of my body. I become riddled with insecurity and often wallow in moments of self-loathing. The leaked Facebook studies suggest that the portrayals of false perfection that saturate Instagram are psychologically harmful. However, for me, those harmful feelings—of worthlessness, body dysmorphia, loneliness, etc.—don’t seem to arise as much when I am consuming others’ content and their personal portrayals of false perfection as when I am trying to present the false perfection of myself. As I looked at Jenna’s page, I wondered if she felt the same way.
I follow several accounts on TikTok and Instagram dedicated to amplifying the idea that the internet isn’t real. One of the accounts recently exposed an editing software that allows creators to lengthen their legs, shrink their waists, smooth over textured skin, and re-sculpt their faces in not just photos but also videos. The creator posted two videos—one edited and one unedited—and both appeared totally normal and real even though her body in the edited video was entirely modified. I think that accounts like these are helpful. They remind me that the people and things I compare myself with on the internet are altered and sometimes completely imagined.
We often consume edited and fictional content unknowingly on social media. We forget that people can simply make things up and present them as truth. We—or I should probably just speak for myself here—I don’t browse these platforms expecting to be deceived, and yet I am deceived constantly. The imaginary and fantastical are presented as reality. This feels quite dangerous.
Social media has exposed the fragility of truth. Often the spread of misinformation on these platforms is discussed in the context of politics and a threat to our democracy. But the more time I spend on the apps as both a creator and consumer of content, the more the threat of misinformation feels personal—it threatens the way, consciously and subconsciously, I perceive myself. And the internal study Haugen exposed showed that I am not alone. But truth and fact are not the priority for social media platforms—because truth and fact are often not as interesting as fantastic fiction. It doesn’t make financial sense for the algorithms to promote what is most true; it makes sense for them to promote what is most entertaining. This would be entirely fine if the consumers of this entertainment didn’t so often use the platforms as viable sources of factual information.
There is a stark dissonance between what social media platforms provide and what the users of these platforms believe they receive. We, users, think we are getting truth when a piece of content is presented as such. When I see a fellow fitness influencer post a picture of their fit, slender body, I assume that’s what their body actually looks like. Unfortunately, this assumption is often inaccurate—images of bodies, especially of people with large followings, are frequently modified on these platforms, edited or airbrushed to perfection. But this is only the tip of the iceberg. That same influencer who flaunts their svelte body in photos—edited or not—may also preach methods (diet and exercise) by which they achieved this body that are vastly, or even subtly, untrue. That same influencer may claim that their diet and its aesthetic outcome improved their mental and physical well-being when it did not, and in fact may have been detrimental to both. Even something as simple as the influencer smiling while showcasing their body, suggesting that their body makes them happy, is potentially fraudulent messaging. On a platform where so much information is implied, the opportunities for subtle misinformation are endless.
What is clear is that this misinformation is harmful to users—particularly the most vulnerable ones, children and adolescents. The question we’re left with is, what could the platforms do to protect their users? And do they even have a responsibility to protect us? Unfortunately, it seems that upholding that responsibility comes in direct conflict with what generates growth and money for these companies. It is not in social media’s best interest to censor and to allow only that which is 100 percent true, because these platforms serve first and foremost to entertain. Entertainment retains users, increases watch time, generates revenue. Truth is not as powerful. Whether the creators of Instagram, Snapchat, Facebook, or TikTok intended for this dissonance—between what the platforms provide and what users believe they are getting—to exist is irrelevant. The fact is that it does exist—users are constantly being misled, deceived, and affected by misinformation. This, I believe, is the responsibility of the platforms. Wouldn’t it make sense for Instagram to develop a feature that, for example, fact-checks posts or scans for photoshopped bodies?
I write all of this with low-grade guilt. Of course, I understand the hypocrisy of my own words—that I am calling for the regulation of an industry from which I directly profit. Today I have two brand campaigns to complete. One is a TikTok video and one an Instagram picture. I am going to walk to the beach and take photos in bright, stretchy clothes and smile and twirl, and hopefully if I smile and twirl enough, people on the internet will tell me I have “good vibes.”
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.