Future Tense

If Social Media Can Be Unsafe for Kids, What Happens in VR?

Mark Zuckerberg recently predicted that Facebook will be a “metaverse company” in five years. What does that mean for kids?

A kid with long hair, open-mouthed, with their head in a VR headset. Getty Images Plus

After decades of unfulfilled promises, VR may finally be on the precipice of “going mainstream.”

Nearly $3 billion worth of virtual reality headsets was reportedly sold during COVID-19 lockdowns. Facebook put its Oculus Quest 2 on the market in October 2020 for $299, the same price as an Xbox Series S. Preorders for the Quest 2 were five times higher than for the 2019 Quest 1, and the headsets sold out at most large retailers before Christmas.

Estimates swing wildly, with reports of anywhere from 26 million to 171 million VR users worldwide today. Even at the low end, it’s obvious why, in June, Facebook Reality Labs announced that it had started testing ads in a multiplayer VR game.

Facebook CEO Mark Zuckerberg said in an interview a month later that he was surprised by how consumers were using VR: to socialize. He’d planned on Quest 2 “mostly being used for games,” he said, and had thought that “a lot of these social interactions … wouldn’t come until later.” He continued, “There are even experiences that I really hadn’t thought about,” boxing, or dancing, for instance, “just hanging out socially.”

“[O]ver the next five years or so,” Zuckerberg went on to predict, Facebook would grow into a “metaverse company,” controlling a global, immersive, shared 3D digital space, including integrated social VR applications.

Zuckerberg’s announcement elicited a flood of responses—including lots of mockery. Between October 2020 and this summer, Facebook had sold 4 million headsets. It had also recalled them, because, in the words of the U.S. Consumer Product Safety Commission, their “foam facial interfaces can cause facial skin irritation and reactions including rashes, swelling, burning, itching, hives, and bumps.” But from those who thought that his prediction might someday come to fruition, or who understood Facebook’s track record on safety and security, it also prompted questions, warnings, and reminders of the metaverse’s dystopic roots.

In mid-September, the Wall Street Journal published the first article in its explosive series “The Facebook Files,” reinforcing that concern and setting the stage for criticism rivaling the days of the Cambridge Analytica scandal. Soon after, Facebook announced that it was suspending plans for “Instagram Kids” and released a formal statement about its immersive future: “The metaverse won’t be built overnight by a single company,” it read. Rather, Facebook will “collaborate with policymakers, experts and industry partners to bring this to life.” Additionally, it said, Facebook is investing $50 million “in global research and program partners to ensure these products are developed responsibly.”

After Facebook went down for five hours last Monday and whistleblower Frances Haugen, a former Facebook product manager who said that the platform “chooses profits over safety,” particularly for children, testified in Congress on Tuesday, the New York Times’ Kevin Roose asked whether Facebook might, in fact, be dying. But he also ventured that “augmented and virtual reality products … could turn the tide” in Facebook’s favor. Less than a month before, Facebook had announced the release of its Ray-Ban Stories smart glasses.

While it’s unclear whether Zuckerberg’s five-year metaverse prediction will come true—or even whether Facebook might now be looking at a more existential threat—researchers I spoke with say now is a pivotal time for VR, especially when it comes to children. At the moment, most companies, including Facebook, still recommend (not require) that children under 13 not use their VR products. At the same time, VR adoption remains slow, relative to other, less cumbersome technologies. This means there is already a need, but also a window of opportunity to learn from, and to not repeat, the mistakes of the past: the viral hate, misogyny, and racism; harassment; security breaches; voter interference; disinformation; algorithmic biases; conspiracy theories; child exploitation. It’s all the more crucial because, as Mary Anne Franks, president of the Cyber Civil Rights Initiative, noted in her paper “The Desert of the Unreal: Inequality in Virtual and Augmented Reality,” research indicates abuse in VR is “far more traumatic than in other digital worlds.”

If researchers, practitioners, and policymakers are proactive and intentional, experts say, they can create product development standards and policies based on an evidence-based understanding of the risks of immersive media, before products and services are marketed and released. In other words, it might be possible to get things right this time.

To be clear, VR earns every awestruck accolade. In the words of Jakki Bailey, founder and director of the Immersive Human Development Lab at the University of Texas, Austin, “VR applications have transformed children’s education, reduced their physical and emotional pain during medical procedures, and reduced their anxiety.” There are life-changing applications for treating phobias, nightmares, and PTSD; in gaming and play, the fine arts, filmmaking, education, and religion; for meditation and building empathy, diversity, and inclusion. All of these put to good use VR’s power to blur the line between the real and unreal.

But that same quality is the source of concern. As Jaron Lanier, the “founding father” of VR, wrote in his book Dawn of the New Everything, “VR will test us. It could turn out to be the evilest invention of all time.”

Bailey has concerns about privacy and surveillance, because platforms collect vast biometric data, from eye movement to breathing rates, and about physical health risks, including seizures, long-term effects on vision, memory, and balance, and other symptoms of what researchers call “cybersickness.” Based on what she’s observed, Bailey’s most pressing concerns, however, are mental health, addiction, disinformation, and exposure to violence.

Current research consistently demonstrates that the emotional and physiological experience of VR is distinct and powerful. When VR applications are at their best, both narratively and technologically, some studies suggest that VR experiences register in our brains and bodies much as real-life experiences do, a phenomenon researchers call “presence.”

I’ve experienced it firsthand many times. During a pre-pandemic conference sponsored by Google and the Knight Foundation, for instance, I watched the VR film After Solitary, which depicts a prison guard extracting a traumatized inmate from his cell after years of solitary confinement. After I pulled off the headset, fumbling through waves of emotion, immersion journalist Nonny de la Peña, the “godmother of VR” and one of the film’s producers, said to me, “If you’re there, you connect. You feel physically vulnerable, so you connect in a way that no other medium affords.” Scientific research suggests this is true. When I met with Jie Yin and Nastaran Arfaei, then researchers at the Harvard School of Public Health, they were working on a study comparing experiences of nature in VR and real life. They found that participants in both settings experienced reduced blood pressure, improved short-term memory, decreased negative emotions, and increased positive emotions. I sat in a swivel chair, put on the headset, and found myself overlooking a sea sparkling with reflected sunshine, a vast meadow swaying in whispered breeze. Everywhere I looked, no matter how or where or why I turned, a world alive and warm enveloped me, a horizon shimmering in refracted sunlight so articulated I reached out to touch it, my eyes wet though I hadn’t had any conscious thought of being “moved.”

But there is another side to that feeling of presence.

Jessica Outlaw, an AR and VR culture and behavior researcher, described what abuse feels like in a social VR space. Outlaw, like Bailey, is largely optimistic about the future of VR and has spent thousands of hours in social VR spaces. But she has had disturbing experiences, too. One time, she took three punches to her avatar’s face before she could pull off her headset. Her body was safe in her office, but her mind and body registered the punches as real, she says, and she reflexively threw herself into a crouch. Should I run? she then thought, cognition kicking in. How do I use the controllers to do that?

“I wanted to be safe like the guys in the space were and not vulnerable,” she says, “but it was a really aggressive beating.” For weeks, she experienced what she called post-traumatic “intrusive thoughts.”

This summer, one 12-year-old gave me an illustrative example of the difference between experiencing something scary in 2D versus VR. She regularly watched 2D fan videos featuring the Slender Man, a fictional character, often depicted with tentacles, who dwells in dark forests and stalks and kills children, especially those most afraid of him, without consequence. But ever since she slipped on her 10-year-old friend’s headset on a dare to watch the Slender Man in VR, she has had what she called a “phobia”: a terrifying feeling of Slender Man popping up randomly and unbidden around her, particularly at night. She said, “It’s hard to sleep.”

Bailey and others in the field say we need a better understanding of the experience of “presence” in VR, for children especially, because their brains and bodies are still developing. And that it can’t come soon enough. Industry leaders, policymakers, researchers, and the public alike already are struggling to discourage, identify, moderate, and respond to harmful 2D online practices and content, to balance child safety measures against rights to privacy and free speech, and to understand the complex dynamics and root causes of spiking rates of child anxiety, depression, self-harm, suicide, and other mental health issues in the U.S. and globally.

Kavya Pearlman is a security professional who founded XR Safety Initiative, a not-for-profit with the aim of creating XR industry standards. (XR is a term used to capture the range of immersive technologies, including augmented reality, mixed reality, and VR.) She says, “The closest metaverse is all children’s stuff. The demographic that is most vulnerable is children.” Research suggests that children use VR “in very social ways,” and that once they become familiar with an environment, “they [want] to stay in the VR content for longer.” But risks remain largely unknown beyond 30-minute sessions or over the long term; it’s considered unethical to study extensive VR exposure in kids because the outcomes might be harmful.

Bailey is a global leader in VR research with children. For years, she’d worked with adults, studying VR’s impact on the mind-body connection, but she understood that adults were merely the test case, that VR would be quickly and easily adopted by children. So she teamed up with the Sesame Workshop to investigate what might happen. Her findings, using Grover as an avatar, suggest that young children are more likely to comply with commands from a VR character than from the same character in 2D. Her work could be easily trivialized—adorable kids! Child development and wellness!—but her observations are dead serious.

“In TV,” she said when we talked, “kids would stop and hesitate and look over at me and ask, what am I supposed to do? In VR, kids never asked. They weren’t torn. They just did whatever Grover told them to do. Kids in VR never looked around. They only looked at Grover. None of the kids asked, What should I be doing? Is it OK if I do this?”

VR holds unprecedented promise for equity, in education in particular, Bailey said, but “I think about ads a lot.” It can be hard, for instance, for young kids to distinguish commercial content for what it is when VR is integrated into experience. “Imagine stumbling across a hate group in VR as a kid. The question of inflicting violence in this platform is important. Will that be OK? Who’s going to be monitoring these things?”

A report released in May found that 45 percent of children 9 to 12 say they use Facebook every day. A 2020 Pew parent survey found that 9 percent of children 0 to 2 years old, 25 percent of children 3 to 4, 58 percent of children 5 to 8, and 68 percent of children 9 to 11 used gaming devices at least once a day. It is reasonable, sources say, to expect that children will use VR devices and platforms in comparable numbers. It is also reasonable for parents to worry that their children will encounter harmful VR content. Already, some parents, many reportedly furious about Facebook’s account requirement for VR use, are pleading for parental controls for Quest 2. In the words of one, writing on an Oculus chat: “Holy cow - no parental controls? Have u lost your mind? Please get that up and running. Tremendous device so much awesome potential but you cannot have kids in this without guardrails.”

Collaborative efforts between industry and civil society have created parental guidelines for VR, digital citizenship education, and surveillance countermeasures. But while critically important, these resources provide support for only some. Not all kids have adults in their lives who are able to teach best practices—research shows that socioeconomic and psychological factors are the biggest players in amplifying risk—and there is research suggesting that, while parental controls help, filtering software alone is not enough.

Nor, perhaps, is expecting children to navigate safety protocols on their own.

Clicking through to “Store Terms,” consumers find, in addition to a 5,000-plus-word “SUPPLEMENTAL OCULUS TERMS OF SERVICE,” a “LEGAL DOCUMENTS” page containing 23 additional links to other pages, among them Oculus’ “HEALTH AND SAFETY WARNINGS,” available in a dozen-plus languages.

The English PDF was 12 pages long the day I bought a Quest 2. “Virtual reality is immersive and can be intense,” it read. “Frightening, violent, or anxiety provoking content can cause your body to react as if it were real. Carefully choose your content and refer to provided content ratings.” The warning continued, “A comfortable virtual reality experience requires an unimpaired sense of motion and balance. Don’t play near stairs. Take a break every 30 minutes. Discontinue use immediately if you experience Seizures; Loss of awareness; Eye Strain; Eye or muscle twitching; Involuntary movements; Altered, blurred, or double vision or other visual abnormalities; Dizziness; Disorientation; Impaired balance; Impaired hand-eye coordination; Excessive sweating; Increased salivation; Nausea; Lightheadedness; Discomfort or pain in the head or eyes; Drowsiness; Fatigue; or Any symptoms similar to motion sickness.” In bold, “Just as with the symptoms people can experience after they disembark a cruise ship, symptoms of virtual reality exposure can persist and become apparent hours after use.” It also warns, “As it may increase [your] susceptibility to adverse symptoms,” you should not use the headset if you are experiencing any of the following: “Tiredness or exhaustion; Need sleep; Under the influence of alcohol or drugs; Hung-over; Have digestive problems; Under emotional stress or anxiety; or When suffering from cold, flu, headaches, migraines, or earaches.”

Oculus does offer comfort ratings, designed to ensure that users find content appropriate to their needs. The webpage explaining the ratings reads:

If you’re new to VR, we recommend starting with content that’s rated Comfortable before trying Moderate, Intense or Unrated. Comfortable experiences are appropriate for most people, although this rating doesn’t mean that an experience is going to be comfortable for everyone. … Moderate experiences are appropriate for many but certainly not everyone. … Intense experiences aren’t appropriate for most people, especially those who are new to VR. … Unrated experiences may contain intense content, which may not be right for most people, especially those who are new to VR.

Conversations about the risks of new technologies are often dismissed or ridiculed as “moral panic.” Sometimes that dismissal is warranted. The people I spoke with were quick to acknowledge the important and complex question of how to balance child protection with privacy and freedom of expression. They were quick to recognize the critical importance of children’s digital rights to privacy, agency, access. And they were careful to highlight the benefits of online engagement and free play—which should ideally, in the words of child safety expert Sonia Livingstone, be “open-ended, emotionally resonant, intrinsically motivated, voluntary, social, stimulating, imaginative, diverse.” The experts also acknowledged the long history of alarmist predictions of technology-fueled societal collapse, and that informed, empowered, and safe use often accompanies mainstream adoption of new digital technologies—something Livingstone has chronicled.

Researchers understand the relative scarcity of definitive research and consensus on VR violence, social compliance, or addiction. They are also deeply aware that sectors dedicated to child care, and the people who work in them, suffer from long-standing economic and cultural marginalization. This lack of evidence, visibility, status, and power might override and obfuscate reasonable questions and concerns, but it doesn’t eliminate or neutralize them. Research on risks to children should be an essential part of the design process but too often isn’t.

“A child rights impact assessment is an absolutely reasonable requirement to put on profitable companies,” said Livingstone. “It should be done well before you release a product, and it should be made public. It’s amazing when trust and safety teams say they don’t have enough expertise relating to child risks and rights.”

Companies cite high percentages of takedowns—that is, removing most content that violates guidelines—as evidence of their commitments to safety and security. How, if at all, will they recalibrate this measure of success to reflect the profound difference between the human experience of 2D and the human experience of immersive environments? What are the consequences of blurring children’s visions of reality when it comes to advertising, disinformation, recruitment campaigns, the latest self-harm challenges, or the perpetration of abuse?

Zuckerberg himself acknowledged potential VR-specific hazards ahead. “One of the big issues that I think people need to think through is right now there’s a pretty meaningful gender skew,” he said in his July interview, “at least in virtual reality, where there’s a lot more men than women. And in some cases that leads to harassment.”

While traditional gaming is fairly evenly split—an estimated 54 percent male, 46 percent female—an estimated 86 percent or more of VR owners are male. As a Black woman, Bailey says, “When I tell people I do this work, they’re very surprised.”

In July, on the question of how Facebook will moderate the metaverse, Zuckerberg described Facebook’s current staffing and practices, saying, “[T]hat kind of apparatus that we built up I think will carry naturally to all the work that we’ll do going forward.”

UCLA’s Sarah T. Roberts, author of Behind the Screen and a leading scholar in commercial content moderation, is wary. “We have evidence that people can’t behave in these spaces,” she said when we talked. “What assurance do we have that a more immersive space won’t exacerbate problems that have yet to be solved? Where are the mechanisms to press the brakes if need be? It’s not like these companies don’t have a track record. … [A] multidimensional space will present exponentially greater difficulty.”

Social VR is dynamic, instantaneous, simultaneously decontextualized and embodied. It is not static. It is not media. It explodes legal and social separations between identity and privacy.

But Zuckerberg turned to the concrete for comparison. “[I]t’s a little bit like fighting crime in a city,” he said. “The police department’s goal is not to make it so that if there’s any crime that happens, that you say that the police department is failing. That’s not reasonable. I think, instead, what we generally expect is that the integrity systems, the police departments, if you will, will do a good job of helping to deter and catch the bad thing … and keep it at a minimum, and keep driving the trend in a positive direction and be in front of other issues too. So we’re going to do that here.”

It’s a troubling analogy. Police officers operate under a vast battery of laws and are hired and fired and held accountable by governments and the public—a broken, fraught system that is disproportionately deadly and dangerous, for Black people especially. The comparison raises important technology-specific questions, says Roberts. “If moderators are now cops, are they going to have a union and be paid [commensurately]? Keeping the peace in a city is a serious job.” Roberts paused, then said, “I love tech and gadgetry as much as the next person. But there are some real costs to consider. We have a responsibility to step back and methodically and carefully evaluate both risk and reward.”

This is why Bailey is dedicating her research to understanding “how we can use VR to help build positive, healthy social connections” and to developing “child-centric VR environments that create safe spaces for children.”

No one can say for sure whether VR has finally gone mainstream, but, as evidenced by Frances Haugen’s congressional testimony and coverage of the documents she leaked, conversations about balancing safety and security with privacy and freedom of expression certainly have.

Increasingly, policymakers and the public are asking why Silicon Valley’s biggest companies are not held to the same federal risk-assessment protocols as other industry sectors, especially as their lobbying dollars match and even exceed those of oil, banking, and pharmaceuticals. In July, Rep. Kathy Castor, a Democrat from Florida, reintroduced the Kids PRIVCY Act, an update to the Children’s Online Privacy Protection Act, which Congress passed in 1998. The U.K. recently enacted a design code that requires companies to comply with a set of standards to safeguard the privacy of potential underage users. In late August, two days before the U.K.’s design code went into effect, Instagram started requiring users to enter their birthdate. Six days before that, Facebook Reality Labs announced a round of AR/VR Responsible Innovation proposals, funding research on privacy, accessibility, and ethics, among other areas.

Kids, growing and changing, are a significant moving target for researchers, and industry’s practice of keeping its own research and findings largely under wraps also complicates the picture. But sources are cautiously optimistic that there is more research to come. The U.S. National Institutes of Health has dedicated $300 million to study some 11,000 9- and 10-year-olds, to investigate whether media use, including VR, influences brain and cognitive development. A bipartisan group in the Senate recently reintroduced the Children and Media Research Advancement Act, which, if passed, would add NIH funds to study infants, children, and adolescents “on the cognitive, physical, and socio-emotional development effects of their exposure to and use of media.”

Maybe VR is always just around the bend. And maybe not. Maybe asking, “When will VR arrive?” is no longer enough—because VR already has arrived for some. Whether that’s thousands, hundreds, or only a couple dozen kids playing and creating in VR, says Bailey, “I want to make sure I’m doing right by families and children.” For Bailey, Pearlman, Roberts, Outlaw, Livingstone, and many others around the world, there is no better time than the present to make sure emerging VR technologies are as safe, responsive, inclusive, and human-centered as possible.

Bailey knows that children’s identities, histories, homes, use patterns, and other factors all contribute to the risk of online harm, and that LGBTQ children and people of color, particularly Black women and girls, are disproportionately at risk. That is so even as Black women have been on the leading edge of reporting and documenting bias, abuse, and long-standing failures of safety and security practices, a history chronicled by writer and cultural critic Sydette Harry in her important 2021 Wired essay, “Listening to Black Women.”

That’s why she is increasingly conducting her research in local schools and communities. Bailey’s goal is “to find underrepresented communities and understand how they might use this tech,” and to create a better understanding of how children distinguish real from unreal and why they are more likely to comply with certain types of characters than others.

In some of her recent research, conducted with an inclusive group of children ages 5 to 9 from central Texas and titled “I’m in His Belly” after the words of one child, she found that “despite being told that the characters were virtual, the two most common spontaneous behaviors children engaged in were attempting to touch the characters and trying to look inside them.”

When one child discovered he couldn’t see his own body, he stopped dead in his tracks. “I’m a ghost!” he shouted. “I can’t see myself!”

Current Common Sense Media parent guidelines recommend individual VR sessions no longer than 20 minutes. Bailey, who knows as much as or more than anybody in the world right now about VR and kids, caps her sessions at 10 minutes.

Even before the pandemic, Bailey was seeing more and more kids coming into her lab saying, “Oh, I’ve used this before!” And also, “I could do this forever.”

“They’re the ones who are most vulnerable,” she says, “who will spin around and crash and beg for more time. Those are the ones you have to watch.”

Update, Oct. 11, 2021: This piece was updated to clarify Sarah T. Roberts’ response to Mark Zuckerberg comparing content moderators to police officers.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.