This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Thursday, Sept. 28, at 9 a.m., Future Tense will hold an event in Washington, D.C., on mental health and technology. For more information and to RSVP, visit the New America website.
Ever since Melania Trump announced her platform, the first lady's campaign to combat cyberbullying has been greeted with unsurprising howls of derision, largely thanks to the Twitter antics of her husband.
But even if she weren’t married to the cyberbully in chief, her plans would have been met with extremely predictable reactions. Some people would have cheered an effort to stop technology from harming children. Others would have rolled their eyes, minimizing both the problem of cyberbullying and efforts to fight it.
This sort of split in the debate runs right through mental health tech, by which I mean the overlap between technology (smartphones and social media, mostly) and all aspects of mental health. It's a growing area of research, one rife with contradictory arguments and studies. The biggest split is between those who say technology (particularly social media) is doing irreparable damage to mental health, and those who say Big Data could help us fix psychological problems.
The examples abound. There is plenty of research linking disordered eating with social media usage. But there is also work that mines social media content and attempts to "quantify and predict" users' experiences of eating disorders. There's research linking anxiety with social media usage. But there's also research trying to predict anxiety from Facebook profiles. Then there's depression. Research shows social media use is significantly associated with increased depression. But there are also research teams using social media content to predict when depression will occur.
If I wear a fitness tracker, can its data indicate when I become ill naturally? Quite probably—it might show disturbed sleeping patterns or changes in heart rate. But if that device itself somehow actually makes me ill, what use does it have as a tracker? Very little. Though if the device did make me ill, then surely I’d stop using it? It can’t be both, can it?
What we are dealing with here is something like the infamous snark/smarm debate. It’s also a little bit like C.P. Snow’s The Two Cultures, except it’s not so much arts versus science but something like social science versus data science. I should note that Snow said that dividing things into two should be regarded with suspicion. So what follows is, as Snow would put it, “a little more than a dashing metaphor, a good deal less than a cultural map.”
So, to try to see the wood for the trees in mental health tech, let me introduce you to the Lovejoys and the Frinks. (If you haven't watched The Simpsons, a good deal of what follows may be lost on you. Sorry!)
Helen Lovejoy is the judgmental, moralistic, gossipy wife of the Rev. Lovejoy, famous for her catchphrase, “Won’t somebody please think of the children!?” Professor Frink is the condescending nerdy scientist whose bizarre inventions try to help solve problems, but usually only make things worse.
I see echoes of both of these characters throughout public and academic discourse on mental health and technology. Not everyone in mental health tech is a Frink or a Lovejoy, of course. Sometimes people swap roles. These are just the features of the discourse, such as it pitifully is, and it's useful to consider them.
Lovejoys say things like “let kids be kids” and “just turn the darn thing off.” Lovejoys love those Minions memes you see on Facebook that say, “in my day we played outside.” Lovejoys write headlines like “smartphones are destroying a generation” and “social media is making millennials narcissistic.” Lovejoys do digital detoxes, live in the moment, and never take videos at concerts. Lovejoys are outraged at how technology is destroying human psychology, and they are determined to draw more attention to that outrage.
But Lovejoys don’t have much of a plan to fix any of these alleged problems. The Lovejoy goes on television to say that parents are worried about the effects of technology on their children, and then proceeds to worry them some more. Helen Lovejoy wants you to buy her book.
On the other hand, we have the Frinks. It’s not that Frinks don’t care about mental health issues, it’s just that they think they can fix them with technology. Frinks feel that Lovejoys want to take away their freedoms/toys. Frinks say don’t worry about Big Brother: They say just “use Signal, use Tor.” Frinks say “the problem lies between the keyboard and the chair.” Frinks whine about Facebook on Twitter. Frinks are tech evangelists and Silicon Valley cheerleaders.
In the context of mental health tech, Frinks say that “kids will be kids.” Frinks often pontificate that historically there is always a “moral panic” over new technology and that things will “normalize” over time. Frinks say we shouldn’t be so stupid as to fall for clickbait and sensationalism.
But at least Frinks have solutions. Frink says that whatever the issue is, it can be modeled and fixed with machine learning. Frink loves algorithms, big data, and intelligent bots. Frink embodies dataism, scientism, and positivism. Frink has an app for that.
The thing is, Lovejoy is a bullshitter. But so is Frink. As Harry Frankfurt said, "bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about." In the context of mental health and technology, the established knowledge base is so shallow that no one really knows what they are talking about.
The Lovejoy thesis says that technology is having a negative effect on our mental health. This, of course, raises the obvious question of why we continue to use such technologies. The Frink thesis is that users' behavioral patterns on technology platforms (likes, metadata, etc.) accurately reflect their psychology and thus could perhaps be used to treat mental health problems. This is, of course, challenged by the fact that many technology platforms exist solely to induce users to engage in new behaviors (such as clicking on ads).
Note that these theses are more than contradictory: They reflect totally different worldviews. And recognizing how much research in each camp simply talks past the other will, I hope, help us reconcile them, somehow.
Because far too much of this is correlational research, and there are so many other factors involved, too. Sometimes social media usage decreases depression. Some research says that cyberbullying is extremely rare. And researchers have concluded that they do not yet understand whether anxiety causes more social media usage or the other way around. Moreover, the data-mining research seldom shows any appreciation of the studies linking technology to the very mental health problems it is trying to model. To paraphrase Homer Simpson: "Here's to technology: the cause of, and solution to, all of our mental health problems."
And there are many more issues that are even harder to define. Look at internet addiction, which features prominently in the most recent issue of the research journal Addiction. The debate surrounding internet addiction is practically as old as the World Wide Web itself, beginning 21 years ago. And don't get me started on screen time. Even "mental health" is so loaded with preconceived meanings that I wince every time I type it. And of course, there are myriad privacy and surveillance concerns around social media data-mining research.
Research in this area might not be actually cartoonish, but it is in a poor state. We have far too many correlational methods paired with causal headlines. We need social media companies to stop throttling access to data. We need publishers to move to registered reports and open data. We need more interdisciplinary, longitudinal, controlled studies. We need more qualitative and critical studies. We need to stop demanding simple explanations of complex issues. We need broadcasters to invest in smarter science communications. We need less Lovejoy and less Frink.