There is a wave of concern—completely justified, to my mind—over the privacy implications of our increasing reliance on Facebook and Google. What most people don’t realize, however, is that these issues are dwarfed by the potential for privacy invasion that’s presented by a seemingly innocuous platform: video games.
Consider the Kinect, Microsoft's motion-sensing add-on for the Xbox 360, which sold 8 million units in its first 60 days on the market. This inexpensive, book-sized sensor bar can create a realistic virtual likeness of the player, and in doing so offers a delightful way to play games: instead of hitting a button to kick a ball, you swing your own leg, and the character on screen mimics your movement. How does the Kinect produce this dazzling, immersive experience? By capturing every move you make.
People choose to post personal information on Facebook and Google. Game platforms like the Kinect, by contrast, continuously observe your nonverbal behavior. Movements and gestures may seem harmless to share with others, but decades of psychological research demonstrate that the way you move is more revealing than what you say.
More than 40 years ago, UCLA professor Albert Mehrabian demonstrated that nonverbal behavior constitutes a majority of face-to-face communication. More importantly, nonverbal behavior is automatic. Though we can all watch what we say, very few of us can consistently regulate our subtle movements and gestures.
Though it’s designed for gaming, the Kinect can be modified to track other behaviors as well. As such, scientists throughout the world, including my team at Stanford’s Virtual Human Interaction Lab, are starting to study what the Kinect and other gaming systems reveal about you. I’ve been studying digital footprints—the behavioral residues left behind in video games and other virtual worlds—for almost 10 years. During that time I have collaborated with Fortune 500 executives, military officials, and educators. Our ability to automatically classify someone’s nonverbal cues—the essence of their identity, psychological state, and behavior—could prove extremely beneficial. It’s also not to be taken lightly.
In my lab, a team working with Konica Minolta has developed a system that detects learning automatically. In one study we used the Kinect to gather nonverbal data during one-on-one, student-teacher interactions, then used that data to predict the students' test scores. The early results are encouraging, though preliminary: around 10 movements (those of the shoulder and elbow, for instance) proved the most predictive. What makes this type of experiment so powerful is the "bottom up" nature of the research. Instead of looking for specific known gestures, like nodding or pointing, we can mathematically uncover subtle movement patterns, many of which the human eye would never notice. Just imagine if teachers, based on a small sample of their students' nonverbal behavior, could instantly see which students needed extra attention or specialized assignments.
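To make the "bottom up" idea concrete, here is a minimal sketch with entirely synthetic data: rather than searching for named gestures, it ranks every tracked joint by how strongly a simple movement statistic correlates with an outcome. The joint list, the per-student motion numbers, and the score formula are all invented for illustration; our actual models are far more elaborate.

```python
# Minimal "bottom-up" feature discovery on synthetic data (illustration only):
# rank joints by how well their movement statistics predict a test score.
import random
import statistics

random.seed(0)

JOINTS = ["shoulder", "elbow", "wrist", "head", "knee"]
N_STUDENTS = 50

# Synthetic data: one movement-variability number per joint per student,
# plus a test score that (by construction) depends on shoulder and elbow.
students = []
for _ in range(N_STUDENTS):
    motion = {j: random.random() for j in JOINTS}
    score = 60 + 20 * motion["shoulder"] + 15 * motion["elbow"] + random.gauss(0, 2)
    students.append((motion, score))

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (sx * sy)

scores = [s for _, s in students]
ranked = sorted(
    JOINTS,
    key=lambda j: abs(pearson([m[j] for m, _ in students], scores)),
    reverse=True,
)
print(ranked[:2])  # with this synthetic data, shoulder and elbow rank on top
```

No gesture is named in advance; the data itself surfaces which body parts carry the signal, which is the essence of the bottom-up approach.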
Scientists at the University of Southern California have already used devices much simpler than the Kinect to assess classroom behavior. By looking at nothing more than the direction a child’s head was pointed—a snap for a system like the Kinect—Skip Rizzo and his colleagues were able to detect hyperactivity disorders like ADHD. A system tracking nonverbal behavior in real time could use the same signal to flag kids in school automatically.
Other scientists have developed applications for detecting behavior in the home. Jaeyong Sung and his colleagues at Cornell University are looking at whether a Kinect could assist people by recognizing their activities—automatically detecting, for example, whether someone is brushing her teeth, cooking, or opening a pill container. The models were highly accurate (84 percent) at categorizing the behavior of people the system had seen before. Even more impressive was how they handled strangers: when someone the system had never observed walked past the video-game console, it still recognized that person's activity 64 percent of the time.
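The flavor of this kind of activity recognition can be shown with a toy nearest-neighbor classifier. This is only an illustration; the Cornell team's published model is far richer and trained on real skeletal data, and the three feature numbers below are invented.

```python
# Toy activity recognition (illustration only): classify a clip by comparing
# its pose-derived feature vector to labeled examples, nearest neighbor wins.
import math

# Hypothetical features per clip: (hand height, hand-to-head distance, torso lean)
TRAINING = [
    ((0.9, 0.1, 0.0), "brushing teeth"),
    ((0.5, 0.6, 0.3), "cooking"),
    ((0.7, 0.4, 0.1), "opening pill container"),
]

def classify(features):
    """Return the label of the training example closest in feature space."""
    return min(TRAINING, key=lambda ex: math.dist(ex[0], features))[1]

print(classify((0.88, 0.15, 0.05)))  # lands nearest the "brushing teeth" example
```

Because the classifier matches on body configuration rather than on identity, it can make a reasonable guess even about a person it has never seen, which is why the system still worked, at reduced accuracy, on strangers.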
And over the past eight years, in experiments my colleagues and I conducted for car companies like Toyota and Nissan, we used tracking software to detect hundreds of subjects’ facial movements as they cruised through virtual streets in a simulator. We then built mathematical formulas to predict what we called “the pre-accident face”—the nonverbal pattern that occurs seconds before the driver exhibits bad behavior, including swerving, lane violations, and even collisions. The two most predictive facial features for major accidents: the center of the lower lip and the center of the upper lip.
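The logic of a "pre-accident" signal can be sketched as a sliding window over one facial measurement: when short-term variability in the signal spikes, flag the frame. Everything here (the lip-gap trace, the window size, the threshold) is invented for illustration; the actual research fit statistical models to hundreds of subjects' real simulator data.

```python
# Toy "pre-accident" detector (illustration only): flag frames where a facial
# signal becomes unusually variable over a short window of recent frames.
def warning_frames(lip_gap, window=3, threshold=0.1):
    """Flag each frame whose preceding window shows a spike in variability."""
    flags = []
    for i in range(window, len(lip_gap)):
        recent = lip_gap[i - window:i]
        spread = max(recent) - min(recent)
        flags.append(spread > threshold)
    return flags

# Synthetic trace: calm driving, then sudden lip movement before a swerve.
trace = [0.30, 0.31, 0.30, 0.31, 0.30, 0.45, 0.28, 0.50]
print(warning_frames(trace))  # frames flagged just before the simulated swerve
```

An in-car system built on this principle would raise a warning a few seconds early, in time for the driver, or the car, to react.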
The automobile companies funded this research in the hopes of incorporating in-car solutions that can prevent crashes. For all the potential benefits of this research, though, it would also be possible to use it in more controversial ways. Just imagine the debate that would ensue if insurance companies were granted access to our nonverbal driving histories, allowing them to charge higher rates to drivers who made telltale expressions. Also consider a project we worked on at the behest of Kyoto-based factory automation company OMRON. With a single camera, we demonstrated that some workplace mistakes can be detected before they occur simply by examining the workers’ facial movements. Again, while tracking nonverbal behavior may help prevent accidents, it could also clue in employers about personal habits that workers might not want to share.
I believe that we’ll see many wonderful applications of this technology, ranging from safety systems to educational tools for struggling students. At the same time, gamers need to be informed that they can be watched, and that how they interact with a game system like the Kinect can reveal a lot about them. As technology becomes more immersive, your video-game persona is not just a character. It’s you.