Future Tense

How Can an A.I. Develop Taste?

A drawing of a boot-shaped mug with a separate mug can be seen on a napkin. Illustration by Natalie Matthews-Ramo.

Kate Compton, an expert in artificial intelligence, responds to Holli Mintzer’s “Legal Salvage.”

I’ve begun collecting vintage brooches. I started after reading a theory that Queen Elizabeth was communicating secret political shade through her choice of accessories. They also reminded me of my grandmother, a woman with that refined 1950s hostess style that I learned to associate with being an adult. I can wear one to feel like the sort of formidable grande dame that I imagine myself growing into as I age. Each one has a tarotlike network of personal meanings, based on age and style and provenance, allowing me to walk into a meeting with a stylish accessory that simultaneously acts as a secret declaration of my intentions. (I have several for friendship, one for lying, and one for ruthlessness.) My collection is also pleasurably tactile—I can open the treasure chest and watch my hoard glitter, or stroke the gems. And at $20 per vintage brooch, they are a cheap indulgence in a stressful time.

As humans, our possessions mean many different things to us. Their value may be practical. We need a blender to make smoothies and a bike to get to work on time. But many objects also have sentimental value and hook into the complex web of human emotions and relationships. We may have aspirational objects that tell us who we want to be (someone who goes camping more, exercises more, would wear those impractical shoes). We also keep nostalgic objects that remind us, through memory or our senses, of people or values that we want to remember. Sometimes our collections simply “spark joy” (in Marie Kondo’s words) in some unknowable way.

In “Legal Salvage,” we meet three collectors: Mika, Ash, and Roz. We also learn about people who abandoned power tools or neon signs or commemorative saltshakers in their storage lockers. We don’t know what these objects meant to the vanished collectors. Were these treasured keepsakes to be kept safe? An unwanted inheritance? The final resting place of a failed business or broken home? We know more about Mika, Ash, and Roz because they tell us what they like. Ash likes band shirts and records; Mika likes historical replicas and small precious things; Roz enjoys bright colors. They all want quick-selling objects to stock their online shops. And Roz is an A.I.

Well, Roz is an A.I., and also simultaneously a corporate email help desk, Mika’s friend, and a forklift, because like many of our real-world A.I.s, she’s a network made up of a few physical devices (phones and robots) and an unknown number of distributed digital programs. We can imagine that Mika and Ash get their personal tastes from their past experiences, and their professional taste from sales experience. But how does Roz know what she likes?

It’s not surprising that Roz could have opinions about what could sell in her shop: Sales optimization is one of the most lucrative and ubiquitous forms of A.I. in our modern lives. Facebook has trained a neural network to identify many kinds of used goods, and at Thredup, an A.I. sorts, tags, and prices used clothing to optimize profit. These systems can look at a photo of a shirt and calculate some useful facts about its style and color, even though they can’t physically sift through a pile of old clothes. (Manipulating fabric is still one of the hardest A.I. problems!) These systems are intended to be low-bias statistical processors for turning a grid of pixel data into an output, like “There is a 70 percent chance that is a green polo shirt.” But aside from the pixel data, they don’t know what “green” or “polo” is, or even what a “shirt” is for.
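That final step, turning a network’s raw scores into a statement like “There is a 70 percent chance that is a green polo shirt,” is typically done with a softmax function, which converts scores into probabilities. Here is a minimal sketch (the labels and scores are invented for illustration; a real system would produce its scores from pixel data):

```python
import math

def softmax(scores):
    """Turn raw network scores into a probability distribution over labels."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# To the machine, these labels are just indices; it has no idea what a
# "shirt" is for, only which pixel patterns tend to earn which label.
labels = ["green polo shirt", "blue jeans", "red dress"]
scores = [2.0, 0.8, 0.3]  # made-up outputs from some hypothetical image network
probs = softmax(scores)
```

The highest-scoring label wins most of the probability mass, which is all the system means by being “70 percent sure.”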

So what is taste and how might Roz (or any A.I.) develop “a personal style” without needing to be a person? A.I. practitioners have tried many approaches to making systems that “want” particular things, as long as we’re willing to measure “want” by what the system turns toward. This isn’t a wholly strange way to define the desires of an unknowable alien mind. I might say a plant “seeks” sunlight because it turns its leaves upward, or that water “wants” to flow downhill, or that my cat has a “favorite” toy. Our futuristic-sounding word cybernetics came from an ancient Greek word meaning “the person who steers a ship,” and it means just that: A cybernetic system is one that is “steered” by something. A 1984 thought experiment by Valentino Braitenberg proposed tiny physical robots that are steered by two photo sensors controlling two wheels, and no brain in between. (You can even build them!) There are only four ways to hook up such a simple configuration, and each has a different personality based on its relationship with light: “Love” would drive toward a light source and stay there forever, “Hate” would charge through the light into the darkness, “Fear” would steer away from any light, and “Curiosity” would approach a light only to veer off in search of new lights. It doesn’t take a mind for an A.I. to have preferences!
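Those four wirings are simple enough to sketch in code. The following toy model (my own illustration, not Braitenberg’s formulation; the sensor readings and speed scale are invented) maps two photo-sensor readings to two wheel speeds. A differential-drive vehicle always turns toward its slower wheel, so each wiring “steers” differently without anything resembling a brain:

```python
# Braitenberg's four wirings: each sensor reading (0..1, brighter means
# closer to the light) drives a wheel either directly (excitatory) or
# inverted (inhibitory), on the same side or crossed over.

MAX_SPEED = 1.0

WIRINGS = {
    "fear":      ("same",    "excite"),   # bright side speeds up -> steers away
    "hate":      ("crossed", "excite"),   # far side speeds up -> charges at the light
    "love":      ("same",    "inhibit"),  # slows and turns in as light brightens
    "curiosity": ("crossed", "inhibit"),  # approaches, then veers off
}

def wheel_speeds(left_sensor, right_sensor, wiring):
    """Map two photo-sensor readings to (left_wheel, right_wheel) speeds."""
    side, polarity = WIRINGS[wiring]
    if side == "same":
        left_in, right_in = left_sensor, right_sensor
    else:  # crossed connections swap the sensors
        left_in, right_in = right_sensor, left_sensor
    if polarity == "excite":
        return left_in, right_in
    return MAX_SPEED - left_in, MAX_SPEED - right_in
```

With a bright light off to the vehicle’s left, “fear” spins its left wheel faster and turns away, while “hate” and “love” turn toward it, “love” slowing to a stop as the light grows brighter.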

In 2001, Rob Saunders’ project “The Digital Clockwork Muse” created another set of A.I.s with preferences, in the form of a virtual community of simulated artists. Each tiny artistic agent had a different way to make tiny abstract pictures. Each agent also had a way to interpret art that it saw, a “self-organizing map” that could define its own categories of similar-seeming art. Each time an agent saw a new piece of art created by another agent, it would add the art to its map, cementing categories that it had seen many examples of and shifting categories as it encountered something new. This meant that each agent had its own personal calculation for “novelty” based on its own experiences, like we do. And like human art critics, they “liked” art that was a little bit novel but not so unfamiliar that they couldn’t fit it into their categorization at all. This principle is called the “hedonic curve”: We aren’t excited by things we’ve seen before, but we also don’t enjoy things that are so strange that we can’t relate them to our past experiences.
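The core of that idea, novelty as distance from the categories an agent already knows, with appeal peaking at moderate novelty, fits in a few lines. This is a heavily simplified toy (Saunders’ agents used a self-organizing map and a Wundt-style preference function; the Gaussian “sweet spot” and all numbers below are my assumptions) that treats each agent’s learned categories as prototype points:

```python
import math

def novelty(artwork, prototypes):
    """Novelty = distance from the nearest category the agent already knows."""
    return min(math.dist(artwork, p) for p in prototypes)

def appeal(artwork, prototypes, sweet_spot=0.5, width=0.25):
    """Hedonic curve: appeal peaks at moderate novelty and falls off for
    both the overly familiar and the incomprehensibly strange."""
    n = novelty(artwork, prototypes)
    return math.exp(-((n - sweet_spot) ** 2) / (2 * width ** 2))
```

Because each agent accumulates different prototypes from its own viewing history, two agents shown the same artwork can assign it different novelty, and therefore different appeal, which is what gives each one a personal taste.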

We develop sentimental relationships with the objects in our lives because they become entangled with our experiences, our relationships, our memories, and our values. The objects that were important in our past often establish our expectations and preferences for future objects. But an A.I. will have a very nonhuman view of the objects it encounters, and its “memory” of objects will be drastically different from ours. The A.I. of Google’s DeepDream project, when trained on nothing but dog pictures, hallucinated dogs onto a blank page. But it doesn’t know dogs as living, barking, three-dimensional animals—it knows them as statistically significant patterns of pixels, so its impressionistic rendering of “dog”-ness is a famously unsettling pattern of writhing noses and legs.

In “Legal Salvage,” Roz has worked alongside Mika and Ash for several weeks. Like our modern A.I.s, she has been given new training data each time they hold up a plate and say “midcentury Fiestaware, very collectible” or “look at this awful paisley tie,” so she may be creating new categorizations in her program for “ ’70s,” “retro,” “fine,” “collectible,” and even “what Mika likes.” Some of her objects might have sentimental connections to her experiences with Ash and Mika. But her experiences are not human experiences; she looks at objects very differently than the humans do, and she may have developed some very alien preferences based on her unique way of seeing the world through remote cameras and graspers.

Roz may have a complicated relationship with uniqueness and age, two things that often determine the value of vintage objects. In “The Work of Art in the Age of Mechanical Reproduction,” essayist Walter Benjamin looks around his world in 1935, with its mass-printed books, perfectly reproduced photographs, and printed copies of art, and wonders how, when the Mona Lisa can be simultaneously in a museum and on a million postcards, “art” can survive without “its unique existence at the place where it happens to be.” He calls this the “aura,” the specialness of an object that can’t be reproduced and is fundamentally linked with its physical form and its unique journey through time. The cheap print of the Mona Lisa lacks “aura.” That is, until one particular postcard from a memorable vacation becomes a part of someone’s life, and that card, unlike all the others, gains its own aura of uniqueness, nostalgia, and time. Reproduced objects start out with no aura but gain it as they interact with our messy emotional lives. Mika notices the absence of this when she looks at the 3D-printed trappings of her world: “reprocessed and printed, again and again with no memory of the forms they’d once taken.” These objects aren’t sentimental—they are too ephemeral and indistinguishable to have a relationship with the humans who own them or the world that produced them. Uniqueness, though, is in the eye of the beholder: Mika can hold an object in her hand and feel its uniqueness, but Roz’s perception of the world comes through photos, and to her, a photo of the object is the object, and not unique at all.

Roz herself might be as perfectly reproducible and reconfigurable as a filament-printed chair. A.I. systems are unlike any minds we are familiar with, in that they can be endlessly and uniformly reproduced. While Roz fears being deleted, she doesn’t mention that she could just as easily be copied into a million identical A.I. backups (and as corporate software, probably was). Like Benjamin faced with a Xerox machine, we don’t yet have stories about how to deal with a mind in an age of mechanical reproduction. Does Roz have “aura”? Is she unique and collectible, or a commodity? Like the ’70s-reproduction-of-Victorian-era-Roman-revival necklace that Mika finds, Roz is a machine reproduction of a human mind. But she is also part of humans’ lives, Ash’s and Mika’s and her future customers’, and so becomes something unique, a collectible sentimental object that collects sentimental objects.

This story and essay, and the accompanying art, are presented by AI Policy Futures, which investigates science fiction narratives for policy insights about artificial intelligence. AI Policy Futures is a joint project of the Center for Science and the Imagination at Arizona State University and the Open Technology Institute at New America, and is supported by the William and Flora Hewlett Foundation and Google.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.