Sheep have long had a fraught relationship with emerging technology. Theirs is a story most famously emblematized by Dolly, whose cloned origins inspired both celebration and handwringing in the mid-1990s. Soon, however, a new herd of Ovis aries may equal her in fame—not, perhaps, because they exhibit technology’s transformative potential, but because, in their own sheepish way, they dramatize its limitations.
The story starts at the University of Cambridge, where a team of researchers set out to determine whether sheep could recognize (and distinguish between) distinct human faces from photographs alone. As the scientists write in a paper published this week in the journal Royal Society Open Science, we have known for some time that these animals can identify members of their own flocks, and even humans with whom they are familiar. But could they maintain those skills when studying two-dimensional images?
To find out, the researchers led eight sheep through a battery of training tests. Each animal moved at its own pace around what the paper describes as a “maze.” (It is not a term that reflects well on ovine cognition: An accompanying diagram suggests that this “maze,” also described as a “one-way ambulatory circuit,” was really just a sort of loop with gates.) At the course’s midpoint, the presumably placid animals were confronted with two computer screens. In the initial phase of the experiment, one of these screens was blank while the other showed one of four celebrity faces (more on that in a moment).
If the test subject selected the screen with the celebrity, it would receive a food reward. After a few sessions, the sheep improved markedly at this task, selecting the human face almost 90 percent of the time. They showed similar progress in the second phase of the experiment, which tried to train the animals to select the already-familiar celebrity faces instead of images of objects such as a lantern and “an American football helmet.” After a few sessions, they had again improved considerably, going from an initial 56.6 percent celebrity selection rate to a more formidable 87.5 percent. The third stage was arguably the most impressive: Here, the sheep ultimately showed that they could distinguish, 79.3 percent of the time, between an already familiar celebrity face and a picture of a human they’d never seen before. While I don’t have the data to back it up, I suspect this is slightly better than my own hit rate when I’m flipping through a copy of Us Weekly.
At this point, you can imagine the researchers (who did not respond to an email I sent them) celebrating, but they weren’t done yet! Subsequent testing suggested that the sheep could identify photographs of their human keepers 71.8 percent of the time. Perhaps even more impressively, they could still select faces of the celebrities they’d gotten to know when they were shown from different angles. To be fair, their hit rate for these “tilted faces” wasn’t nearly as high as the other results (it averaged out to 68 percent), but it was still, the researchers claim, “significantly above chance.”
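For readers wondering what “significantly above chance” means in a two-screen setup like this one, it comes down to a simple statistical question: how likely is it that a coin-flipping sheep would score 68 percent or better by luck alone? Here is a minimal sketch of that reasoning using a one-sided exact binomial test; the trial count of 100 is a hypothetical round number for illustration, not a figure from the paper.

```python
# Hedged illustration: is 68% correct on a two-choice task "above chance"?
# The trial count (100) is HYPOTHETICAL; the paper's per-sheep trial
# numbers are not reproduced here.
from math import comb

def binomial_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= successes) if the animal
    were guessing at random with probability p_chance per trial."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical example: 68 correct picks out of 100 two-screen trials.
p = binomial_p_value(68, 100)
print(f"p = {p:.5f}")  # well below the conventional 0.05 threshold
```

Under these toy numbers, random guessing would produce a 68-or-better score far less than 5 percent of the time, which is roughly what a claim of statistical significance asserts; the researchers' actual analysis may differ in its details.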
Here, you likely have many questions, most of all, I’m sure, the one I asked myself as I read these findings: Which celebrity do sheep like best? Our ability to answer this query is, of course, limited by two factors: First, this isn’t what the study was actually examining, and it’s ludicrous of me to apply the anthropomorphizing verb like to these experimental animals. Second, the sample size of celebrities was even smaller than the herd of sheep. They were, in fact, only trained on four images: Barack Obama, Jake Gyllenhaal, Emma Watson, and Fiona Bruce.*
If that last name leaves you baffled, never fear—the sheep also didn’t seem to recognize her all that consistently. Across many of the training sessions, they typically selected her slightly less frequently than they did her more globally famous colleagues. (Bruce is, for the record, a British newsreader.) And though the other results varied from one session to the next, one celebrity seems to have caught the subjects’ eyes ever so slightly more often: Emma Watson.
It is at this juncture that things get really interesting—so long as you have a flexible definition of what counts as “interesting.” The images used in the study had been acquired through Google. But when I ran those same images of Emma Watson back through Google’s reverse image search, the vaunted engine struggled to identify them. Given a picture of Watson smirking directly at the camera, it “guessed” I had shown it an example of “eye color studio screenshot 2.” Good guess, Google! Similar searches on the two “tilted” photographs of Watson yielded less specific, but no less strange, replies: One, it proposed, was a photograph of “Christmas Day” while the other was supposedly just “lip.”
These results are, of course, inconclusive, not least of all because the images seem to have been altered slightly for standardization in the study. What’s more, I further altered their size when I extracted them from the paper, which may have confused things. Still, I’m heartened ever so slightly by the possibility that sheep (sheep!!) are better at consistently recognizing Emma Watson than this one public-facing product of (arguably) the most powerful company on the planet. Maybe there’s something special about Watson herself (her publicist was out of the office and did not respond to an email request for comment), or maybe, just maybe, we can chalk this one up as a triumph of biological intelligence.
This is not, I assume, the most important conclusion we should draw from this study. The paper’s authors suggest, among other things, that their findings might open new pathways in the study of neurodegenerative disorders. That’s an exciting possibility, and I don’t mean to minimize it. Nevertheless, in this digital age, we sheeple should celebrate our victories over our algorithmic rulers, however small they may be.
*Correction, Nov. 8, 2017: This post originally misspelled Barack Obama’s first name.