Ever wonder if Facebook is reading your posts? Well, it is—or its computers are, at least. And if you say the wrong thing, you could be locked up.
That’s the takeaway from a recent Reuters article, which recounted a case in which Facebook’s software detected a man in his thirties allegedly trying to set up a meeting with a 13-year-old Florida girl for sex. From Reuters:
Facebook’s extensive but little-discussed technology for scanning postings and chats for criminal activity automatically flagged the conversation for employees, who read it and quickly called police.
Officers took control of the teenager’s computer and arrested the man the next day, said Special Agent Supervisor Jeffrey Duncan of the Florida Department of Law Enforcement. The alleged predator has pleaded not guilty to multiple charges of soliciting a minor.
Facebook’s chief security officer told Reuters that the company’s monitoring software uses actual chats that led to sexual assaults to predict when another might occur. This is eerily similar to the hypothetical software I discussed in an article last month on whether police could arrest people based on suspicious-looking Google searches. I noted in the piece that while the idea might sound far-fetched, the technology already exists, and it might even be legal.
In Facebook’s case, the scanning hasn’t stirred outrage—probably because it seems to be focused on catching sexual predators. There are two reasons why online predators make sense as an initial target for automatic-monitoring algorithms. First, soliciting sex from a minor on the Internet is a crime in itself, not just a prelude to a crime (unlike, say, searching Google for ways to murder someone in their sleep). And second, sexual predators are unlikely to elicit much sympathy, so the public is more likely to tolerate intrusive means of nabbing them. Facebook is fighting creepy with creepy.
The key to the technology’s success—from a public-opinion standpoint, and possibly from a legal standpoint—is avoiding false positives. Arresting an innocent person based on a Facebook chat would surely cause controversy. So, according to the Reuters piece, Facebook dials down the algorithm’s sensitivity to minimize the chances of this happening.
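To see the tradeoff in miniature: reducing sensitivity amounts to raising the score a conversation must reach before anyone at the company looks at it. The sketch below is purely illustrative—the function, chat names, and risk scores are all invented, and nothing here reflects how Facebook's actual system works—but it shows why a higher threshold means fewer false alarms at the cost of missing more real cases.

```python
# Hypothetical sketch of threshold-based flagging. All names and scores
# are invented for illustration; this is not Facebook's actual system.

def flag_conversations(scored_chats, threshold):
    """Return only the chats whose risk score clears the threshold."""
    return [chat for chat, score in scored_chats if score >= threshold]

# Toy risk scores from an imagined classifier (0 = benign, 1 = certain match).
scored = [
    ("chat_a", 0.35),  # innocuous banter that superficially matches patterns
    ("chat_b", 0.62),  # ambiguous; a lenient threshold would flag it
    ("chat_c", 0.97),  # strongly resembles known predatory conversations
]

# A sensitive setting flags more chats, including likely false positives...
print(flag_conversations(scored, threshold=0.5))  # ['chat_b', 'chat_c']
# ...while a conservative setting surfaces only the clearest cases.
print(flag_conversations(scored, threshold=0.9))  # ['chat_c']
```

In this toy version, "dialing down sensitivity" is just moving the threshold from 0.5 to 0.9: the ambiguous chat no longer reaches human reviewers, which protects the innocent at the price of letting some genuine predators slip through.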
It seems clear that this technology has the potential to do some good. But that shouldn’t blind us to the fact that it represents a further erosion of our online privacy, one more serious than selling our personal information to advertisers.