Future Tense

Facebook Plans to Use A.I. to Start Fact-Checking Photos and Videos

Facebook CEO Mark Zuckerberg speaks during a press conference in Paris.
BERTRAND GUAY/Getty Images

In an ongoing effort to fight misinformation and election meddling, Facebook is expanding its fact-checking program to include photos and videos. All 23 of its third-party fact-checking partners across 17 countries are on board.

In a press release on Thursday, Facebook said that most of its previous efforts had been focused on reviewing articles. But as the saying goes, a picture is worth a thousand words—especially if it’s deceptive.

“We know that this kind of sharing is particularly compelling because it’s visual,” Facebook product manager Antonia Woodford said in the press release. “That said, it also creates an easy opportunity for manipulation by bad actors.”

So how does it all work? First, Facebook’s machine learning model aggregates feedback from users to identify potentially false content. Then those photos and videos are sent to fact-checkers for review. False photos and videos fall into three categories: 1) manipulated or fabricated (like deepfakes), 2) out of context, and 3) text or audio claim, meaning false information in the form of text overlaid on a photo or words spoken in a video.
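Facebook hasn’t published any implementation details, but the triage flow it describes can be sketched in a few lines of Python. Everything here, from the class names to the scoring signal and the threshold, is a hypothetical illustration rather than Facebook’s actual system:

```python
from dataclasses import dataclass
from enum import Enum

class FalseContentCategory(Enum):
    MANIPULATED_OR_FABRICATED = "manipulated or fabricated"  # e.g., deepfakes
    OUT_OF_CONTEXT = "out of context"
    TEXT_OR_AUDIO_CLAIM = "text or audio claim"

@dataclass
class Post:
    post_id: str
    media_url: str
    false_reports: int  # users who flagged the post as false news
    views: int

def misinformation_score(post: Post) -> float:
    """Stand-in for the learned model: the rate of user reports per view,
    one plausible signal a real model might aggregate."""
    return post.false_reports / post.views if post.views else 0.0

def triage(posts: list[Post], threshold: float = 0.01) -> list[Post]:
    """Flag likely-false photos and videos for the human review queue,
    where fact-checkers assign one FalseContentCategory to each."""
    return [p for p in posts if misinformation_score(p) >= threshold]
```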

Facebook’s press release illustrates each category: for “manipulated or fabricated,” a doctored image from the Mexican presidential election; for “out of context,” a false meme claiming that a young girl from Syria is a crisis actor; and for “text or audio claim,” a meme caption alleging corruption by India’s president.
Examples of the three categories of false videos and images
Facebook

Once a photo or video has been sent to a human, fact-checkers evaluate it with reverse image search, which can reveal its source, where else it’s being reused, and whether the original has been altered. Fact-checkers can also analyze image metadata, which can show when and where the photo or video was taken.
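Facebook hasn’t said which tools its fact-checking partners use, but the metadata step is easy to illustrate. Here’s a minimal sketch using the open-source Pillow library (the filename is a placeholder):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return human-readable EXIF tags (capture time, camera model, etc.).
    Many re-encoded or screenshotted images carry no EXIF data at all,
    which is itself a useful signal to a fact-checker."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

print(read_exif("suspect_photo.jpg"))  # e.g., {'DateTime': '2018:09:13 10:42:07', ...}
```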

In many cases, the text that accompanies images and videos is as important as the images themselves. Facebook says it’s developing optical character recognition to compare text extracted from photos and videos with headlines from fact-checked articles. That way, fact-checkers can identify text that contains hate speech or false information.
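Facebook’s own OCR system isn’t public, but the idea can be approximated with the open-source Tesseract engine and Python’s standard library; the matching threshold below is an assumption for illustration:

```python
import difflib
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

def extract_text(path: str) -> str:
    """Pull any text rendered inside an image (meme captions, chyrons)."""
    return pytesseract.image_to_string(Image.open(path)).strip()

def match_fact_checked_headlines(image_text: str, headlines: list[str]) -> list[str]:
    """Return already-fact-checked headlines that resemble the image's text."""
    return difflib.get_close_matches(image_text, headlines, n=3, cutoff=0.6)
```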

Once identified as false, flagged content will show up less frequently in Facebook’s News Feed, and a fact check will be added in the Related Articles section to further contextualize the content. The machine learning model’s accuracy will improve over time as fact-checkers rate the photos and videos the system flags.
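The press release doesn’t explain how those ratings feed back into the model, but a standard approach is to treat fact-checker verdicts as training labels. A toy sketch with scikit-learn, where the features are entirely hypothetical examples of usable signals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per reviewed post; the features (report rate, shares per hour,
# poster account age in days) are hypothetical, not Facebook's.
X = np.array([[0.020, 5.1, 12],
              [0.001, 0.3, 900],
              [0.015, 4.0, 30]])
y = np.array([1, 0, 1])  # 1 = fact-checkers rated it false, 0 = rated true

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.018, 4.5, 20]])[0, 1])  # estimated P(false)
```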

However, some have recently grown wary of who’s behind the fact-checking. On Tuesday, the left-leaning news site ThinkProgress argued that one of its articles was being wrongfully censored on Facebook by the Weekly Standard, one of Facebook’s conservative fact-checking partners, after a dispute over the headline “Brett Kavanaugh said he would kill Roe v. Wade last week and almost no one noticed.” In the story, writer Ian Millhiser argues that Supreme Court nominee Kavanaugh’s testimony, combined with his previous speeches, implies he would vote to overturn Roe v. Wade. In a literal sense, Kavanaugh never said that. The Weekly Standard flagged the article as false, significantly reducing its reach on Facebook.

Whether the Weekly Standard was right or wrong, the example hints at what kind of problems Facebook, which regularly angers people on both the left and the right, may encounter as it puts greater power in the hands of its fact-checkers.