Fake news is getting the AI treatment.
According to a Reuters report, Facebook Inc. (FB) intends to use artificial intelligence to tackle the fake news problem on its network. The Menlo Park-based company’s director of applied machine learning told reporters that its algorithm is designed to detect content that violates the company’s guidelines — for example, violence and nudity in video streams. Facebook already used the technology earlier this year to weed out violent content and stem the flow of extremist videos. It is currently being tested on Facebook Live, the company’s platform that lets users stream video content. (See also: Fake News A Real Problem Among Teens).
Facebook has come under fire from media organizations that blame the social network for spreading fake news — stories with fudged facts and quotes — that supposedly helped elect President-elect Donald Trump. An analytics firm reported that over 70% of desktop traffic to fake news sites was generated by Facebook referrals, while established publications, such as the New York Times, received only 30% of their traffic from the network. In sum, the social network was responsible for 50% of total hits to fake news websites; the corresponding figure for established publications was 20%. (See also: Fake News Ban Won't Hurt Facebook's Financials).
The company has already unveiled a series of initiatives aimed at tackling the problem. Along with Alphabet Inc. subsidiary Google (GOOG), Facebook has vowed to stop the flow of advertising dollars to fake news sites. A post on tech publication TechCrunch last month claimed that the company had tested two options for dealing with the content problem back in May. The first, a hoax detector based on user reports, involved human monitoring; the second relied on artificial intelligence. Given the size and complexity of the social network, the company opted for an algorithmic approach to detecting content that does not meet its standards.
The discussion about fake news within the company has also taken on a philosophical bent. At the reporter meeting yesterday, Yann LeCun, Facebook’s chief research scientist, questioned whether it made sense to deploy the technology at all, since filtering content could hamper the user experience by limiting free expression on the network. CEO Mark Zuckerberg has also weighed in, saying that Facebook should be cautious “about becoming arbiters of truth ourselves.”