Facebook is developing its own deepfake videos in order to better train its AI to accurately identify and remove misinformation, per MIT Technology Review.

Business Insider Intelligence

The world’s largest social network is apparently worried that such videos, like the infamous viral Nancy Pelosi deepfake, could have “catastrophic consequences” in the upcoming US elections given their ability to convincingly spread misinformation. Enhancing its ability to combat disinformation is particularly important for the social giant ahead of the 2020 election as it looks to avoid a repeat of its reputation-tarnishing 2016 election-meddling scandal.

This effort to combat deepfakes is evidence that Facebook is developing more sophisticated methods to detect and remove disinformation as such content has itself become more sophisticated. The videos represent an ideal way to spread a false message, because when they’re done well they’re extremely convincing: Kim Kardashian had to make a copyright claim to get her deepfake taken down after fans were convinced she was promoting a “shadowy organization” called Spectre, per Digital Trends.
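The core idea behind Facebook's approach — generate your own fakes, label them, and use them as training data for a detector — boils down to supervised binary classification. The sketch below is purely illustrative: the "feature vectors" standing in for videos, the Gaussian clusters, and the logistic-regression detector are all hypothetical simplifications, not Facebook's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each clip is reduced to a small feature vector
# (e.g. compression-artifact statistics). "Real" and "fake" clips are
# simulated as two overlapping Gaussian clusters -- toy data only.
real = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
fake = rng.normal(loc=1.5, scale=1.0, size=(200, 5))

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(200), np.ones(200)])  # 0 = real, 1 = fake

# Logistic regression fit by gradient descent: the same
# "label your own fakes, then train a classifier" loop, in miniature.
w = np.zeros(5)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(fake)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A production detector would replace the toy features with a deep network over video frames, but the training loop — self-generated labeled fakes in, classifier out — follows the same shape.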

There are arguments on both sides of the debate about the severity of the problem, with some from intelligence backgrounds pushing for a ban and others arguing that the threat is overblown. But platforms like Facebook have plenty to lose by being caught off-guard should the threat prove real, making preventive action a wise course regardless.

The company has also been testing other new, albeit less technical, methods of preventing election misinformation worldwide. For instance, Facebook tested “war rooms” — designated teams to monitor the platform in real time — ahead of and during elections in the EU, Brazil, India, and across Africa.

But recent reporting has suggested Facebook’s AI tools are more limited than previously understood, raising questions about whether its tech can handle increasingly sophisticated threats like deepfakes. A few weeks ago, Facebook was exposed for having less advanced AI tech than it had marketed to users: It turned out that the “technology” transcribing users’ messages actually relied heavily on human reviewers, a fact never explicitly stated in its privacy policy.

While the EU is investigating whether the scandal violates GDPR, my main takeaway was that Facebook’s machine learning lags behind what it would have the public believe. This isn’t a completely new revelation: Others have reported on the shortcomings of the company’s highly touted AI when it comes to removing ordinary harmful posts that are not even sophisticated in the ways a deepfake is.

Reports of this nature expose a concerning disconnect between Facebook’s messaging to the public and its true AI capabilities and suggest that new developments — like its deepfake initiative — should be met with skepticism by users, brands, and regulators.

