Deep Fakes: A Tough Nut to Crack

February 8, 2019

If you are in the media or intelligence business, you undoubtedly already know about the potential of deep fakes, or “deepfake” videos: clips that use AI technology to create realistic but completely fake footage from existing video. The catch is that they are getting more and more convincing…and that’s not good, as we discovered in a recent Phys.org article, “Misinformation Woes Could Multiply with Deepfake Videos.”

According to the story:

“As the technology advances, worries are growing about how deepfakes can be used for nefarious purposes by hackers or state actors. ‘A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society.’”

What’s “true” and what’s “false” is a question that may not lend itself to zeros and ones. Google asserts that it is developing software to help spot deepfakes. Does Google have a solution?

Does anyone?

If an artifact is created and someone labels it “false,” smart software has to decide whether that label is correct. Humans, history suggests, struggle with defining the truth.

The problem is likely to be difficult to resolve. Censorship anyone?

Patrick Roland, February 8, 2019
