
Deep fake and targeted communication

August 20, 2019 | Expert Insights

Background 

The progenitor of the artificial intelligence (AI) enabled deep fake is a technique called the Generative Adversarial Network (GAN). A GAN pits two neural networks against each other: the generator and the discriminator.

The generator produces morphed outputs such as videos, voices or handwriting by mimicking the training data. The discriminator then compares each output to the training data to determine whether it is real. If it finds discrepancies, it sends the output back to the generator, which produces a version closer to the original. This back-and-forth continues until the output is indistinguishable from the original, yielding a convincingly doctored video or image.
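
The loop described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the code behind any particular deep fake tool; the network sizes, layer choices and the use of flattened images are assumptions made for brevity.

```python
# Minimal sketch of the generator/discriminator loop, using PyTorch.
# All dimensions and architectures here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # noise size, flattened image size (assumed)

generator = nn.Sequential(          # maps random noise to a fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(      # scores an image as real (1) or fake (0)
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """One round of the back-and-forth: the discriminator learns to spot
    fakes, then the generator learns to fool the updated discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator: real images should score 1, generated images 0.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator: produce images the discriminator now scores as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))),
                     real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Repeated over many batches of real images, the generator's output becomes
# progressively harder for the discriminator to tell apart from the data.
```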

Governments are gravely threatened by the speed at which such content spreads: the incentive to share interesting material outweighs the need to verify it.

In 2017, the "Synthesizing Obama" programme made the former president appear to speak words from an alternative soundtrack. About 14 hours of footage from the public domain was used to recreate his facial and lip movements and combine them with audio clips. In 2019, Democrat Nancy Pelosi was similarly targeted: a manipulated video was slightly slowed down to make it appear as though she was slurring her words. President Trump tweeted the video, which has so far garnered more than 95,000 likes.

Analysis

In the past, the deep fake label applied only to manipulated or doctored video, audio or images produced by face swapping. The technique began in the world of pornography, where celebrities' faces were swapped onto porn performers' bodies and the videos uploaded to the internet. These videos were easier to detect.

Hollywood, too, has been doing this for years through computer-generated imagery (CGI). The difference between legitimate CGI work and deep fakes is the malicious intent that often lies behind the latter. In today's fast-moving technological landscape, attackers have become far more sophisticated, using open-source software and machine-learning algorithms to raise the bar on how destructive they can be.

With the advent of tools that allow for the creation of AI-manipulated videos, everyone from tech companies to politicians is racing to combat deep fake technology.

In July 2019, Symantec, a cybersecurity company, revealed three cases of deep fake audio being used to trick senior financial controllers into transferring money. In one case, $10m was wired to criminals who used artificial intelligence to impersonate an executive over the phone. Other potential attacks include market manipulation, for example producing a video of a chief executive announcing a fake merger or false earnings in order to shift the share price or sabotage the brand.

Around 500 hours of video are uploaded to YouTube each minute, which makes manual detection laborious. "Some detection methods are really accurate, but right now there's not enough data out there to build a data set for the detection model," said Matthew Price, a principal research engineer at Baltimore-based ZeroFOX.

In May 2018, the Pentagon's Defense Advanced Research Projects Agency (DARPA) awarded three contracts to a nonprofit group called SRI International to work on its "media forensics" research programme. Amber, a New York company with a bolder vision for cleaning up the internet, has proposed software embedded in smartphone cameras to act as a kind of watermark. Startups such as ProofMode and Truepic offer technology that stamps photos with a watermark at the moment of capture so that their authenticity can be verified later.
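
The vendors' actual systems are proprietary, but the general idea of capture-time attestation can be illustrated with a short sketch: hash the image bytes when the photo is taken and sign the hash with a device key, so that any later copy can be checked against the original record. The key, record fields and helper names below are purely hypothetical.

```python
# Illustrative capture-time photo attestation, not any vendor's real scheme:
# hash the image at capture and sign the hash with a per-device key.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"per-device secret provisioned at manufacture"  # assumption

def attest_photo(image_bytes: bytes) -> dict:
    """Create a signed record (a kind of watermark) for a captured photo."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_photo(image_bytes: bytes, record: dict) -> bool:
    """Check that a photo still matches its capture-time record."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(image_bytes).hexdigest() == claimed["sha256"])

photo = b"...raw JPEG bytes..."
proof = attest_photo(photo)
print(verify_photo(photo, proof))            # True: untouched copy
print(verify_photo(photo + b"edit", proof))  # False: altered after capture
```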

DARPA has also initiated a Media Forensics (MediFor) programme to develop technological tools that can automatically distinguish the real from the unreal in deep fakes. Ten research teams participate in MediFor, using a forensic approach, a proactive approach, or sometimes both.

At Purdue University, Delp's research team is using neural networks to detect disparities across multiple frames in video sequences. The team has been able to detect subtle differences as small as a few pixels.
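
The Purdue model itself is not described here, but the underlying idea of flagging inter-frame inconsistencies can be shown with a toy example: score each pair of consecutive frames and flag transitions whose change is anomalously large. The threshold, frame sizes and synthetic "video" below are all assumptions for illustration.

```python
# Toy illustration of flagging inter-frame inconsistencies (not the Purdue
# team's model): score consecutive frames by mean absolute pixel difference
# and flag transitions whose change is anomalously large for the video.
import numpy as np

def frame_disparity_scores(frames: np.ndarray) -> np.ndarray:
    """frames: (num_frames, height, width) array; returns one disparity
    score per consecutive pair of frames."""
    diffs = np.abs(frames[1:].astype(float) - frames[:-1].astype(float))
    return diffs.mean(axis=(1, 2))

def flag_suspicious_transitions(frames: np.ndarray, z_threshold: float = 3.0):
    """Return indices of frame transitions far from the video's norm."""
    scores = frame_disparity_scores(frames)
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return np.where(z > z_threshold)[0]

# Example: 100 synthetic noise frames with one crudely tampered frame.
rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(100, 64, 64)).astype(np.uint8)
video[50] = 255  # stand-in for a manipulated frame
print(flag_suspicious_transitions(video))  # transitions into and out of frame 50
```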

Rumman Chowdhury, a consultant who leads "responsible AI" at Accenture, said another option was to look at preventive measures, such as requiring those who publish code for creating deep fakes to build in verification measures.

Assessment 

  • Deep fakes and targeted communication appeal to our non-rational biases and create echo chambers that help solidify the views of people inclined towards an ideology.
  • Advances in social media technology have given platforms greater cognitive influence in shaping our views. Each time we share a like or a message, an algorithm picks it up and feeds us more of the same, reinforcing our views and shutting out alternative perspectives. It is a kind of filter bubble that serves as invisible auto-propaganda, indoctrinating us with our own ideas and our desire for the contextually familiar, and leaving us oblivious to the dark territory of the unknown.
  • The weaponization of social media is a commonly used term, but not enough has been done to counter targeted communication. The first building block of any counter-narrative is securing personal data, since it is this data that enables the psychological profiling of individuals for targeted communication.
  • The number of people working on the forensics side is marginal, and mostly comprises academics, compared to those working on developing deep fakes. Neither Google nor Facebook is developing forensic techniques.
  • Machine-learning detectors will adapt quickly as new deep fake technology emerges, whereas human forensics experts will take much longer to get up to speed.
  • Legal systems of all democratic countries are likely to continue to grapple with the ethical and technological issues presented by deep fakes. 

Image Courtesy: flickr.com