Computer technology is one of the most pivotal inventions in modern history. Artificial Intelligence, smartphones, social media, and related devices have enhanced living conditions in an unprecedented manner and connected the world at the click of a button. These technologies are used across occupations, from business-related fields to more creative professions. To say modern technology has been advantageous in recent decades is an understatement. However, every creation has its flaws. This multi-part blog series is intended to reveal one of those flaws, and a dangerous one at that: deepfake videos. This first post introduces deepfake videos and the steps taken by federal and state governments to identify such duplicitous content. Special insight on the subject is provided by our Experts.com Member and Audio, Video, and Photo Clarification and Tampering Expert, Bryan Neumeister.

Editing footage and photos is normal practice in our selfie-addicted new normal, but creating distorted content is a whole new ballgame. According to CNBC, deepfakes are “falsified videos made by means of deep learning.” These videos, images, audio clips, and other forms of digital content are manipulated so that counterfeits pass as the real thing. What makes matters worse is that the internet allows anyone and everyone to create, edit, and post deceptive content. Deepfakes threaten cybersecurity strategists, police departments, politicians, and industries alike because they are made to spread misinformation, tarnish reputations, exploit evidence, and ultimately deceive an audience. The unfortunate reality is that deepfake videos depicting pornographic scenarios and manipulated political moments are the most common. For instance, a notable deepfake video posted by BuzzFeed in 2018 depicted former United States President Barack Obama insulting then-President Donald Trump. The voice behind Obama, however, was none other than comedian Jordan Peele. The video was intended as a moral lesson: verify online sources, and do not trust every post uploaded to the internet.

According to Mr. Neumeister, who specializes in this area of expertise, there are two types of artificial intelligence programs used to create deepfake videos: GANs and FUDs. He states, “GANs (Generative Adversarial Networks) are used by professionals, and FUDs (Fear, Uncertainty, and Doubt) are the homemade ones.” Although FUD videos garner more attention among internet users, the real menace to society is the videos made with GANs.

Videos made with Generative Adversarial Networks rely on an algorithmic framework designed to take in input data and mimic the desired output. One can visualize how GANs work through the viral Tom Cruise TikTok deepfake. According to NPR, the creator of the deepfake, Chris Ume, fed a large collection of Tom Cruise footage into a machine-learning algorithm. This allowed him to give a digital face transplant to the Tom Cruise lookalike actor he hired for the video. In other words, Ume input a plethora of videos to produce the desired output: a realistic face swap. Neumeister adds that the realism of a deepfake correlates with the amount of footage the creator can acquire. Specifically, “the more bits of video clip you have to put together, the more accurate you can make facial movements, ticks, etc.” By this logic, Ume’s algorithmically trained Tom Cruise deepfake looks far more realistic than homemade videos produced without such tools and training data.
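The adversarial idea behind a GAN can be sketched in a few lines of code. A generator produces fakes from random noise while a discriminator tries to tell them from real samples, and each network improves against the other. The toy below is a hedged illustration only, not Ume’s pipeline (which operated on video): it trains a two-parameter generator to mimic a one-dimensional Gaussian “dataset,” with all numbers and names invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from a Gaussian the generator must learn to mimic.
REAL_MEAN, REAL_STD = 4.0, 0.5

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

# Generator: x = a*z + b  (maps noise z onto the data distribution)
a, b = 1.0, 0.0
# Discriminator: sigmoid(w*x + c)  (scores how "real" a sample looks)
w, c = 0.1, 0.0

def sigmoid(s):
    # clip for numerical stability
    return 1.0 / (1.0 + np.exp(-np.clip(s, -60, 60)))

lr, batch = 0.02, 64
for step in range(3000):
    # --- discriminator update: push d(real) -> 1 and d(fake) -> 0 ---
    z = rng.normal(size=batch)
    x_real, x_fake = sample_real(batch), a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    gw = np.mean((d_real - 1) * x_real + d_fake * x_fake)
    gc = np.mean((d_real - 1) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- generator update (non-saturating loss): push d(fake) -> 1 ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    gx = (d_fake - 1) * w          # gradient of the loss w.r.t. x_fake
    ga = np.mean(gx * z)
    gb = np.mean(gx)
    a -= lr * ga
    b -= lr * gb

# After training, generated samples should cluster near the real data.
fake = a * rng.normal(size=1000) + b
print(f"generated mean={fake.mean():.2f}, target mean={REAL_MEAN}")
```

The same tug-of-war, scaled up to deep convolutional networks and face images instead of scalars, is what lets a GAN-based deepfake produce faces the discriminator, and eventually a human viewer, cannot tell from real footage.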

Because viewers typically see deepfakes in politics and pornography, federal and state governments have recently implemented laws to counteract deepfake content creation and distribution. President Trump signed the first federal deepfake law near the end of 2019. The legislation is part of the National Defense Authorization Act for Fiscal Year 2020 (NDAA), a $738 billion defense policy bill passed by both the Senate (86-8) and the House (377-48). The NDAA’s two deepfake provisions require “(1) a comprehensive report on the foreign weaponization of deepfakes; (2) requires the government to notify Congress of foreign deepfake-disinformation activities targeting US elections” (JD Supra). The NDAA also established a “Deepfakes Prize” competition to promote research into deepfake-detection technologies. On the state level, multiple states have passed laws criminalizing specific deepfake videos (JD Supra):

  • Virginia: first state to establish criminal penalties on the spread of nonconsensual deepfake pornography.
  • Texas: first state to ban creation and dissemination of deepfake videos aimed to alter elections or harm candidates for public office.
  • California: victims of nonconsensual deepfake pornography can sue for damages; candidates for public office can sue organizations and individuals that maliciously spread election-related deepfakes without warning labels near Election Day.

Although the Trump administration and various states have established policies against deepfakes, such content remains ubiquitous on almost all online platforms. How can users at home distinguish authentic content from deepfakes?

Mr. Neumeister provides a few tips and tricks for detecting a deepfake. One giveaway is mouth movement, analyzed through phonemes (the sounds of speech) and visemes (the mouth shapes that produce them). Mouths move in predictable ways when people speak. For instance, words like mama, baba, and papa start with a closed mouth, while words like father and violin start with the front teeth pressing against the bottom lip. Consonants and vowels likewise sound a certain way when pronounced correctly. “Words with t, f, n, o, and wh, are pretty good for tells,” adds Mr. Neumeister. When analyzing video, the frames in which a person is speaking are broken down into approximately six to ten frames to determine whether the person’s articulation matches the way they talk in other, verified videos. Another tip Mr. Neumeister suggests is to watch videos with context in mind. Viewers should pay attention to background noise, crowd ambiance, and the cadence of a speaker’s sentences. Authentic, original content is by nature internally consistent, so users can detect a deepfake by sensing dissonance in, for instance, a speaker’s proximity to the microphone or the apparent size of a room. For users at home or on the go, these tips are crucial for distinguishing verified sources from manipulated misinformation.
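The phoneme/viseme check above can be caricatured in code. Assuming a face-landmark tracker has already produced a per-frame mouth-opening measurement (such trackers exist, but the `viseme_consistency` helper and the synthetic traces below are invented for illustration), comparing a suspect clip’s trace against a verified clip of the same phrase reduces to a simple correlation:

```python
import numpy as np

def viseme_consistency(seq_a, seq_b):
    """Pearson correlation between two per-frame mouth-opening traces.

    seq_a / seq_b: mouth-aperture measurements (e.g. lip distance from a
    face-landmark tracker) for the SAME spoken phrase in two videos.
    Values near 1.0 suggest consistent articulation; low values are a red flag.
    """
    a = (seq_a - seq_a.mean()) / seq_a.std()
    b = (seq_b - seq_b.mean()) / seq_b.std()
    return float(np.mean(a * b))

# Synthetic stand-ins for tracker output over 30 video frames.
rng = np.random.default_rng(1)
frames = np.linspace(0, 2 * np.pi, 30)
reference   = 0.5 + 0.5 * np.sin(frames)               # verified articulation
same_person = reference + rng.normal(0, 0.02, 30)      # genuine second take
suspicious  = 0.5 + 0.5 * np.sin(2 * frames + 1.0)     # wrong viseme timing

score_same = viseme_consistency(reference, same_person)
score_susp = viseme_consistency(reference, suspicious)
print(f"genuine take: {score_same:.2f}, suspect take: {score_susp:.2f}")
```

Real forensic tools operate on far richer features than a single aperture number, but the principle is the one Mr. Neumeister describes: a mouth that does not close on “mama” or press teeth to lip on “father” will not correlate with how the real speaker talks elsewhere.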

The emergence of deepfake content, its continuously improving technology, and the spread of disinformation pose a multifaceted and complex problem. This blog post has only scratched the surface, so stay tuned for part 2 for a more in-depth read.

Posted by Hana Zumout
