
Deepfake: Its Role in Law, Perception, and Crisis Management (Part 2)

Welcome to Part 2 of Experts.com’s Deepfake Blog Series! In case you missed it, check out Part 1. Part 2 delves into the legal ramifications and perceptual dangers of deepfake videos, along with solutions for individuals and organizations that have been negatively affected by deceptive content. This post includes continued insight from Audio, Video, and Photo Clarification and Tampering Expert, Bryan Neumeister, and new knowledge from fellow Experts.com Member and Online Reputation Management Expert, Shannon Wilkinson.

Because deepfake content and the technology behind it are so new, the legal ramifications are not concrete. In fact, admitting deepfake content as evidence in some criminal and civil court cases can be a precarious endeavor because of metadata. According to the Oxford Dictionary, metadata is “information that describes other information.” Think of metadata as the information printed on a book’s jacket: the author’s name, a summary of the author, a synopsis of the book, the name and location of the publishing company, and so on. Metadata answers the same questions about videos and photographs on the internet. It has even been used to solve crimes. For example, in 2012, law enforcement located John McAfee, who was evading criminal prosecution for the alleged murder of his neighbor, using the metadata from a photo VICE Media, LLC released in an interview with the suspect (NPR). “The problem with metadata is when you upload any video to YouTube or Facebook, the metadata is washed because the user gives up the right to the video,” says Bryan Neumeister. The reasons metadata is removed vary. Some platforms strip metadata to speed up the loading of images and videos. However, the practice raises concern for those interested in preserving intellectual property (Network World). Combined with the numerous reposts a photo or video acquires, this makes finding the original author of a post on major social media platforms a serious problem for litigants.
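As a rough illustration of the “washing” Mr. Neumeister describes: Exif metadata in a JPEG lives in a dedicated segment (APP1, tagged with the bytes `Exif\x00\x00`) that re-encoding simply drops. The byte strings below are hypothetical stand-ins, not real image files, and the check is a minimal sketch, not a full JPEG parser.

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Crude check: does this JPEG byte stream still carry an Exif segment?

    Real Exif metadata (camera model, GPS, timestamps) sits in an APP1
    segment that begins with the marker b"Exif\x00\x00"; platforms that
    strip metadata on upload drop this segment entirely.
    """
    return jpeg_bytes.startswith(b"\xff\xd8") and b"Exif\x00\x00" in jpeg_bytes

# Hypothetical byte streams standing in for a photo before and after upload
original_photo = b"\xff\xd8\xff\xe1\x00\x2cExif\x00\x00<camera, GPS, time>\xff\xd9"
rehosted_copy = b"\xff\xd8\xff\xdb<re-encoded image data, metadata gone>\xff\xd9"
```

Run against the original bytes, `has_exif` finds the marker; run against the re-encoded copy, it does not, which is exactly why a downloaded repost cannot anchor a chain of custody.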

Entering such evidence into court becomes a chain-of-custody issue under the Daubert Standard and Federal Rules of Evidence 702 and 902, the criteria used to determine the admissibility of expert witness testimony. Part of Mr. Neumeister’s expertise is sifting through the components of digital evidence (time stamp, camera, exposure, type of lens, etc.) with computer software to determine whether it is authentic or modified. One of the many techniques he uses is examining the hash value of digital evidence. According to Mr. Neumeister, “Hash values are referred to in Daubert 702 as a way to authenticate. Think about a hash value as a digital fingerprint.” Without this set of numerical data, the most vital piece of proof for discerning an original from a faked photograph or video, the digital evidence should be ruled inadmissible under Daubert standards, as there is no chain of custody back to a foundational original. Because deepfakes are difficult to trace, and perpetrators are mostly anonymous underground individuals with limited assets, prosecuting these cases is a long-term investment with little prospect of a return. From a moral perspective, justice should be served. With little or no recourse, the frustration is overwhelming for people whose character and financial future have been put in jeopardy.
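A hash value really does behave like a fingerprint: a standard hash function such as SHA-256 maps a file of any size to a fixed-length digest, and changing even a single byte of the file produces a completely different digest. A minimal sketch using Python’s standard library (the byte payloads are hypothetical stand-ins for video files):

```python
import hashlib

# Hypothetical payloads standing in for an original clip and a doctored copy
original_clip = b"\x00\x01 frame data frame data frame data"
tampered_clip = b"\x00\x01 frame data frame data frame datb"  # one byte altered

# SHA-256 produces a 64-hex-character digest regardless of input size
fingerprint_original = hashlib.sha256(original_clip).hexdigest()
fingerprint_tampered = hashlib.sha256(tampered_clip).hexdigest()
```

If a questioned video’s digest matches the digest recorded for the foundational original, the two files are byte-for-byte identical; any edit, however small, breaks the match.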

Deepfakes may be complicated in the legal arena, but in the world of public perception, their role is much more forthright. In recent years, perception has become reality, and that notion rings resoundingly true for deepfake content. People who create and publish deceitful content have three main goals: to tarnish a person’s or company’s reputation, change a narrative, and ultimately influence the public. “Deepfakes are not usually done by big corporations. There is too much at stake. They are usually done by groups that have an intent to cause misdirection,” says Mr. Neumeister. The truth about events involving politicians, or any other public figure, has now become subjective. As with most viral posts, once a deepfake video is released, unless users do their own research and find other sources that confirm or refute the material, they will believe what is shown on social media. There are two reasons for this: 1) the content confirms an already ingrained bias, and 2) some people would rather trust the information than actively look for sources that contradict the deepfake, whether from lack of will or from information overload. Studies have shown it takes just a few seconds to convince people who already lean the way a deepfake portrays a situation to believe the content. Even if a fact-checked source proves the contrary, the damage to a public figure’s perception has already been done.

For instance, one of the most popular types of deepfake centers around pornography. As discussed in Part 1, deepfake videos generated with Generative Adversarial Networks (GANs) use a specific algorithmic structure that accumulates multitudes of footage and mimics the desired output. However, raw GAN footage is often so clean and high-quality that it looks too polished to be an authentic video. To complete the illusion, people use techniques such as adding background noise, changing the frame rate, and editing footage out of context to make the video more “realistic.” According to Mr. Neumeister, “The more you dirty it up, the harder it is to tell … and then you’ve got enough to make something convincing that a lot of people won’t fact check.” This unfortunate reality, the emergence of ever more convincing deepfake content, can ruin the reputations of individuals and businesses across the board. Fortunately, there are methods for managing public perception.

A positive public image is one of the driving forces behind success, trust, revenue, and a growing client base. For this reason, malicious and manipulative material found on the internet is threatening. The internet allows everyone to become an author, which gives users the power to post a variety of content ranging from true stories to false narratives. When businesses and organizations find themselves in a fraudulent crisis, “it can impact shareholder value, damage an organization’s reputation and credibility in the eye of consumers and customers, and result in the dismissal or stepping down of a CEO, board members, and/or other key leaders,” states Shannon Wilkinson, an Online Reputation Management Expert. Individuals, who typically have less of a digital presence than organizations, are at even greater risk of facing defamatory content. This raises the question: what crisis management strategies can businesses and individuals use to defend themselves against deepfake content?

One of the reasons crises emerge for organizations and public figures is a lack of proactive preparation. Luckily, Ms. Wilkinson has provided numerous tips on how to prioritize reputation management and crisis response to build a “powerful digital firewall.” For reputation management, Ms. Wilkinson recommends:

  • Understanding how one’s business and brand appears to the world.
    • “Each Google page has 10 entries, discounting ads…The fewer you ‘own’ – meaning ones you publish… – the less control you have over your online image,” according to Ms. Wilkinson.
  • Customizing LinkedIn and Twitter profiles.
  • Publishing substantive and high-quality content related to one’s field of expertise or organizations (white papers, blogs, articles, etc.).
  • Scheduling a professional photography session.
  • Creating a personal branding website (ex: http://www.yourname.com).

As for crisis response options, there are two key components businesses and individuals must consider before crafting a recovery plan:

  • Possessing an online monitoring system that alerts when one’s brand is trending on social media (ex: Google Alerts and Meltwater).
  • Seeing conversations in real time to augment one’s social presence within those digital spaces.

Below are the recommendations regarding the actual response to a crisis:

  • Social media platforms like Facebook and Twitter seem to be the more popular spaces to respond to deepfake content.
  • Updating current and existing information is a vital strategy to counter attacks.
  • Avoid engaging with anonymous commenters and trolls.
  • “Video is an excellent tool for responding to situations that result in televised content. A well-crafted video response posted on YouTube will often be included in that coverage. This strategy is often used by major companies,” a direct quote from Ms. Wilkinson.

The why behind creating, manipulating, and posting deepfakes for the world to see is a moral dilemma. The motives for uploading such misleading content differ from person to person but are nefarious nonetheless. Legally, it remains an area of law where justice is not always served. Thanks to our Experts.com Members, Bryan Neumeister and Shannon Wilkinson, the what, when, how, and where of deepfake content have been explained by people well-versed in their respective fields. At the height of modern technology and the rampant spread of misinformation, our Experts advise all online users, entrepreneurs, public figures, and anyone with access to the internet to adequately fact-check sources encountered on the web. Those who run businesses or happen to be public figures should prioritize developing crisis management precautions. In Mr. Neumeister’s own words, “People can destroy a city with a bomb, but they can take down a country with a computer.”


Deepfake: An Introduction (Part 1)

Computer technology is one of the most pivotal inventions in modern history. Artificial Intelligence, smartphones, social media, and related technologies have significantly enhanced living conditions in an unprecedented manner and connected the world at the click of a button. Computers are used in occupations of every kind, from business-related fields to more creative professions. To say modern technology has been advantageous in recent decades is an understatement. However, every creation has its flaws. This multi-part blog series is intended to reveal one of those flaws, and a dangerous one at that: deepfake videos. This first post includes an introduction to deepfake videos and the steps taken by federal and state governments to identify such duplicitous content. Special insight on the subject is provided by our Experts.com Member and Audio, Video, and Photo Clarification and Tampering Expert, Bryan Neumeister.

Editing footage and photos is normal practice in our selfie-addicted new normal, but creating distorted content is a whole new ballgame. According to CNBC, deepfakes are “falsified videos made by means of deep learning.” These videos, images, audio recordings, and other digital forms of content are manipulated so that counterfeits pass as the real thing. What makes matters worse is that the internet allows anyone and everyone to create, edit, and post deceptive content. Deepfakes threaten cybersecurity strategists, police departments, politicians, and industries alike because the purpose of making them is to spread misinformation, tarnish reputations, exploit evidence, and ultimately deceive an audience. The unfortunate reality is that deepfake videos depicting pornographic scenarios and manipulated political moments are the most common. For instance, a notable deepfake video posted by Buzzfeed in 2018 depicted former United States president Barack Obama slandering then-president Donald Trump. However, the voice behind Obama is none other than Jordan Peele. The video was intended as a moral lesson on how important it is to verify online sources, and to highlight the dangerous problem of trusting every post uploaded to the internet.

According to Mr. Neumeister, who specializes in this area of expertise, there are two types of artificial intelligence programs used to create deepfake videos: GANs and FUDs. He states, “GANs (Generative Adversarial Networks) are used by professionals, and FUDs (Fear, Uncertainty, and Doubt) are the homemade ones.” Although FUD videos garner more attention among internet users, the real menace to society are the videos made from GANs.

Videos made with Generative Adversarial Networks have an algorithmic framework designed to acquire input data and mimic the desired output data. One can visualize how GANs work through the viral Tom Cruise TikTok deepfake. According to NPR, the creator of the deepfake, Chris Ume, fed an accumulation of Tom Cruise footage into a machine-learning algorithm. This allowed him to give a digital face transplant to the Tom Cruise lookalike actor he hired for the video. Ume input a plethora of videos to create the desired output of a realistic face swap. Neumeister adds that the realism of a deepfake correlates with the amount of footage its creator can acquire. Specifically, “the more bits of video clip you have to put together, the more accurate you can make facial movements, ticks, etc.” By this logic, Ume’s Tom Cruise deepfake looks far more realistic than homemade efforts that lack such algorithmic machinery.
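To make the generator-versus-discriminator idea concrete, here is a toy, pure-Python sketch of a GAN in one dimension: a generator learns to produce numbers that the discriminator can no longer tell apart from “real” data. Everything here is illustrative; real deepfake GANs pit deep neural networks against each other over images, not a two-parameter line over numbers.

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    # Numerically stable logistic function
    return 1.0 / (1.0 + math.exp(-x)) if x >= 0 else math.exp(x) / (1.0 + math.exp(x))

def real_sample() -> float:
    # "Real data": samples from a Gaussian centered at 4.0
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # generator G(z) = a*z + b, starts far from the real data
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c), "probability x is real"
lr = 0.01

for _ in range(5000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    x_real = real_sample()
    x_fake = a * random.gauss(0, 1) + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: adjust (a, b) so the discriminator scores fakes as real
    z = random.gauss(0, 1)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

# After training, the generator's output should have drifted toward the real data
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(1000)) / 1000
```

The adversarial pressure is the whole trick: neither player is told what “realistic” means; the generator only ever learns it by fooling an ever-improving critic, which is why more training footage yields more convincing fakes.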

Because viewers typically see deepfakes in politics and pornography, federal and state governments have recently implemented laws to counteract deepfake creation and distribution. President Trump signed the first federal deepfake law near the end of 2019. The legislation is included in the National Defense Authorization Act for Fiscal Year 2020 (NDAA), a $738 billion defense policy bill passed by both the Senate (86-8) and the House (377-48). The NDAA contains two deepfake provisions: it requires (1) a comprehensive report on the foreign weaponization of deepfakes, and (2) that the government notify Congress of foreign deepfake-disinformation activities targeting US elections (JD Supra). The NDAA also established a “Deepfakes Prize” competition to promote research into deepfake-detection technologies. On the state level, multiple states have passed laws criminalizing specific deepfake videos (JD Supra):

  • Virginia: first state to establish criminal penalties on the spread of nonconsensual deepfake pornography.
  • Texas: first state to ban creation and dissemination of deepfake videos aimed to alter elections or harm candidates for public office.
  • California: victims of nonconsensual deepfake pornography can sue for damages; candidates for public office can sue organizations and individuals that maliciously spread election-related deepfakes without warning labels near Election Day.

Although the Trump administration and various states have established policies against deepfakes, such content remains ubiquitous on almost every online platform. How can users at home distinguish authentic content from deepfakes?

Mr. Neumeister provides a few tips and tricks for detecting a deepfake. One giveaway is mouth movement, analyzed through phonemes (speech sounds) and visemes (the mouth shapes that produce them). Mouths move in predictable ways when people speak. For instance, words like mama, baba, and papa start with a closed mouth, while words like father and violin start with the front teeth pressing against the bottom lip. Consonants and vowels likewise sound a certain way when pronounced correctly. “Words with t, f, n, o, and wh, are pretty good for tells,” adds Mr. Neumeister. When analyzing video, the footage of a person speaking is broken down into approximately six to ten frames per word to determine whether the person talks the same way in the questioned video as in other, verified videos. Another of Mr. Neumeister’s tips is to watch videos with context in mind. Viewers should pay attention to background noise, crowd ambiance, and the cadence of a speaker’s sentences. Authentic, original content has, by nature, realistic frames, so users can detect a deepfake by sensing dissonance in, for instance, a speaker’s proximity to the microphone or the size of a room. For users at home or on the go, these tips are crucial for distinguishing verified sources from manipulated misinformation.
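The frame-by-frame comparison described above can be sketched in miniature: transcribe the mouth state in each of the six to ten frames of a word, then compare the questioned clip’s sequence against the same word in verified footage. The viseme labels and sequences below are invented for illustration; real forensic tools measure mouth geometry rather than hand-written labels.

```python
import difflib

# Hypothetical per-frame viseme transcriptions of one spoken word
verified_footage = ["closed", "open", "teeth_on_lip", "open", "closed"]
questioned_clip = ["open", "open", "closed", "open", "open"]

def viseme_match(a: list, b: list) -> float:
    """Similarity between two viseme sequences, from 0.0 (no overlap) to 1.0."""
    return difflib.SequenceMatcher(None, a, b).ratio()

score = viseme_match(verified_footage, questioned_clip)
# A low score suggests the mouth in the questioned clip does not move the way
# this speaker's mouth moves in verified footage - a classic deepfake tell.
```

Words beginning with the telltale consonants Mr. Neumeister mentions make good test words precisely because their opening visemes are so constrained.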

The emergence of deepfake content, its continuously improving technology, and the spread of disinformation form a multifaceted and complex problem. This blog post has only scratched the surface, so stay tuned for Part 2 for a more in-depth read.