Categories: Audio Forensics, Computer Forensics, Crisis Management, Online Reputation Management, Social Media

Deepfake: Its Role in Law, Perception, and Crisis Management (Part 2)

Welcome to Part 2 of Experts.com’s Deepfake Blog Series! In case you missed it, check out Part 1. Part 2 delves into the legal ramifications and perceptive dangers of deepfake videos, along with solutions for individuals and organizations that have been negatively affected by deceptive content. This post features continued insight from Audio, Video, and Photo Clarification and Tampering Expert Bryan Neumeister, along with new perspective from fellow Experts.com Member and Online Reputation Management Expert Shannon Wilkinson.

Because deepfake technology is so new, its legal ramifications are not yet concrete. In fact, admitting deepfake content as evidence in criminal and civil court cases can be a precarious endeavor because of metadata. According to the Oxford Dictionary, metadata is “information that describes other information.” Think of metadata as the information found on a book jacket: the author’s name, a short biography of the author, a synopsis of the book, the name and location of the publishing company, and so on. Metadata answers the same questions about videos and photographs on the internet. It has even been used to solve crimes. In 2012, for example, law enforcement located John McAfee, who had fled questioning over the alleged murder of his neighbor, using the metadata embedded in a photo VICE Media, LLC published from an interview with him (NPR). “The problem with metadata is when you upload any video to YouTube or Facebook, the metadata is washed because the user gives up the right to the video,” says Bryan Neumeister. Platforms strip metadata for various reasons; some do it to shrink files and speed up delivery, but the practice raises concern for those interested in preserving intellectual property (Network World). Combined with the numerous reposts a photo or video accumulates, this makes finding the original author of a post on major social media platforms a serious problem for litigants.
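
To make the “book jacket” analogy concrete, here is a minimal Python sketch, assuming the Pillow imaging library and a hypothetical filename, that prints the EXIF metadata embedded in a photo. Run against a file downloaded from a major social platform, it will typically report nothing, because, as Mr. Neumeister notes, the tags are washed on upload.

```python
# A minimal sketch (assuming Pillow is installed) that prints the
# EXIF metadata embedded in an image. The filename is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def print_metadata(path: str) -> None:
    """Print each EXIF tag stored in an image file."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (it may have been stripped).")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{name}: {value}")

print_metadata("interview_photo.jpg")  # hypothetical file
```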

Entering such evidence into court becomes a chain-of-custody issue under the Federal Rules of Evidence (Rules 702 and 902) and the Daubert Standard, a set of criteria used to determine the admissibility of expert witness testimony. Part of Mr. Neumeister’s expertise is sifting through the components of digital evidence (time stamp, camera, exposure, type of lens, etc.) with computer software to determine whether it is authentic or has been modified. One of the many techniques he uses is examining the hash value of digital evidence. According to Mr. Neumeister, “Hash values are referred to in Daubert 702 as a way to authenticate. Think about a hash value as a digital fingerprint.” Without that numerical fingerprint, the most vital piece of proof for discerning an original from a fake photograph or video, the digital evidence should be ruled inadmissible under the Daubert Standard, because there is no chain of custody back to a foundational original. And because deepfakes are difficult to trace, and their creators are mostly anonymous individuals with limited assets, prosecuting these cases is a long-term investment with little prospect of return. From a moral perspective, justice should be served; with little or no recourse, the frustration is overwhelming for people whose character and financial future have been put in jeopardy.
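
To illustrate the “digital fingerprint” idea, here is a minimal Python sketch using the standard-library hashlib module; the filenames are hypothetical. Matching digests help tie a copy back to a foundational original, while any edit, re-encode, or metadata wash produces a completely different digest.

```python
# A minimal sketch of computing a file's hash value, the "digital
# fingerprint" used to authenticate digital evidence. Filenames
# below are hypothetical.
import hashlib

def file_hash(path: str, algorithm: str = "sha256") -> str:
    """Return the hex digest of a file, read in chunks to handle large videos."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# If the two digests match, the copy is bit-for-bit identical to the
# original; if even one byte differs, the digests will not match.
print(file_hash("original.mp4"))
print(file_hash("downloaded_copy.mp4"))
```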

Deepfakes may be complicated in the legal arena, but in the world of public perception, their role is much more forthright. In recent years, perception has become reality, and that notion rings resoundingly true for deepfake content. People who create and publish deceitful content have three main goals: to tarnish a person’s or company’s reputation, change a narrative, and ultimately influence the public. “Deepfakes are not usually done by big corporations. There is too much at stake. They are usually done by groups that have an intent to cause misdirection,” says Mr. Neumeister. The truth about events involving politicians, or any other public figure, has now become subjective. As with most viral posts, once a deepfake video is released, people will believe what they see on social media unless they do their own research and find other sources that confirm or deny the deceptive material. There are two reasons for this: 1) the video confirms an already ingrained bias, and 2) some people would rather trust the information than actively look for contradicting sources, whether out of lack of will or information overload. Studies have shown that viewers already leaning toward the situation a deepfake portrays can be convinced within seconds. Even if a fact-checked source later proves the contrary, the damage to a public figure’s reputation has already been done.

For instance, one of the most popular types of deepfake is centered around pornography. As discussed in Part 1, deepfake videos generated with Generative Adversarial Networks (GANs) use an algorithmic structure that ingests large amounts of footage of a target and mimics it in the desired output. However, raw GAN output is often too clean: its blatantly high-quality footage looks too polished to pass as an authentic video. To strengthen the illusion, people use techniques such as adding background noise, changing the frame rate, and editing footage out of context to make the video seem more “realistic.” According to Mr. Neumeister, “The more you dirty it up, the harder it is to tell … and then you’ve got enough to make something convincing that a lot of people won’t fact check.” Because of this unfortunate reality, the emergence of different types of deepfake content can ruin the reputations of individuals and businesses across the board. Fortunately, there are methods for managing public perception.
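
As a rough illustration of the “dirtying up” Mr. Neumeister describes, the sketch below, assuming NumPy and Pillow and hypothetical filenames, overlays sensor-style Gaussian noise on a single frame. This is not how any particular deepfake was produced; it simply shows how easily too-clean synthetic footage can be roughened to look more like real camera output.

```python
# A sketch (assuming NumPy and Pillow) of one "dirtying up" step:
# adding camera-style grain so overly clean synthetic footage looks
# more plausible. Filenames are hypothetical.
import numpy as np
from PIL import Image

def add_noise(in_path: str, out_path: str, sigma: float = 12.0) -> None:
    """Overlay Gaussian noise on an image to mimic sensor grain."""
    frame = np.asarray(Image.open(in_path).convert("RGB")).astype(np.float32)
    noise = np.random.default_rng().normal(0.0, sigma, frame.shape)
    noisy = np.clip(frame + noise, 0, 255).astype(np.uint8)
    Image.fromarray(noisy).save(out_path)

add_noise("synthetic_frame.png", "degraded_frame.png")
```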

A positive public image is one of the driving forces behind success, trust, revenue, and a growing client base. For this reason, malicious and manipulative material found on the internet is threatening. The internet allows everyone to become an author, which gives users the power to post anything from true stories to false narratives. When businesses and organizations find themselves in a fraudulent crisis, “it can impact shareholder value, damage an organization’s reputation and credibility in the eye of consumers and customers, and result in the dismissal or stepping down of a CEO, board members, and/or other key leaders,” says Shannon Wilkinson, an Online Reputation Management Expert. Individuals, who typically have less of a digital presence than organizations, are even more at risk of facing defamatory content. This raises the question: what crisis management strategies can businesses and individuals use to defend themselves against deepfake content?

One reason crises emerge for organizations and public figures is a lack of proactiveness. Luckily, Ms. Wilkinson has provided numerous tips on how to prioritize reputation management and crisis response to build a “powerful digital firewall.” For reputation management, Ms. Wilkinson recommends:

  • Understanding how one’s business and brand appears to the world.
    • “Each Google page has 10 entries, discounting ads…The fewer you ‘own’ – meaning ones you publish… – the less control you have over your online image,” according to Ms. Wilkinson.
  • Customizing LinkedIn and Twitter profiles.
  • Publishing substantive, high-quality content related to one’s field of expertise or organization (white papers, blogs, articles, etc.).
  • Scheduling a professional photography session.
  • Creating a personal branding website (ex: http://www.yourname.com).

As for crisis response options, there are two key components businesses and individuals must consider before crafting a recovery plan:

  • Possessing an online monitoring system that alerts you when your brand is trending on social media (ex: Google Alerts and Meltwater); a minimal sketch of such a monitor appears after this list.
  • Seeing conversations in real time to augment one’s social presence within those digital spaces.
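
As a rough sketch of what such monitoring can look like in code, the example below assumes the third-party feedparser library and a Google Alerts RSS feed; the feed URL is a hypothetical placeholder, not a real integration with either product Ms. Wilkinson names.

```python
# A minimal brand-monitoring loop, assuming the feedparser library
# and a Google Alerts RSS feed. The URL is a hypothetical placeholder.
import time
import feedparser

FEED_URL = "https://www.google.com/alerts/feeds/EXAMPLE_ID/EXAMPLE_TOKEN"

def check_feed(seen: set) -> None:
    """Print any alert entries not seen on previous polls."""
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen:
            seen.add(entry.link)
            print(f"New mention: {entry.title} -> {entry.link}")

if __name__ == "__main__":
    seen_links: set = set()
    while True:
        check_feed(seen_links)
        time.sleep(15 * 60)  # poll every 15 minutes
```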

Below are the recommendations regarding the actual response to a crisis:

  • Social media platforms like Facebook and Twitter are among the most popular spaces for responding to deepfake content.
  • Keeping existing information current and updated is a vital strategy for countering attacks.
  • Avoid engaging with anonymous commenters and trolls.
  • “Video is an excellent tool for responding to situations that result in televised content. A well-crafted video response posted on YouTube will often be included in that coverage. This strategy is often used by major companies,” says Ms. Wilkinson.

The why behind creating, manipulating, and posting deepfakes for the world to see remains a moral dilemma. The motives for uploading such misleading content differ from one perpetrator to the next, but they are nefarious nonetheless. Legally, it remains an area of law where justice is not always served. Thanks to our Experts.com Members, Bryan Neumeister and Shannon Wilkinson, the what, when, how, and where of deepfake content have been explained by people who are well-versed in their respective fields. At the height of modern technology and the rampant spread of misinformation, our Experts advise all online users, entrepreneurs, public figures, and anyone else with access to the internet to adequately fact-check the sources they encounter on the web. Those associated with businesses, or who happen to be public figures, should prioritize developing crisis management precautions. In Mr. Neumeister’s own words, “People can destroy a city with a bomb, but they can take down a country with a computer.”