Category: Social Media

Audio Forensics, Computer Forensics, Crisis Management, Online Reputation Management, Social Media

Deepfake: Its Role in Law, Perception, and Crisis Management (Part 2)

Welcome to Part 2 of Experts.com’s Deepfake Blog Series! In case you missed it, check out Part 1. Part 2 delves into the legal ramifications and perceptual dangers of deepfake videos, along with solutions for individuals and organizations that have been negatively affected by deceptive content. This post includes continued insight from Audio, Video, and Photo Clarification and Tampering Expert Bryan Neumeister, and new knowledge from fellow Experts.com Member and Online Reputation Management Expert Shannon Wilkinson.

Because deepfake content and its underlying technology are relatively new, the legal ramifications are not yet concrete. In fact, admitting deepfake content as evidence in some criminal and civil court cases can be a precarious endeavor because of metadata. According to the Oxford Dictionary, metadata is “information that describes other information.” Think of metadata as the information found on a book’s jacket: the author’s name, a summary of the author, a synopsis of the book, the name and location of the publishing company, and so on. Metadata answers the same questions about videos and photographs on the internet. It has even been used to solve crimes. For example, in 2012, law enforcement located John McAfee, who was evading criminal prosecution for the alleged murder of his neighbor, using the metadata from a photo VICE Media, LLC released in an interview with the suspect (NPR). “The problem with metadata is when you upload any video to YouTube or Facebook, the metadata is washed because the user gives up the right to the video,” says Bryan Neumeister. Reasons vary as to why metadata is removed. Some platforms strip metadata as a matter of policy to speed up loading times for images and videos, though this raises concerns for those interested in preserving intellectual property (Network World). Combined with the numerous reposts a photo or video accumulates, finding the original author of a post on major social media platforms poses a real problem for litigants.

Entering evidence into court becomes a chain-of-custody issue (Federal Rules of Evidence 702 and 902) under the Daubert Standard, a set of criteria used to determine the admissibility of expert witness testimony. Part of Mr. Neumeister’s expertise is sifting through the components of digital evidence (time stamp, camera, exposure, type of lens, etc.) via computer software to determine whether it is authentic or has been modified. One of the many techniques he uses is examining the hash value of digital evidence. According to Mr. Neumeister, “Hash values are referred to in Daubert 702 as a way to authenticate. Think about a hash value as a digital fingerprint.” Without this numerical fingerprint, the most vital piece of proof for discerning an original from a fake photograph or video, the digital evidence should be ruled inadmissible under Daubert standards, as there is no chain of custody back to a foundational original. Because deepfakes are difficult to track, and perpetrators are mostly anonymous underground individuals with limited assets, prosecuting these cases is a long-term investment without a return. From a moral perspective, justice should be served. With little or no recourse, the frustration is overwhelming for people whose character and financial future have been put in jeopardy.

Deepfakes may be complicated in the legal arena, but in the world of public perception, their role is much more forthright. In recent years, perception has become reality, and this notion rings resoundingly true for deepfake content. People who create and publish deceitful content have three main goals: to tarnish a person’s or company’s reputation, change a narrative, and ultimately influence the public. “Deepfakes are not usually done by big corporations. There is too much at stake. They are usually done by groups that have an intent to cause misdirection,” says Mr. Neumeister. The truth about events involving politicians, or any other public figure, has now become subjective. As with most viral posts, once a deepfake video is released, people will believe what is shown on social media unless they do their own research and find other sources that confirm or deny the deceptive material. There are two reasons for this: 1) the content confirms an already ingrained bias, and 2) some people would rather trust the information than actively look for sources that contradict the deepfake, whether from a lack of will or from information overload. Studies have shown it takes just a few seconds to convince people who already lean the way a deepfake portrays a situation to believe the content. Even if a fact-checked source proves the contrary, the damage to a public figure’s perception has already been done.

For instance, one of the most popular types of deepfake is centered around pornography. As discussed in Part 1, deepfake videos generated with Generative Adversarial Networks (GANs) use a specific algorithmic structure that ingests multitudes of footage and mimics the desired output. However, raw GAN output can be so blatantly clean and high-quality that it looks too polished to be authentic video. To further the illusion, creators use techniques such as adding background noise, changing the frame rate, and editing footage out of context to make the video seem more “realistic.” According to Mr. Neumeister, “The more you dirty it up, the harder it is to tell … and then you’ve got enough to make something convincing that a lot of people won’t fact check.” This unfortunate reality, the emergence of ever more types of deepfake content, can ruin the reputations of individuals and businesses across the board. Fortunately, there are methods for managing public perception.

A positive public image is one of the driving forces behind success, trust, revenue, and a growing client base. For this reason, malicious and manipulative material found on the internet is threatening. The internet allows everyone to become an author, which gives users the power to post a variety of content ranging from true stories to false narratives. When businesses and organizations find themselves in a fraudulent crisis, “it can impact shareholder value, damage an organization’s reputation and credibility in the eye of consumers and customers, and result in the dismissal or stepping down of a CEO, board members, and/or other key leaders,” states Shannon Wilkinson, an Online Reputation Management Expert. Individuals, who have less of a digital presence than organizations, are even more at risk of facing defamatory content. This raises the question: what crisis management strategies can businesses and individuals use to defend themselves against deepfake content?

One reason crises emerge for organizations and public figures is a lack of proactive planning. Luckily, Ms. Wilkinson has provided numerous tips on how to prioritize reputation management and crisis response to build a “powerful digital firewall.” For reputation management, Ms. Wilkinson recommends:

  • Understanding how one’s business and brand appears to the world.
    • “Each Google page has 10 entries, discounting ads…The fewer you ‘own’ – meaning ones you publish… – the less control you have over your online image,” according to Ms. Wilkinson.
  • Customizing LinkedIn and Twitter profiles.
  • Publishing substantive and high-quality content related to one’s field of expertise or organizations (white papers, blogs, articles, etc.).
  • Scheduling a professional photography session.
  • Creating a personal branding website (ex: http://www.yourname.com).

As for crisis response options, there are two key components businesses and individuals must consider before crafting a recovery plan:

  • Possessing an online monitoring system that alerts you when your brand is trending on social media (ex: Google Alerts and Meltwater).
  • Seeing conversations in real time to augment one’s social presence within those digital spaces.

Below are the recommendations regarding the actual response to a crisis:

  • Social media platforms like Facebook and Twitter seem to be the more popular spaces to respond to deepfake content.
  • Updating existing online information is a vital strategy for countering attacks.
  • Avoid engaging with anonymous commenters and trolls.
  • “Video is an excellent tool for responding to situations that result in televised content. A well-crafted video response posted on YouTube will often be included in that coverage. This strategy is often used by major companies,” a direct quote from Ms. Wilkinson.

The why behind creating, manipulating, and posting deepfakes for the world to see remains a moral dilemma. The motives for uploading such misleading content differ from one perpetrator to the next, but they are nefarious nonetheless. Legally, it remains an area of law where justice is not always served. Thanks to our Experts.com Members, Bryan Neumeister and Shannon Wilkinson, the what, when, how, and where of deepfake content have been explained by people who are well-versed in their respective fields. At the height of modern technology and the rampant spread of misinformation, our Experts advise all online users, entrepreneurs, public figures, and anyone with access to the internet to adequately fact-check the sources they encounter on the web. Those associated with businesses, or who happen to be public figures, should prioritize developing crisis management precautions. In Mr. Neumeister’s own words, “People can destroy a city with a bomb, but they can take down a country with a computer.”

Audio Forensics, Computer Forensics, Expert Witness, Social Media

Deepfake: An Introduction (Part 1)

Computer technology is one of the most pivotal inventions in modern history. Artificial Intelligence, smartphones, social media, and related devices have significantly enhanced living conditions in an unprecedented manner and connected the world with the click of a button. Technology is used in a range of occupations, from business-related fields to more creative professions. To say modern technology has been advantageous in recent decades is an understatement. However, every creation has its flaws. This multi-part blog series is intended to reveal one of those flaws, and a dangerous one at that: deepfake videos. This first post introduces deepfake videos and the steps taken by federal and state governments to identify such duplicitous content. Special insight on the subject is provided by our Experts.com Member and Audio, Video, and Photo Clarification and Tampering Expert, Bryan Neumeister.

Editing footage and photos is normal practice in our selfie-addicted new normal, but creating distorted content is a whole new ballgame. According to CNBC, deepfakes are “falsified videos made by means of deep learning.” These videos, images, audio clips, and other digital content are manipulated so that counterfeits pass as the real thing. What makes matters worse is that the internet allows anyone and everyone to create, edit, and post deceptive content. Deepfakes threaten cybersecurity strategists, police departments, politicians, and industries alike, because the purpose of making them is to spread misinformation, tarnish reputations, exploit evidence, and ultimately deceive an audience. The unfortunate reality is that deepfake videos depicting pornographic scenarios and manipulated political moments are the most common. For instance, a notable deepfake video posted by Buzzfeed in 2018 depicted former United States president Barack Obama slandering another former United States president, Donald Trump. The voice behind Obama, however, was none other than Jordan Peele. The video was intended as a moral lesson about how important it is to verify online sources, and to highlight the dangerous problem of trusting every post uploaded to the internet.

According to Mr. Neumeister, who specializes in this area of expertise, there are two types of artificial intelligence programs used to create deepfake videos: GANs and FUDs. He states, “GANs (Generative Adversarial Networks) are used by professionals, and FUDs (Fear, Uncertainty, and Doubt) are the homemade ones.” Although FUD videos garner more attention among internet users, the real menace to society are the videos made from GANs.

Videos made with Generative Adversarial Networks rely on an algorithmic framework designed to take in input data and mimic the desired output. One can visualize how GANs work through the viral Tom Cruise TikTok deepfake. According to NPR, the creator of that deepfake, Chris Ume, fed an accumulation of Tom Cruise footage into a machine-learning algorithm. This allowed him to give a digital face transplant to the Tom Cruise lookalike actor he hired for the video. Ume input a plethora of videos to produce the desired output: a realistic face swap. Mr. Neumeister adds that the realism of a deepfake correlates with the amount of footage its creator can acquire. Specifically, “the more bits of video clip you have to put together, the more accurate you can make facial movements, ticks, etc.” By this logic, Ume’s Tom Cruise deepfake looks more realistic than homemade videos that lack such algorithmic machinery.
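As a toy illustration of the adversarial setup described above, the sketch below pits a tiny generator against a tiny discriminator in one dimension: the generator learns to produce numbers that resemble samples from a “real” distribution, while the discriminator learns to tell the two apart. Everything here (the distributions, the learning rate, the finite-difference gradients) is an assumption chosen for brevity; production deepfakes use deep convolutional networks trained on enormous amounts of footage.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # clipped to avoid math.exp overflow on extreme inputs
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

REAL_MEAN = 4.0  # stand-in for "real footage": samples cluster near 4.0

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

def generate(g, z):
    # generator: turns random noise z into a fake sample
    w, b = g
    return w * z + b

def discriminate(d, x):
    # discriminator: estimated probability that sample x is real
    a, c = d
    return sigmoid(a * x + c)

def d_loss(d, g, reals, zs):
    # discriminator wants real -> 1 and fake -> 0
    total = 0.0
    for r, z in zip(reals, zs):
        total -= math.log(discriminate(d, r) + 1e-9)
        total -= math.log(1.0 - discriminate(d, generate(g, z)) + 1e-9)
    return total / len(reals)

def g_loss(d, g, zs):
    # generator wants its fakes scored as real -> 1
    return -sum(math.log(discriminate(d, generate(g, z)) + 1e-9)
                for z in zs) / len(zs)

def grad_step(loss_fn, params, lr=0.05, eps=1e-4):
    # finite-difference gradients keep this toy free of autodiff machinery
    base = loss_fn(params)
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((loss_fn(bumped) - base) / eps)
    return [p - lr * g for p, g in zip(params, grads)]

d_params = [0.1, 0.0]  # discriminator weights
g_params = [1.0, 0.0]  # generator weights

for step in range(300):
    reals = [real_sample() for _ in range(16)]
    zs = [random.gauss(0.0, 1.0) for _ in range(16)]
    # the two networks take turns improving against each other
    d_params = grad_step(lambda p: d_loss(p, g_params, reals, zs), d_params)
    g_params = grad_step(lambda p: g_loss(d_params, p, zs), g_params)
# after a few hundred alternating steps, the generator's outputs tend to
# drift toward REAL_MEAN (no guarantees -- toy GANs are notoriously unstable)
```

The key design point is the tug-of-war: each player’s loss is defined in terms of the other’s current behavior, which is what lets a GAN reproduce subtleties of the training footage that no hand-written rule could capture.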

Because viewers typically see deepfakes in politics and pornography, federal and state governments have recently implemented laws to counteract deepfake content creation and distribution. President Trump signed the first federal deepfake law near the end of 2019. The legislation is part of the National Defense Authorization Act for Fiscal Year 2020 (NDAA), a $738 billion defense policy bill passed by both the Senate (86-8) and the House (377-48). The NDAA’s two deepfake provisions require “(1) a comprehensive report on the foreign weaponization of deepfakes; (2) requires the government to notify Congress of foreign deepfake-disinformation activities targeting US elections” (JD Supra). The NDAA also established a “Deepfakes Prize” competition to promote research into deepfake-detection technologies. At the state level, multiple states have passed laws criminalizing specific deepfake videos (JD Supra):

  • Virginia: first state to establish criminal penalties on the spread of nonconsensual deepfake pornography.
  • Texas: first state to ban creation and dissemination of deepfake videos aimed to alter elections or harm candidates for public office.
  • California: victims of nonconsensual deepfake pornography can sue for damages; candidates for public office can sue organizations and individuals that maliciously spread election-related deepfakes without warning labels near Election Day.

Although the Trump administration and various states have established policies against deepfakes, such content remains ubiquitous on almost all online platforms. How can users at home distinguish authentic content from deepfakes?

Mr. Neumeister provides a few tips and tricks for detecting a deepfake. One giveaway is mouth movement, analyzed through phonemes (units of sound) and visemes (the mouth shapes that accompany them). Mouths move in predictable ways when people speak. For instance, words like mama, baba, and papa start with a closed mouth, while words like father and violin start with the front teeth pushing against the bottom lip. Consonants and vowels likewise sound a certain way when pronounced correctly. “Words with t, f, n, o, and wh, are pretty good for tells,” adds Mr. Neumeister. When analyzing video, the footage of a person speaking is broken down into roughly six to ten frames per sound to determine whether the person talks the same way in the video under analysis as in other, verified videos. Another of Mr. Neumeister’s tips is to watch videos with context in mind. Viewers should pay attention to background noise, crowd ambiance, and the cadence of a speaker’s sentences. Authentic, original content will by nature have realistic frames. Users can detect a deepfake by sensing dissonance in, for instance, a speaker’s proximity to the microphone or the size of a room. For users at home or on the go, these tips are crucial for distinguishing verified sources from manipulated misinformation.
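The phoneme/viseme tell can be sketched as a simple consistency check. Everything below is hypothetical illustration, not a real forensic tool: the viseme labels, the per-frame tuples, and the `flag_mismatches` helper are all assumptions standing in for what actual lip-tracking software would produce.

```python
# Toy phoneme/viseme consistency check -- purely illustrative.
# Bilabial sounds (/m/, /b/, /p/) require a closed mouth;
# labiodental sounds (/f/, /v/) bring the front teeth to the bottom lip.
EXPECTED_VISEME = {
    "m": "closed", "b": "closed", "p": "closed",
    "f": "teeth-on-lip", "v": "teeth-on-lip",
}

def flag_mismatches(frames):
    """frames: (frame_number, phoneme, observed_mouth_shape) tuples,
    as they might come from a hypothetical lip-tracking tool.
    Returns the frame numbers where the mouth contradicts the sound."""
    suspicious = []
    for frame_number, phoneme, observed in frames:
        expected = EXPECTED_VISEME.get(phoneme)
        if expected is not None and observed != expected:
            suspicious.append(frame_number)
    return suspicious

# "mama" should begin with a closed mouth; an open mouth on /m/ is a tell
clip = [(1, "m", "open"), (2, "a", "open"), (3, "m", "closed"), (4, "a", "open")]
print(flag_mismatches(clip))  # [1]
```

A real analysis works frame by frame in exactly this spirit, comparing the mouth shapes in the questioned video against how the same speaker forms those sounds in verified footage.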

The emergence of deepfake content, its continuously improving technology, and the spread of disinformation is a multifaceted and complex problem. This blog post has only scratched the surface, so stay tuned for part 2 for a more in-depth read.

Communication, Information & Communication Technology, Social Media

The Role of Section 230 in the Free Speech Debate

After a tumultuous year full of uncertainty and angst, the start of the new year, unfortunately, followed suit. Last week’s raid on the Capitol Building, and Donald Trump’s resulting removal from various social media apps, has put the debate over free speech in full swing. Some critics say Trump incited violence and rightfully deserved his permanent ban from Twitter. Others defend the President’s speech and are calling for the repeal of Section 230 of the Communications Decency Act. This week’s post defines Section 230 and its role in the free speech debate.

To note, the Communications Decency Act was created to place restrictions on speech online. Because Internet users opposed these restrictions, Section 230 was enacted in 1996 (Electronic Frontier Foundation). According to the Federal Communications Commission (FCC), “Section 230 provides websites, including social media companies, that host or moderate content generated by others with immunity from liability.” In other words, these companies do not bear responsibility for their consumers’ speech. Section 230 does not apply to federal criminal law or intellectual property claims. Since Twitter is a private company, this legally legitimizes its decision to permanently suspend the President’s account for allegedly spreading misinformation about the election in violation of its Terms & Conditions. However, this turn of events has left many moderates, conservatives, and Republicans feeling silenced.

Trump’s Twitter ban was the catalyst for the removal of Parler (a social media platform with a primarily conservative following) from the Apple and Amazon app stores. Parler’s removal stems from its anti-censorship brand: it does not moderate its users’ posts. Unlike Twitter, which moderates speech under Section 230’s protections, Parler has the right as a private company to do the opposite. This raises the question: is Section 230 relevant to free speech?

The First Amendment “guarantees freedoms concerning religion, expression, assembly, and the right to petition,” (Cornell Law School). Congress is prohibited from making laws which limit an individual’s First Amendment right, whether it is exercised in public physical space or on the internet. From the looks of Trump’s removal from Twitter, it is understandable why conservatives would be upset. The concept of a social media corporation eradicating the leader of the free world’s personal account is shocking, and shows just how much power these social media apps have over what their viewers are allowed to see. For many, these actions by Twitter and Facebook add even more salt to the wounds of the political divide created this past year. At face value, it makes sense why moderate and right-leaning voters would want to repeal Section 230. However, revoking Section 230 is much more threatening to the First Amendment than one might think (USA Today).  

If Section 230 were abrogated, online businesses would have to monitor speech far more aggressively. Websites would become liable for every individual social media post, photo, blog, comment, and video a person publishes. Hosting user-created content would become a precarious endeavor, because these companies could be sued for every contentious post, which is unworkable considering these websites have accumulated millions of users worldwide. If social media companies and the like took on an editorial role over user-created content, it would end real-time communication, limit expression, tarnish social media providers’ reputations, and could even shut them down under endless litigation. In the event Section 230 is repealed or amended, Congress must be mindful of its constitutional duty not to implement laws that limit the freedoms of American citizens and, unintentionally, chill protected speech.

Section 230 may protect a business’s right to disclaim liability for its users’ posts, but it does not protect a company from antitrust lawsuits. Parler sued Amazon in response to its removal from Amazon Web Services, Amazon’s subsidiary providing on-demand APIs and cloud-computing platforms (Reuters). Amazon claims Parler’s failure to moderate speech played a large role in the planning of the siege of the Capitol Building. Parler, which says it removed most of the troublesome posts, responded by accusing Amazon of breaching its contract by forcing the social media app to shut down. Parler had been warned about Amazon’s intolerance of offensive speech, yet it argued that any of its users’ posts that do not incite premeditated action are protected under the First Amendment. As this is an ongoing case, the outcome of the lawsuit will not be decided for a long time to come.

Ultimately, Section 230 is arguably the most integral component of the free speech debate considering the recent events of Trump’s Twitter ban and Parler’s lawsuit against Amazon. Free speech within the realm of the internet is a very different arena compared to speech in public physical spaces. As unfortunate as the Capitol Building raid was, it brought to light important nuances of the First Amendment as it relates to the internet.  

Computer Security, Computers, Information & Communication Technology, Social Media

TikTok: Is It The Next Cyber-Security Threat?

TikTok has been the most downloaded app globally in 2020. Although it has existed since 2018, TikTok surpassed 2 billion downloads back in April, during the apex of the new socially-distanced reality engendered by the pandemic. The ability to share and create content such as comedy skits, dance challenges, and lip-syncing clips, has appealed to various age groups around the world, especially teenagers. However, TikTok has been at the center of controversy for raising cyber security concerns not just here in the United States, but around the world. 

The problem with TikTok is twofold. The first issue is that the app is owned by a Chinese company, ByteDance. Because ByteDance is not American-based, it does not follow U.S. federal and state consumer privacy laws. TikTok announced that data collected from American users is backed up in Singapore, which is not subject to Chinese law. Though true, it is still possible the Chinese government could pressure ByteDance to relinquish its user information.

Second, TikTok has accumulated a large amount of data about the types of videos Americans watch and post. Because the app has become an important platform for political activism, people worry the Chinese government could use it to influence public opinion and control speech. For instance, according to both The Guardian and The Intercept, TikTok officials last year told employees to censor content considered sensitive to Beijing. TikTok claimed those policies were outdated when the reports were released. In response to the incident, the company established a “transparency center” where security and technology experts from around the world can observe its policies.

Despite TikTok’s official statement, President Donald Trump issued an Executive Order in August declaring the prohibition of all business with ByteDance. Unless ByteDance announces a plan to sell TikTok, the app will be banned on September 29th, 2020. Several American agencies and companies, such as the U.S. Army and Wells Fargo, have been proactive, requiring servicemen and employees to uninstall the app in response to these security concerns. Other countries, like India, have followed suit, banning the app altogether. 

Many people, including computer security experts, believe banning the app in the United States would be an extreme course of action. Not only would it invite questions about censorship in a free country right before an election, but it would affect various companies here in the U.S. who use the platform for marketing purposes. A solution technology experts have mentioned is to implement policies for protecting consumer privacy and measures to minimize data misuse from companies around the world. Currently, with the exception of a few state laws, the responsibility of American privacy and data sharing belongs to companies such as TikTok, Facebook, and Twitter. 

On September 14th, 2020, ByteDance accepted Oracle’s proposal to become its new technology provider. This means Oracle would be held accountable for protecting all user information collected through TikTok. Although the deal is pending approval by the U.S. government, it would keep businesses invested in TikTok afloat and allow up to 100 million users to continue posting creative content. Treasury Secretary Steve Mnuchin told CNBC that the government would review the proposal this week, as its top priority is to keep American user data from the Chinese Communist Party.

Four days later, the U.S. government announced the removal of TikTok and fellow Chinese app WeChat from the American app stores run by Apple and Google. Distribution, updates, and maintenance of the apps will be barred unless the Trump administration, TikTok, and Oracle can close a deal by September 20th. Commerce Secretary Wilbur Ross told Bloomberg that WeChat would be shut down for practical purposes, though Americans could still use the app for payments in China and to talk with loved ones overseas. He added that TikTok’s official shutdown is scheduled for after November 12th if the deal with Oracle falls through.

On Monday, September 21st, 2020, President Trump announced his approval of the deal between Oracle and TikTok. Under the proposed deal, Oracle and Walmart will share a 20% stake in TikTok Global, a new company headquartered in the United States. ByteDance will own 80% of TikTok Global and allow Oracle to review its source code; ceding algorithms and other technologies was not included in the deal. Even with the source-code review, the arrangement is not fool-proof, as ByteDance could instruct the code to secretly send data back to China. Trump’s approval has postponed the ban for now, but the removal of TikTok from American app stores is still in effect. As relations between the United States and China remain tumultuous, the final outcome of the TikTok debate remains to be seen.

Expert Witness, Intellectual Property, legaltech, Social Media

Tinder v. Bumble: Swipe Right for Your Next Patent Infringement Expert Witness

Last Friday I was sitting at my desk trying to find the next topic to blog about. Friday was an incredibly slow news day and nothing had piqued my interest. So I reached out to some lawyer-friends in the LegalMinds Mastermind Group for some ideas. I received a lot of feedback with some really great ideas. However, this Tinder v. Bumble lawsuit sounded like the most fun. A special thanks to patent lawyer, Karima Gulick, for the idea.

In fact, I had not even heard about this lawsuit until Karima mentioned it. It seems that Tinder’s parent company, Match Group (think Match.com), has decided to sue Bumble for patent infringement. For those who haven’t heard of Bumble, it is another popular dating app that allows women to make the first move. It seems they are now using very similar features to Tinder. An article in The Verge described the two patents at issue:

“…one called ‘Matching Process System and Method,’ in which users swipe cards and mutually select one another, as well as ‘Display Screen or Portion Thereof With a Graphical User Interface of a Mobile Device,’ which it describes as an ‘ornamental aspect’ of Tinder’s App. The lawsuit also points to similarities between each companies’ apps, and Bumble’s descriptions of ‘swiping’ run afoul of Tinder’s registered trademarks.”

It seems Tinder is accusing Bumble of infringing on the item that really made Tinder famous (i.e. swiping). Swiping did away with all that scrolling, reading, and learning about a potential romantic interest. Who has time for that? Even if you have time, who wants to do it? Instead, Tinder allowed you to make the important dating decision based on looks and looks alone, if you’re that shallow. It does appear there is a short biography portion some might want to read, but only if the potential match fits your physical requirements per their photo.

“Swipe right” and “swipe left” became a part of our nomenclature, often used outside of dating. I’ve heard comics and late show hosts use the terminology. There is no doubt in my mind, those using the terminology associate it with Tinder. Alas, Bumble decided to use the feature as well. Probably because users liked picking their mates via the swipe method.

There are some further accusations as set forth in this article by Recode, “[Tinder] also claims that early Bumble executives Chris Gulczynski and Sarah Mick, who both previously worked at Tinder, stole ‘confidential information related to proposed Tinder features,’ including the idea for a feature that lets users go back if they accidentally skip someone, according to the suit.” This is important, because when you’re swiping for volume (because it’s a numbers game) and get into a zone you might accidentally eliminate someone you find attractive. You need to undo that ASAP.

Finally, there is the issue of Match/Tinder trying to purchase Bumble last year. They offered $450 million, which was turned down, due to the acrimonious relationship between the two companies. Is Tinder using this case to apply some pressure on Bumble, thereby encouraging a sale? Quite possible.

If the case actually moves ahead and a sale is not negotiated, we can expect to see some expert witness participation. What kind of experts? I wish I could encourage you to swipe right to view them. However, you just have to keep reading!

Intellectual Property / Patent Infringement:

Intellectual property is a sort of wide-ranging term for expert witnesses. A broad range of expertise fits into the intellectual property category: patents, patent infringement, trademarks, trade dress, copyrights, licensing, trade secrets, and more.

In the Tinder v. Bumble matter, it appears they are only suing over a couple of patents, and The Verge told us what those patents are. Both appear to be connected with the user interface, so I anticipate we will see intellectual property experts with software, programming, and design engineering backgrounds. There is a potential need for electronic engineering expert witnesses, but I think that is less likely, as hardware doesn’t appear to be at issue in this case.

Trademarks:

The lawsuit also claims that Bumble’s use of the word “swiping” infringes on Tinder’s registered trademarks. This legal dictionary from Cornell Law School’s Legal Information Institute describes a trademark as follows, “A trademark is any word, name, symbol, or design, or any combination thereof, used in commerce to identify and distinguish the goods of one manufacturer or seller from those of another and to indicate the source of the goods.”

The Legal Information Institute also tells us that “Two basic requirements must be met for a mark to be eligible for trademark protection: it must be in use in commerce and it must be distinctive.”

As I mentioned above, I knew that “swiping” was something associated with Tinder and I know that Tinder is a subscription based dating service. So, according to this layperson, the mark is being used in commerce and I recognize it as distinctive to Tinder. Now that I’ve made this information public, I cannot imagine Bumble wanting me on the jury. Luckily, the case has been filed in the US District Court in Waco, Texas.

Furthermore, a trademark expert witness retained by Bumble may be able to provide information indicating that “swiping” is not distinctive. In fact, the terminology may be quite prevalent in software.

A Similar Matter?

The software matter I would compare to this lawsuit is the “Stories” dispute between Snapchat and Instagram. Snapchat was the first social media platform to use the Stories feature, allowing users to post a continuing series of video clips or photos to create an ongoing story. Instagram copied it nearly outright and even admitted that it took the idea from Snapchat. To my knowledge, this has not resulted in litigation. However, the software-based features seem nearly identical, and I wouldn’t be surprised to see a patent infringement and trademark dispute between Facebook (Instagram’s parent company) and Snapchat.

Requests:

As I am not practicing in this field, I think it would be great to get some feedback from lawyers who regularly deal with patents and trademarks.

I have asked Karima Gulick of Gulick Law and Joey Vitale of Indie Law to provide some real insight, rather than lay punditry, in the matter of Tinder v. Bumble.

UPDATE:

Intellectual property and patent attorney, Karima Gulick, has provided her insight about this case on her blog. Here is her blog post: Tinder v. Bumble: Patent dispute in app dating paradise.

Copyright and trademark attorney, Joey Vitale, has provided his insight about this case on his blog. Here is his blog post: Be careful if you “swipe”: trademark battles in Tinder v. Bumble.


Expert Witness, Lawyers, legaltech, Social Media

Technology and Awareness: How to bridge the access to justice gap?

Our final blog post of 2017 highlighted an upcoming event I’m really enthusiastic about. I’ll be moderating a panel at the ABA-GPSolo/GLSA 2018 Joint Spring Meeting (April 25-28, 2018) in New Orleans. This is the GLSA’s (Group Legal Services Association) annual educational conference.

Why am I so excited? Well, it will be held during Jazz Fest in New Orleans! Why wouldn’t I be excited?!

In all sincerity, I’m thrilled to be sharing the panel with some awesome lawyers. Our group has worked diligently to create a valuable presentation for our audience.

Five individuals, with little prior knowledge of each other, have come together through solid teamwork to create a coherent presentation underlining the obstacles facing client access to justice and some steps to improve access.

What started as “legal technology and the access to justice” has morphed into a topic of technology and awareness building to bridge the access to justice gap.

I can’t wait to meet my teammates and the readers of this blog post in person! Allow me to introduce you to the team:

Sarah Kieny:

Sarah is a shareholder in the Riggs, Abney, Neal, Turpen, Orbison & Lewis law firm and has been with the firm since 1997. Sarah received her J.D. from Creighton University Law School in 1994 and a BA in Religious Studies from Regis College in Denver, Colorado, in 1991. Sarah is the firm’s LegalShield Supervising Attorney, managing 20+ front-line LegalShield attorneys and staff in day-to-day operations. She has also spearheaded the firm’s involvement in raising community awareness about the availability of legal services. Sarah has coordinated a quarterly “Law Day” program with Denver’s nonprofit organization, Warren Village, for the specific purpose of offering legal access to single parents who are transitioning to self-support through education, training, and commitment.

Wayne Hassay:

Wayne is the managing partner of Maguire Schneider Hassay, LLP. He joined the firm in 1998 and became a partner in 2004. He has been with the firm almost 20 years, practicing in the areas of personal injury, probate, and collections, and he lectures regularly on the non-traditional delivery of legal services. His firm serves the legal needs of over 36,000 Ohioans as part of the legal service plan LegalShield.

Wayne and I are kindred spirits of a sort, although we approach legal technology and access to justice a bit differently, since I don’t practice. In a Law Practice Today article from last year, Wayne stated, “Client-facing tech is the norm in so many professions. Can you imagine working with a bank that does not have client-facing technology? No. Yet law lags far behind.” Let’s work to correct this, Wayne!

Kerry Lavelle:

Kerry began his own practice, Lavelle Legal Services, in 1989, focusing primarily on matters of tax law. Today the firm, now known as Lavelle Law, Ltd., has grown to include 22 attorneys with practice groups in tax, business law, commercial real estate, estate planning, criminal law, home health care, small business, gaming law, bankruptcy, corporate formation, family law, litigation, grocery law, employment law, residential real estate, securities, and LGBT law. He is the author of The Business Guide to Law: Creating and Operating a Successful Law Firm, published by the Division. In 2016, Kerry was designated a Top 100 Attorney in Illinois by Super Lawyers. In 2015, his firm was one of 13 law firms nationally to receive the Beacon of Justice Award for pro bono service from the National Legal Aid & Defender Association (NLADA).

Tony Clayton:

Tony is the managing partner of Clayton, Fruge & Ward. He graduated from Southern University’s Law Center in 1991 and was admitted to the Louisiana State Bar that same year. While establishing his private practice, Tony has had the privilege of also being involved in other areas of the legal profession, including District Court Judge for the Louisiana Supreme Court and Special Prosecutor for East Baton Rouge Parish.

Moderator / Panelist, Nick Rishwain:

I am the Vice President of Client Relations & Business Development for Experts.com, an online marketing platform for expert witnesses and consultants. In my free time, I am quite active in social media. In 2015, I founded and co-host a live video vlog, LegalTechLIVE, which advocates for and highlights the advancements in the legal technology sector. Additionally, I co-host SocialChatter, a live, weekly, social media news show.

Lawyers, Litigation, Social Media

LegalTech and Access to Justice: Panel at The ABA GPSolo/GLSA Spring Meeting

As 2017 comes to an end, I am looking at what we accomplished this year and what is on our “to-do list” for 2018. There is one item I’m very excited about. I’ll be moderating a panel at the ABA-GPSolo/GLSA 2018 Joint Spring Meeting (April 25-28, 2018) in New Orleans. This is the GLSA’s (Group Legal Services Association) annual educational conference.

Legal Technology:

The panel is covering the topics of legal technology (legaltech) and access to justice. Many may wonder why I’m excited about this. If you are not in the legal or legaltech business, I understand the topic may seem dry. I’ve been working in legaltech for nearly 8 years at Experts.com and one of my hobbies includes vlogging about legal technology. I am deeply passionate about the impact of technology on the practice of law and delivery of legal services. In essence, I get to host a panel on a topic that fascinates me.

There are a lot of exciting advancements taking place in legaltech. You may have heard about topics such as artificial intelligence, blockchain, and chatbots. These subjects have been dominating legal news for the last couple of years. The innovations are very cool, at least to an admitted nerd like myself. However, our panel will not be taking a deep dive into these legaltech topics. A friend and colleague, Tom Martin of LawDroid will be at the conference and he’ll be discussing running his practice virtually while vacationing in Europe. I highly recommend chatting with Tom about chatbots and how they can help to run a lean, efficient practice as well as improve access to justice.

Access to Justice:

As much as I’d like to have a more involved discussion about the cutting-edge technologies impacting the practice of law, there are less sophisticated, readily accessible technologies that lawyers and law firms can employ to improve access to justice. In fact, many of these technologies are already used by legal practitioners. I’ll be hosting the panel with four practicing lawyers, with varying levels of technical aptitude, who are actively improving consumer access to justice.

To learn more about the magnitude of the access to justice problem, I encourage you to visit the US Department of Justice, Office for Access to Justice and this page from the United Nations and the Rule of Law.

Here is a brief breakdown of the items identified by our panel for discussion to improve access to justice within the United States:

  • Cost of legal services
  • Consumer awareness of pro-bono services
  • Time constraints for lawyers
  • Technologies used to improve access to justice

As mentioned above, you and your firm already have access to many of the technologies we’ll be discussing. It is just a matter of how the technology is used to improve consumer access to legal services.

Here are a few of the technologies we will cover:

  • Open source and cloud-based services
  • Mobile technology
  • Social media
  • Prepaid legal services

If you are a solo-practitioner looking to improve client access to justice, what would you want to learn about in this presentation?

To my friends and colleagues in the legaltech space, what other legacy technologies should be covered?

Expert Witness, Livestreaming, Marketing, Social Media

Social Media Marketing World 2016 – Lessons Learned for Expert Witnesses

Held at the San Diego Convention Center in San Diego, California, on April 17, 18, and 19, 2016, Social Media Marketing World 2016 (SMMW16) was well-received, with over 3,000 participants from around the world. Networking was the name of the game, with recognized brands from Airbus, Allergan, and Amazon to Verizon, Walmart, and World Vision. Platform representatives from Facebook, Google, LinkedIn, Twitter, and many others were front and center imparting valuable social media marketing tactics.

There were a couple of major takeaways from this year’s event: several things you need to know in order to better promote your expert witness services.

  • You Need To Be On Social Media:

If you are not yet active on social media, you are losing precious brand awareness, engagement, and community-building opportunities. At a time when consumers and clients are more informed than ever before, you need to have a social presence so your customers can “know, like, and trust you.” Attorneys cannot get to know the real, authentic version of you if you are absent from these platforms.

It is not necessary to be present on every available social media platform. Having a strategy is important. If you are looking to target attorneys, it is important to know how to find them and how to get their attention. We have services to assist you in building your presence on the correct social platforms.

  • Build An Online Community

A community on social media is not terribly different from an offline community. There are leaders, managers, and community members. Usually, these individuals are interconnected by some shared purpose (or shared interest). For example, Experts.com is active in the live video / live-streaming community. There are many active Members in this community, and the one thing we all have in common is that we participate in creating live video. Although our businesses may be different, we still support others in the live-streaming community by sharing their content.


Mitch Jackson, Esq. and Jeff Weinstein, Esq.

The priority is building relationships within your community. If you are regularly creating content and posting it to social media platforms without a community, you may find your content is not being read, watched, shared, or cared about. If you build those relationships and others care about you individually, they will share your content.


Ivan Raiklin, Esq. and Nick Rishwain

It was great to see several of those in our live-streaming community in attendance at the #SMMW16. Just a few with whom we were able to spend significant time include:

  • Live Video is the Future of Marketing

Here are just a few facts that were shared by Mike Stelzner, CEO of the Social Media Examiner (host of the conference).

  • 73% of marketers use video in 2016
  • Only 14% of marketers are using Live Video
  • In May of 2015, there were 2 billion videos viewed daily
  • In February of 2016, there were 8 billion videos viewed daily

The following platforms have bet big on live video (a.k.a. live streaming or social video): Facebook, Twitter, YouTube, Snapchat, and Blab. Each of these companies has a major live-stream component or offers live video as its entire product. Facebook now gives priority to video content in its news feed; other content will fall below live video in your feed.

Live video allows you to increase the “know, like, and trust” factor better than anything else, according to social selling strategist Kim Garst.

Our friend, Mitch Jackson, Esq., has said he wants to “see and hear the expert” before hiring him or her as an expert witness. Seeing and hearing an expert witness provides endless value to attorneys as they get an idea of how you sound and perform. Live video allows you to do this authentically.

As live video is the future of marketing, we highly recommend getting comfortable with live streaming sooner rather than later. Join Us to become a pioneer in the live video community. Improve your visibility, professionalism, and authenticity with live video marketing. If you do not know how to begin, reach out to us at info@experts.com.

Nick Rishwain, JD.
Vice President of Client Relations, Experts.com.

Advertising, Consultants, Expert Witness, Lawyers, Livestreaming

Expert Witnesses Embrace Digital Media Platforms – Interview with Attorney Mitch Jackson

On October 14, 2015, Experts.com interviewed Expert Witness and 2013 California Lawyer of the Year, Mitch Jackson, on the benefits of Expert Witnesses embracing digital media to promote their services. To watch the interview, click the link below.

Mitch Jackson was admitted to the California Bar in 1986 and immediately opened up his own practice representing victims of personal injury and wrongful death. In 2009, Mr. Jackson was named Orange County “Trial Lawyer of the Year,” by the Orange County Trial Lawyers Association. In 2013, he received the California Lawyer Attorneys of the Year (CLAY) Award for litigation. According to California Lawyer Magazine, the CLAY Award recognizes attorneys who have changed the law, substantially influenced public policy or the profession, or achieved a remarkable victory for a client or for the public and have made a profound impact on the law. Mr. Jackson is also an expert witness in legal malpractice matters.

Mr. Jackson is an active social media influencer with a strong presence on Twitter, Facebook, Periscope, Blab, and Instagram, among others. In addition to his legal practice, Mitch Jackson maintains several websites promoting livestreaming, communication, Rotary service, and, most importantly, “Being Human.” To learn more about Mitch Jackson, his practice, and his social influence, visit the following sites:

http://jacksonandwilson.com/
http://streaminglawyer.com
http://human.social/

Experts.com was established to give professionals a platform to showcase their areas of expertise. Since 1994, we have been providing millions of users worldwide with access to specialized knowledge. We believe our members should have control over monetizing their specialized knowledge and expertise. In this day and age of high technology, there is no need for a broker or middleman to mark up fees or market your expertise. Put your best foot forward with Experts.com.