
Expert Witness Testimony: The Risky Role of AI

Introduction

A recent New York case shed light on the growing complexities of AI in expert witness testimony. An expert witness used an AI agent to estimate damages in a real estate dispute—only to have the court challenge the reliability of both the AI and the damages calculations in the judge’s order.

The judge's order issued a stern warning about the need for disclosure before any AI-generated evidence is admitted. This incident serves as a cautionary tale for expert witnesses and attorneys navigating the uncharted waters of AI in legal proceedings.

I learned about this case from an Ars Technica article by Ashley Belanger. I highly recommend you read the article, “Expert witness used Copilot to make up fake damages, irking judge,” in addition to my description of the case below.

Trust & Estates Expert, Not Real Estate:

The expert in this case specializes in trust and estate litigation. More specifically, he is knowledgeable in fiduciary duties, trustee standards of care, and prudent investor rules. The expert witness used Microsoft’s Copilot AI chatbot to assist in the damage calculations. This poor decision backfired when the court scrutinized the reliability of the AI-generated evidence and the expert’s methodology.

The judge’s order emphasized AI’s evolving role in legal proceedings along with its inherent reliability issues. The court made it clear that before any AI-generated evidence can be admitted, full disclosure is critical. This incident is not an isolated one; there have been several notable cases where attorneys have submitted briefs citing non-existent case law generated by AI. These fabricated citations are what you have likely heard described as “hallucinations.” It was only a matter of time before an expert witness made a similar misstep.

Keep within Your Expertise:

While the use of AI was problematic and rightly admonished by the court, it was not the primary issue in this case. The expert’s lack of relevant expertise in real estate appraisal, valuation, and damages was the root of the problem. As an expert witness, it is crucial to never opine outside your area of expertise. If a case falls outside your skillset, it is your responsibility to inform retaining counsel and either decline the assignment or limit your involvement to those aspects that are firmly within your expertise.

A close friend of mine and of Experts.com, Mitch Jackson, Esq., a seasoned lawyer and expert, recently declined an expert witness assignment we presented to him for this very reason. It is better to turn down a case than to risk your reputation and the integrity of the proceedings by venturing into unfamiliar and inexpert territory.

The Dangers of Over-Reliance on AI:

In this case, the expert lacked the necessary expertise and relied on Microsoft Copilot to “cross-check” his calculations. Unfortunately, the AI-generated numbers were incorrect, further compounding the issue. This serves as a stark reminder that AI tools, while powerful, are not infallible and should be used with extreme caution.

If you don’t have relevant expertise, don’t know the proper formulas, and can’t verify the outputs, do not use the AI tool. Just as we used to check our math homework manually, the same principle applies to AI-generated results. If you cannot verify the accuracy of the output, it has no place in your expert report, deposition, or testimony.

The Reputational Risk:

Stories like this one, covered by reputable tech publications like Ars Technica, can have a devastating impact on an expert witness’s practice and reputation. In the digital age, news spreads quickly, and a single misstep can haunt an expert for years to come. It is crucial for experts to prioritize their credibility and integrity above all else, as their livelihood depends on it.


Best Practices for Expert Witnesses

To avoid falling into the same trap as the expert in this case, follow these best practices:

  1. Stay within your area of expertise: Only accept cases that fall squarely within your field of specialization.
  2. Verify your results: Always double-check your work and ensure that you can support your findings.
  3. Educate yourself on AI: As AI becomes more prevalent in legal proceedings, experts must stay informed about its capabilities and limitations.
  4. Prioritize your reputation: Your credibility is your most valuable asset as an expert witness. Protect it at all costs.

Best Practices for Attorneys:

  1. Vet experts thoroughly: Ensure that the expert has the necessary qualifications and expertise for all aspects of your case.
  2. Communicate expectations clearly: Discuss the scope of the expert’s work and inquire as to appropriate methods and tools.
  3. Review expert reports carefully: Scrutinize the expert’s findings and methodology, asking questions and raising concerns as needed.
  4. Stay informed about AI developments: Keep abreast of the latest advancements and legal precedents involving AI in expert testimony.

In Summation:

The case above serves as a bold warning of the challenges and risks associated with uninformed use of artificial intelligence in your expert witness practice. As AI continues to evolve and become more integrated into legal practice and court proceedings, it is crucial for expert witnesses and attorneys to approach it with caution and diligence.

The Fun Stuff:

This article is an elaboration of a LinkedIn post I wrote approximately one month ago. To really bring the experience full circle, I fed the text of the LinkedIn post into my chosen AI chatbot, LawDroid CoPilot. So what you read above is a human-reviewed article written by AI. Unlike the expert in the case above, I did check my outputs. Since it was my post to begin with, I was familiar enough with the content to check my own work, and I also had a teammate peer review before posting.

Now, I want you to be the judge. If you notice any glaring errors or mistakes in the article, please bring them to my attention so that we can use this as a fun activity to improve our AI-assisted outputs.


DISCLOSURE: LawDroid is a company owned and operated by my friend and colleague Thomas G. Martin. I’ve known Tom for nearly a decade through our efforts in the legal technology space. I just like to support my friends in their endeavors. The LawDroid CoPilot product was named CoPilot before Microsoft released its Copilot.


Robot Rights and Liability: Do they need legal rights? Here’s what one expert witness has to say…

Have you been following the advancements in artificial intelligence and robotics? There are some really fascinating developments in these fields. Just this week I’ve read about artificially intelligent systems used to identify people likely to commit a crime (before it happens); robotics systems being used in construction; unmanned aerial vehicles; self-driving cars; and, of course, it seems a week cannot go by without a new headline about sex robots.

Last Friday, I found some news stories that were really interesting. According to this article from The Daily Mail, a 2017 report contained “a paragraph of text buried deep in a European Parliament report, that advised creating a ‘legal status for robots.'”

I found this quite fascinating and had to dig deeper. Why would we need to develop a legal status for robots? What would be the point? An article in Futurism stated, “If a robot, acting autonomously injures or otherwise wrongs a human, who will be held responsible? Some European lawmakers think that the best way to resolve this question will be to give robots ‘electronic personalities,’ a form of legal person-hood.”

To me, there is a simple answer to this topic. The owner and/or the manufacturer would be held liable. Why would society need something beyond existing negligence, product liability, and consumer protection laws?

According to the report, the proposal does not seek to give robots legal status equal to humans. Rather, it would give them a status similar to corporations. The concern doesn’t seem to apply to your automation-style robots, but rather to those capable of self-learning.

I contend we do not need new theories of liability to address this issue. It should be handled just like owning an automobile. As the owner of a car, I must have it insured. Insurance covers personal injury and property damage caused by the vehicle if I am driving it or if another driver is covered by my policy. If the vehicle malfunctions and causes damage due to a manufacturing, design, or warning defect, then I sue the manufacturer (or another injured party may sue the manufacturer). As such, owner and manufacturer are the responsible parties. My automobile doesn’t require its own legal status.

A robot, sentient or not, does not require its own legal status. It can be insured just like an automobile and the owner should be responsible for insuring the equipment. Furthermore, if it malfunctions and causes harm, the manufacturer can be held liable for any product defects.

I have asked for some input on this topic from a couple of our Experts.com members. At the time of this writing, we have received a response from one expert. Dr. Harry Direen, PhD, PE, has a wide variety of expertise, including electronic systems, control systems, robotics, software, signal processing, UAVs/drones, and more. I encourage you to check out his company, DireenTech.

Several questions were posited to Dr. Direen. Please see the questions and answers below.

What the expert has to say:

Me: Do you see any need for creating a legal status for robots?

Dr. Direen: No… robots are not humans, they are machines.  Despite the hype, I do not believe robot technology is anywhere near thinking on their own or being responsible for their actions.

Me: Are there any positive reasons to create a legal status for robots?

Dr. Direen: No, not that I know of.

Me: Are there any negatives you can think of in creating a legal status?

Dr. Direen: Yes, as a society we start legally blurring the lines between humans and the machines we create.  I don’t believe we elevate humans in the process, but just the opposite.  We advance the myth that humans are little more than carbon based machines with no more value than the machines we create rather than highly valued creations of our Creator.

Me: Is there any reason damage caused by robots cannot be addressed by existing legal principles such as product liability (manufacturing, design, or warning defects)?

Dr. Direen: No. Giving robots legal status would simply be an excuse to divorce engineers, designers, and manufacturers from the responsibility of their products.

Me: If a robot were to fail and cause personal or property damage, would a forensic investigation apply the same principles as any other failure analysis investigation?

Dr. Direen: Yes, a robot is just a piece of technology like any other.

So there you have it. Dr. Direen and I seem to be in agreement. Existing legal and investigatory principles should apply to robots. There is no need to provide additional legal protections to machinery.

What do you think? Feel free to comment below and let me know your thoughts. It is a fascinating topic. Robotics is a field where I anticipate a great deal of future litigation. As the topic evolves, I’m certain we’ll be discussing it in greater depth.