BEFORE YOU USE ANY “AI” FOR LEGAL WORK: DO NOT
- David N. Damick

If you have a legal question, please talk to us. If you have a medical question, please talk to someone with medical training. Why would we say this, when our many decades of work have been dedicated to rooting out bad medical and bad legal work? Because the “AI” programs available to the public are guessing at answers, and they are good at pretending to know things they do not know. Even professionals get fooled. The medical errors are covered in a separate blog post. Here are a few of the many examples of attorneys being misled by public “AI” programs:
The Most Common Fatal Error: Fake Information
1. A Massachusetts lawyer cited cases that don’t exist - “fictitious cases” - to the Court and was caught because the lawyer on the other side checked. Many of these may get by because judges are very busy and have to trust the honesty of attorneys, but that is why it is called “an adversarial system” - attorneys are trained to check the honesty of the other side. In Smith v Farwell, the court heavily fined the attorney for filing three pleadings with citations that did not exist - all created out of the imagination of an AI system.
2. Almost famous by now is the 2023 New York case Mata v Avianca, Inc., where a lawyer relied on ChatGPT, which put completely made-up authorities into his pleadings, resulting in serious sanctions against the lawyers and their law firm. Lawyers have a legal duty to read and verify the cases they cite to a court, but can a non-lawyer even tell what is a real case and what is not?
3. The firm of Morgan & Morgan, whose advertising promotes it as the “largest plaintiff’s law firm in the world,” was sanctioned thousands of dollars in 2025 for citing 8 cases that an AI program had “hallucinated” - that is, faked - to the federal court in Wyoming. In other words, lawyers at large firms as well as small ones can misuse these programs.
4. A California attorney was fined $10,000 for filing an appeal full of fake quotations created by ChatGPT. The attorney apparently told the court that he used the AI program after the company that markets it advertised that it was capable of passing the bar exam. This is not amusing - the advertising that OpenAI and Google and others put out implies that these programs are as good as or better than trained professionals. Faking it is not the same as doing the work. It just isn’t.
5. A Texas lawyer filed “hallucinated legal citations” - fake ones - in a brief submitted to a Kansas federal court, and the whole team of lawyers that signed the pleading was brought before the court in a contempt hearing. Among the problems created by the “AI” were:
- Citing a nonexistent lawsuit against the city of Topeka;
- Using made-up quotes from decisions;
- Lying about what real cases actually said.
“I cannot express how sorry I am for this set of mistakes,” said the lawyer who used ChatGPT.
ALL OVER THE USA
One website has assembled over 550 cases in which AI programs inserted fabricated information, false quotes, or misrepresentations into actual lawsuits. That is over 550 instances where the problem was caught in court decisions. I am giving the link to this website so you can check my reference to it, but I frankly state that I have not followed up on the citations there to verify each of them: https://www.damiencharlotin.com/hallucinations/. In fact, this website covers the problem all over the world. But in the USA, look at how many pro se litigants have been misled.
Our office uses only proven software that learns within, and is confined to, the area of law or science we are investigating. Every authority is verified, not only because it is the right thing to do, but because we know that even a solid case can be weakened or destroyed by false information. It’s our job to catch that.

