Mike Lindell's lawyers used AI to write brief; judge finds nearly 30 mistakes

arstechnica.com

178 points by abacussh a day ago


seanhunter - a day ago

What I find really strange about this is that I use AI a lot as a “smart friend” to work through explanations of things I find difficult. I am currently preparing for some exams, so I will often give the AI a document and ask for supporting resources to take the subject further, and it almost always produces something that is plausibly close to a real thing but wrong in the specifics. When you ask for a reference, it is almost invariably a hallucination. So it just amazes me that anyone would stick that in a brief and ship it without checking it even more carefully than they would check the work of a human underling (which they should obviously also check for something this important).

For example, yesterday I got a list of study resources for abstract algebra. Claude referred me to a lecture series by Benedict Gross (which is excellent, btw). It gave me a link to Harvard’s website, but it was a 404, and it was only with further searching that I found the real thing. It also suggested a YouTube playlist by Socratica (again, this exists, but the URL was wrong) and one by Michael Penn (same deal).

Literally every reference was almost right but actually wrong. How does anyone have the confidence to ship a legal brief that an AI produced without checking it thoroughly?
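The "almost right URL" failure mode is cheap to catch mechanically. Here is a minimal sketch of sanity-checking AI-suggested links before trusting them; the fetcher is injectable so the logic can be exercised without network access, and the Harvard URL shown is a hypothetical example, not a real reference:

```python
from typing import Callable
from urllib.request import Request, urlopen
from urllib.error import HTTPError


def check_links(urls: list[str], fetch: Callable[[str], int]) -> dict[str, bool]:
    """Map each URL to True if fetch(url) returns a non-error HTTP status."""
    results: dict[str, bool] = {}
    for url in urls:
        try:
            results[url] = fetch(url) < 400  # 2xx/3xx: probably a real page
        except Exception:
            results[url] = False             # DNS failure, timeout, bad URL...
    return results


def http_status(url: str, timeout: float = 5.0) -> int:
    """Real fetcher: issue a HEAD request and return the HTTP status code."""
    req = Request(url, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code  # 404 etc. is still a status, not a crash
```

Usage would be something like `check_links(["https://people.math.harvard.edu/..."], http_status)`, then manually re-searching anything flagged False, which is roughly the workflow described above.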

Etheryte - a day ago

> Wang ordered attorneys Christopher Kachouroff and Jennifer DeMaster to show cause as to why the court should not sanction the defendants, law firm, and individual attorneys. Kachouroff and DeMaster also have to explain why they should not be referred to disciplinary proceedings for violations of the rules of professional conduct.

Glad to see that this is the outcome. As with bribes and similar issues, the hammer has to be big and heavy so that people stop considering this an option.

tzs - a day ago

> "[T]he Court identified nearly thirty defective citations in the Opposition. These defects include but are not limited to misquotes of cited cases; misrepresentations of principles of law associated with cited cases, including discussions of legal principles that simply do not appear within such decisions; misstatements regarding whether case law originated from a binding authority such as the United States Court of Appeals for the Tenth Circuit; misattributions of case law to this District; and most egregiously, citation of cases that do not exist," US District Judge Nina Wang wrote in an order to show cause Wednesday

30+ years ago when I was in law school [1] I would practice legal research by debunking sovereign citizen and related claims on Usenet. The errors listed above are pretty much a catalog of common sovereign citizen legal research errors.

Just add something about gold fringed flags and Admiralty jurisdiction and it would be nearly complete.

The sovereign citizen documents I debunked were usually not written by lawyers. At best the only legal experience the authors usually had was as defendants who had represented themselves and lost.

Even they usually managed only a couple of major errors per document. That these lawyers got such a range of errors into one filing is impressive.

[1] I am not a lawyer. Do not take anything I write as legal advice. Near the end of law school I decided I'd rather be a programmer with a good knowledge of law than a lawyer with a good knowledge of programming and went back to software.

victorbjorklund - a day ago

I don't understand how a lawyer can use AI like this and not spend the little time required to check that the citations actually exist.

rsynnott - a day ago

What is it with the American far-right and hiring the most _incompetent possible lawyers_? Like, between this and Giuliani...

Balgair - a day ago

Wait until you guys hear about how they used AI in the California bar exam.

https://www.sfgate.com/bayarea/article/controversy-californi...

The lawyer jokes aren't funny anymore...

philipwhiuk - a day ago

This is just Mata v. Avianca again

ForOldHack - a day ago

"You IDIOT!!! And you have IDIOT lawyers too." There. I said it. It needed to be said and I feel so much better.

LadyCailin - a day ago

Everything about this entire situation is comically dumb, but it shows how far the US has degraded that this is meaningful news. If this were a work of fiction, people would dismiss it as lazy writing: an ultra-conservative CEO of a pillow company spreads voting conspiracies, leading to a lawsuit in which he hires lawyers who risk losing the case because they relied on AI.

yapyap - a day ago

That’s so stupid, he almost deserves to lose the case just for that

cratermoon - a day ago

dupe https://news.ycombinator.com/item?id=43799823

You missed this one, gnabgib

emorning3 - a day ago

Is it possible that these AI models will tell someone what they want to hear rather than the truth?

I mean, that's always been tech's modus operandi....

tiahura - a day ago

As an attorney, I’ve found that this isn’t the issue it was a year ago.

1. Use reasoning models and include in the prompt an instruction to check the cited cases and verify holdings.

2. Take the draft, run it through ChatGPT Deep Research, Gemini Deep Research, and Claude, and tell each to verify holdings.

I still double-check, for now, but this is catching every hallucination.
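The first mechanical step of that workflow, extracting every citation from a draft and flagging any that can't be confirmed, can be sketched in a few lines. This is a toy illustration: the regex covers only a couple of reporter formats, and the allowlist is a hypothetical stand-in for a real citator lookup (Westlaw, Lexis, or a service like CourtListener), not an actual database:

```python
import re

# Matches simple reporter-style citations like "578 U.S. 5" or "141 F.3d 1373".
# Illustrative only: real citation grammar is far richer than this.
CITATION_RE = re.compile(r"\b\d+\s+(?:U\.S\.|F\.(?:2d|3d|4th))\s+\d+\b")

# Hypothetical stand-in for a trusted citator database.
KNOWN_GOOD = {
    "578 U.S. 5",
    "141 F.3d 1373",
}


def flag_suspect_citations(draft: str, known: set[str]) -> list[str]:
    """Return every reporter-style citation in `draft` not found in `known`."""
    return [c for c in CITATION_RE.findall(draft) if c not in known]
```

Anything this returns would then need a human to confirm the case exists and actually says what the brief claims, which is the part the sanctioned filing skipped.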

amirmi78 - a day ago

This is incompetent use of AI, and the news about it is becoming tiring. The result is that whenever I talk to people outside the tech circle, they flatly believe that AI will never be commonplace in high-stakes situations, which is just a rapidly moving bar.