LLMs in law of the day
2025-12-22 19:15:27.683992+01 by Dan Lyke 1 comments
As of March 2025, Ms. Watson was on notice of her mistakes when an opposing attorney informed her directly that she had submitted a brief that contained misrepresentations of law. She was apparently then given an opportunity to fix the issue without consequence. Instead of learning from her mistake, she failed to change her ways and continued the same practice of not verifying AI output; only then, her conduct additionally violated the Firm's policy prohibiting use of external AI tools.
As Eric Goldman @ericgoldman.bsky.social summarized:
An attorney couldn't stop using Grok (?!) to help draft filings, producing "a flood of tainted filings" & apparently triggering the implosion of a law firm & 3 lawyers' careers 🤖😵 The court called her misconduct "particularly egregious & prolific"
and Mike Masnick @mmasnick.bsky.social observed:
Already unacceptable to use LLMs to draft filings and even worse, if you do, not to have checked the citations. But if you ARE going to do that, why of all LLMs out there would you use *GROK*?
And elsewhere: As more lawyers fall for AI hallucinations, ChatGPT says: Check my work, the same article republished as How AI-driven hallucinatory filings are impacting Arizona courts
The AI Hallucination Cases database maintained by Damien Charlotin, a researcher at HEC Paris (a leading business school in France), identifies a half-dozen federal court filings in Arizona since September 2024 that include fabricated material from ChatGPT or another generative AI tool.
Hopefully we'll start to see some real penalties for lawyers who outsource their work to the plausible bullshit generators.