A man who sued Elon Musk for defamation and securities fraud has alleged that a San Francisco judge allowed artificial intelligence software to introduce multiple errors into a recent order that torpedoed a key motion in the man’s lawsuit.
Pointing to multiple apparent citation errors in a November ruling against him, Aaron Greenspan said the mistakes appeared to be the product of glitchy AI — faulty, machine-generated content sometimes referred to as “hallucinations.”
“Individually, some of these errors are more serious than others,” Greenspan said in a recent motion asking San Francisco Superior Court Judge Joseph Quinn to reconsider his order and revive Greenspan’s lawsuit against Musk. “Combined, they handed the case to the wealthiest man on earth, who himself controls an artificial intelligence platform … and should have lost the motion.”
Quinn already amended his order once, crossing out one major error in the later version after Greenspan pointed it out. It’s unclear whether Greenspan’s arguments about additional errors will convince the judge to further reconsider his ruling.
Greenspan’s allegations highlight an area of urgent concern in the legal field, as courts and policymakers across the country scramble to keep pace with the explosive growth of AI technology. While generative AI has become an invaluable tool for legal research and other functions, it has also produced bogus content that researchers have found in hundreds of court cases.
“It’s difficult enough litigating against the wealthiest man on Earth,” Greenspan said when asked for comment on this story. “One should not also have to fight the court itself.”
Quinn declined to comment, referring questions about his ruling to the court’s communications office. A spokesperson for San Francisco Superior Court declined to comment on the specific case, but provided a copy of the court’s policy on acceptable AI usage, which took effect in August.
San Francisco’s policy allows judges and court staff to use a handful of AI tools including Westlaw Precision, ChatGPT and Gemini, provided that humans still scrutinize the results. It requires judges and staffers to disclose AI assistance only if machine-generated content makes up a “substantial portion” of their work. The policy doesn’t specify what constitutes a substantial portion, saying a definition would be provided at a later date.
Greenspan — a former Tesla short seller and CodeX Fellow at Stanford Law School who runs the legal document website PlainSite — accused Musk, several of Musk’s attorneys and a social media user of defaming and harassing him in response to his criticism of the EV giant. Greenspan also alleged Musk committed securities fraud by over-hyping Tesla’s technology and artificially inflating its stock, costing Greenspan money.
Attorneys for the people Greenspan sued described him as a serial litigator and said his claims were meritless, had already been rejected by other courts and were related to speech protected by the First Amendment.
Siding with Musk, Quinn ruled last month that not only would most of Greenspan’s claims be tossed, but that he would also be responsible for the legal fees of the Tesla CEO and other defendants.
In his Nov. 13 ruling, Quinn cited several cases meant to support his decision, including a 2020 California appellate decision, Jones v. Goodman. He said that case supported the defense in a procedural dispute over which side filed crucial documents first.
But the appellate court had actually ruled the opposite way. By citing it as he did, Quinn effectively treated the court’s summary of a losing argument as if it were the appellate justices’ decision, which it was not, court filings show. Greenspan said additional errors included invalid citations, references to pages of an opinion that don’t exist, fabricated quotations and notable omissions.
Days after the legal trade publication Above the Law first reported on the Jones citation in Greenspan’s case as a potential AI hallucination, Quinn filed an amended order that crossed out the faulty passage and made additional edits. The ruling remained the same.
In a recent interview with the Chronicle, Above the Law author Joe Patrice, who is also an attorney, said that while human error could have been to blame in this case, the particulars of it bore the hallmarks of AI.
“One thing that AI does is, generally speaking, it doesn’t entirely screw up,” Patrice said. “It just half screws up in ways that are worse.”
Most of the earlier, high-profile AI mistakes that made their way into court documents were outright fabrications — citations of cases that don’t exist. Patrice said the latest brand of bot mistakes is more insidious, partly because such errors are harder to spot.
In the ruling against Greenspan, the citation came from a real quote in a real case.
“A human could look at the context and realize that it’s there to set up the inevitable next line, which is, ‘but obviously this is wrong,’” Patrice said. “AI has trouble with that though, because it sees the words, and it sees a paragraph in an opinion.”
Judges throughout the U.S. may also use generative AI to assist in drafting routine orders, summarizing motions and depositions, or determining whether attorneys’ court filings have misstated the law, according to guidelines published this year by a panel of judges and experts in the field.
One of the authors of the report, U.S. Magistrate Judge Allison Goddard, said that while humans have a long history of making their own errors, an AI-generated flub from a judge is a particularly troubling offense.
“I think it would be worse right now because of all the publicity around AI errors,” Goddard, who works in the U.S. District Court for the Southern District of California, said in a recent interview.
“It’s really critical right now that we, as judges, are very careful to not let these errors come in,” she added, noting the public’s record-low confidence levels in the U.S. judicial system. “There’s no margin of error for us.”
In September, the California Judicial Council announced that any state court that wants to implement AI technology in some way must create an AI policy by Dec. 15.
The directive specifically warned of AI hallucinations, requiring that court staff and judges take “reasonable steps” to ensure the accuracy of machine-generated materials. If an error is spotted after the fact, staffers need to take reasonable steps to correct it.
Neither the Judicial Council directive nor the San Francisco policy describes any potential consequences for failing to follow the guidelines.
There have been more than 600 confirmed instances of AI errors in legal filings across the world since 2023, more than 400 of those coming from the U.S., according to a tracking database run by legal researcher Damien Charlotin.
Most of the offenders were self-represented litigants and lawyers, though six judges also appeared on the list. The judges’ gaffes included false, fabricated or misrepresented quotes, and in every case the ruling was tossed.
Eugene Volokh, a law professor at UCLA who blogs about AI misuse in the legal profession, said that while Charlotin’s database is the most thorough of its kind, it doesn’t include obvious AI hallucinations that aren’t called out by the court. And it can’t list mistakes that were never spotted.
“This is happening all the time,” Volokh said in an interview. “My guess is for every one such case where the court mentions it, there are probably 10 where the court doesn’t realize what’s going on, or realizes and doesn’t do anything, or realizes and upbraids the lawyer in a legal argument but doesn’t issue a written opinion.”
Volokh said that in recent months, courts have focused more on catching subtle inaccuracies in addition to AI fabrications.
“I’ve been seeing a lot of cases where the court says, ‘These cases are real, but the quotes do not appear in those cases,’” Volokh said.
In one recent extreme example, a Los Angeles-area attorney was fined $10,000 after filing an appeal in which 21 of the 23 quotations in the opening brief were fabricated. The lawyer acknowledged using ChatGPT to “enhance” his brief and admitted he didn’t read the final product before submitting it, according to a September opinion from the Second District Court of Appeal.
“This court detected (and rejected) these particular hallucinations,” the three-judge appellate panel said, concluding with a warning about what they called the “darker consequences” of AI.
“But there are many instances — hopefully not in a judicial setting — where hallucinations are circulated, believed, and become ‘fact’ and ‘law’ in some minds,” they continued. “We all must guard against those instances.”

Megan Cassidy is a crime reporter with The Chronicle, also covering cops, criminal justice issues and mayhem. Previously, Cassidy worked for the Arizona Republic covering Phoenix police, Sheriff Joe Arpaio and desert-area crime and mayhem. She is a two-time graduate of the University of Missouri, and has additionally worked at the Casper Star-Tribune, National Geographic and an online publication in Buenos Aires. Cassidy can be reached on Twitter at @meganrcassidy, and will talk about true crime as long as you’ll let her.