This is gold:
Husband's attorney, Diana Lynch, relies on four cases in this division, two of which appear to be fictitious, possibly “hallucinations” made up by generative-artificial intelligence (“AI”), and the other two have nothing to do with the proposition stated in the Brief. Undeterred by Wife's argument that the order (which appears to have been prepared by Husband's attorney, Diana Lynch) is “void on its face” because it relies on two non-existent cases, Husband cites to 11 additional cites in response that are either hallucinated or have nothing to do with the propositions for which they are cited. Appellee's Brief further adds insult to injury by requesting “Attorney's Fees on Appeal” and supports this “request” with one of the new hallucinated cases.
Apparently, a lawyer used chat to do her research, it hallucinated some case law, and the judge didn't catch it, ruling in her client's favor based on the nonexistent cases. The appeals court figured it out.
This makes me think that an occasionally unreliable chat is actually better for humans than a mostly reliable one. If we always carry a certain level of anxiety about the answers chat gives us, it will be much harder for people to sway us with unnoticed nudges via our LLM interactions.