Another set of sanctions issued yesterday against lawyers submitting hallucinations:
The first reason I issue sanctions stems from Mr. Nield's claim of ignorance—he asserts he didn't know the use of AI in general and ChatGPT in particular could result in citations to fake cases. Mr. Nield disputes the court's statement in Wadsworth v. Walmart Inc. (D. Wyo. 2025) that it is "well-known in the legal community that AI resources generate fake cases." Indeed, Mr. Nield aggressively chides that assertion, positing that "in making that statement, the Wadsworth court cited no study, law school journal article, survey of attorneys, or any source to support this blanket conclusion."
...
Counsel's professed ignorance of the dangers of using ChatGPT for legal research without checking the results is in some sense irrelevant. Lawyers have ethical obligations not only to review whatever cases they cite (regardless of where they pulled them from), but to understand developments in technology germane to their practice.
What I mostly wonder: how will this liability be pushed upstream, if at all?