
THE COURT: Was this motion generated by generative artificial intelligence?
MR. KACHOUROFF: Not initially. Initially, I did an outline for myself, and I drafted a motion, and then we ran it through AI. And I --
THE COURT: And did you double-check any of these citations once it was run through artificial intelligence?
MR. KACHOUROFF: Your Honor, I personally did not check it. I am responsible for it not being checked.
THE COURT: And you understood, as an officer of the court, pursuant to Rule 11 --
MR. KACHOUROFF: I did, Your Honor.
THE COURT: -- if you're going to use generative artificial intelligence that that did not excuse you from the obligations of Rule 11?
MR. KACHOUROFF: Absolutely not. Absolutely not.
THE COURT: You understood that, correct?
MR. KACHOUROFF: Yes, I did, Your Honor.
THE COURT: And that doesn't seem to have happened here, does it?
MR. KACHOUROFF: No, Your Honor.
Somehow, this feels like a primary-school moment, with the teacher catching a kid cheating... interesting times?
What is the crime or transgression?
Sounds like judicial activism
According to Rule 11, one shall not make up laws.
Can you provide a citation? Is it in the CourtListener doc?
I linked it above in the text?
I consulted copilot...
The short answer is: Rule 11 doesn’t directly apply to AI or AGI, but it does apply to the humans who use them in court filings.

⚖️ Rule 11’s Scope

Rule 11 of the Federal Rules of Civil Procedure governs the conduct of attorneys and unrepresented parties who submit documents to federal courts. It requires that filings:
  • Are not for improper purposes (like harassment or delay)
  • Are legally and factually grounded
  • Are signed by a responsible human
Since AI or AGI systems aren’t legal persons and can’t sign pleadings, they aren’t directly subject to Rule 11. But if a lawyer or party uses AI to draft a filing—say, to generate case law or arguments—they’re still personally responsible for ensuring the content complies with Rule 11.

🤖 AI in the Crosshairs

Recent cases have shown how this plays out:
  • In Mata v. Avianca, attorneys used ChatGPT to draft a brief that cited fictitious cases. The court sanctioned them under Rule 11 for failing to verify the content.
  • This has led some judges to issue standing orders requiring disclosure of AI use in filings or even banning it outright.

🧠 What About AGI?

If we ever reach a point where AGI can autonomously draft and file court documents, the legal system would need to evolve. For now, humans remain the accountable agents under Rule 11.
If you're thinking about how this intersects with broader legal ethics or policy, I’d be happy to dive deeper. Want to explore how courts are adapting to AI use more broadly?
Question for ya: if you have the link to both the court document AND the relevant text the judge applied, why would you ask autocomplete?