There are slop cannons and turbo brains and people who don't use AI. And then there are people who have good judgment.
"A weak human player plus machine plus a better process is superior, not only to a very powerful machine, but most remarkably, to a strong human player plus machine plus an inferior process."
There is a story about Kasparov: after he lost to Deep Blue in 1997, he started chess tournaments where human players augmented their game with computer chess engines... and it made some people a lot better, but not necessarily the chess masters.
A weak player with a better process beat a strong player with a worse process. Once computers became competent, performance depended less on intelligence and more on orchestration. It was knowing when to trust the machine, when to override it, and how to structure the collaboration so that each side compensated for the other's weaknesses.
Shapiro thinks this comes down to knowing when something is wrong:
catching errors is the entire point of the human half of the equation. That's the job. The AI handles speed, breadth, and pattern recognition. The human handles quality control, contextual judgment, and the final call on what ships and what doesn't.
AI still has trouble really knowing anything in the way humans know things. At least, humans who think about what is in front of them.
Humans make mistakes because they are tired, or distracted, or tricked by biases, and for many other reasons... but LLMs make mistakes because they don't know any better. Sure, humans do the same, but I think this is the primary failure mode for LLMs.
And so the person who doesn't bother thinking about their LLM's output, or who can't evaluate whether it's right or wrong, produces slop.
people who discovered ChatGPT, decided it was magic, and now produce astonishing volumes of confident, polished, wrong work. They ship hallucinated legal citations, auto-generated emails that miss the entire point of the conversation, code that compiles but breaks in production, and financial models built on assumptions they never examined because the output looked clean.
Shapiro calls these people Slop Cannons.
The Slop Cannon is the person who uses AI without the judgment to evaluate what it gives them. And the thing that makes this quadrant so dangerous is that the output looks good. AI writes fluently. It structures arguments coherently. It formats documents professionally. The surface-level quality is often excellent. Which means the errors it produces are harder to catch, not easier.
There's more to Shapiro's post, and I found the whole thing an interesting read.
I wonder if he followed the Avianca case, where the plaintiff's lawyers got sanctioned for submitting hallucinated case law as though it were fact.
Slop Cannons may actually be worse than Dead Weight, because they're more likely to hurt you.
I dunno man, I tried both ChatGPT 5.2 and Claude Opus 4.6 this weekend to turn my PDF paper into a one-hour PowerPoint presentation. Both did a pretty bad job. I ended up doing it myself. It took me about an hour, but it was better than the crap either of them produced.
Sometimes, I am highly suspicious of the people claiming AI can do this or that. For the really good AI-generated products, I have to wonder how much human iteration went into them, and how much was truly one-shot AI work.
But never forget that a weak player with a weak process and no tech is beaten by a weak player with a weak process and tech. And that's with just three dimensions. If we try to apply this to reality, we'll have to compare across tens of thousands of dimensions, and suddenly there are some pretty hard-to-overcome ones, like only being able to be in one place at a time (for now), language skills, cultural barriers, and so on.
So thinking that if you're in the top-right quadrant of the 2D grid (or its equivalent over 3 or 4 or 20 dimensions) you've made it, and that if you're in the bottom you're lost, is deceptive. Self-deceptive. So be curious, but don't buy all the FOMO.
These posts are getting out of hand.
https://twiiit.com/zackbshapiro/status/2024126623000220023