
The first half of this article is interesting, before it swerves into caustic old man voice ("But a still graver scandal of AI — like its hydra-head sibling, cryptocurrency — is the technology’s colossal wastefulness. The untold billions firehosed by investors into its development; the water-guzzling data centers draining the parched exurbs of Phoenix and Dallas; the yeti-size carbon footprint of the sector as a whole — and for what?"). There's also this titular banger:

"AI-made material is itself a waste product: flimsy, shoddy, disposable, a single-use plastic of the mind."

The statement resonates, even if the way they get there in this article doesn't. Dealing with AI-produced writing does feel like using a single-use plastic fork that is too small, too flimsy, and impossible to clean in any meaningful way.
The editors at n+1 are not keen on LLMs. But, as they point out, the world of writing for pay has been in a good deal of turmoil this whole century:
Well before the inflection point of OpenAI’s 2022 debut of ChatGPT, freelance writers and adjunct instructors were already beset by declining web traffic, stagnant book sales, the steady siphoning of resources from the humanities, and what was hard not to interpret as a culture-wide devaluation of the written word. Then along came a swarm of free software that promised to produce, in seconds, passable approximations of term papers, literary reviews, lyric essays, Intellectual Situations.
Rather than address this larger issue (how should a writer go about making money?), the editors turn their displeasure on what they see as popular writers' response to AI's prevalence (dominance?):
Call the genre the AI-and-I essay. Between April and July, the New Yorker published more than a dozen such pieces: essays about generative AI and the dangers it poses to literacy, education, and human cognition. Each had a searching, plaintive web headline. “Will the Humanities Survive Artificial Intelligence?” asked Princeton historian D. Graham Burnett. “What’s Happening to Reading?” mused the magazine’s prolific pop-psych writer Joshua Rothman, a couple months after also wondering, with rather more dismay, “Why Even Try If You Have AI?” “AI Is Homogenizing Our Thoughts,” declared Kyle Chayka, with the irony of a columnist whose job is to write more or less the same thing every week using his own human mind.
But they do accurately identify something I've been wondering about:
The single, vital aspect of humanity that LLMs can never match, the essays assert again and again, is our imperfection. “AI allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human,” writes Hsu.

"So, inconstancy, fallibility, forgetfulness, suffering, failure — these, apparently, are the unautomatable gifts of our species. Well, sure. To err is human. But does the AI skeptic have nothing else to fall back on than an enumeration of mankind’s shortcomings? Are our worst qualities the best we can do?"

There has been a sneaking trend of locating the enclaves of humanity in precisely those places we have, for most of our history, tried to avoid and wall off. It is true that we are occasionally stunningly beautiful in our hideousness. I don't think it serves us well in the long run, though. And the editors at n+1 seem to agree.
Patriots of the humanities, they say, "No!" They bring up the Luddites and some vague Marx things and end with a call to starve the machines:
When we press a chatbot to fine-tune its answers or sift its sources, we serve the machine. With every click and prompt, every system-tweaking inch we give to the spectral author, we help underwrite AI profits (or at least the next round of equity funding; no major AI product has yet come close to actually making money).
Don't publish the AI writing, they say. Don't read it, don't grade it, almost pretend that it doesn't exist (calling to mind a child covering their ears and yelling: "I can't hear you!").
A literature which is made by machines, which are owned by corporations, which are run by sociopaths, can only be a “stereotype” — a simplification, a facsimile, an insult, a fake — of real literature. It should be smashed, and can.
They forget that they are wearing clothes that would have been impossible if the Luddites had had their way, living in a society more prosperous than any Luddite could have imagined.
202 sats \ 2 replies \ @optimism 2h
Here's my review before reading yours:

For those of us working in reading- and writing-heavy fields — chiefly media and academia
Sucks if you have a worldview so distorted that you think the major share of reading and writing is done by the media and academic industries.
While spoken in the voice of an individual author, each piece in this emergent corpus stages a more collective drama.
Right! Such as this article staging a drama by underestimating the fields in which reading and writing actually are the primary activity.
The single, vital aspect of humanity that LLMs can never match, the essays assert again and again, is our imperfection.
Funny, as it also cannot match humanity's desire for perfection! Else, why do lawyers get sanctioned and vibe-coded "super apps" get insta-hacked? This must be because secure apps and non-hallucinated case law are features of imperfection. In fact, maybe the corpus of case law that is deemed to exist and the not-yet-hacked software are hallucinations? How do we know? Human hallucination is probably inferior to that of our ChatGPT overlord too!
According to the logic of market share as social transformation, if you move fast and break enough things, nothing can contain you.
Except: if you don't use these apps, which are engineered to defraud you of as much time, money, and skill development as possible, and to make you truly dependent on a subscription with at least 20 more tiers of outrageously priced lies ahead of you, then how exactly is a transparent scammer like Sam Altman going to break you? Just don't give money to scammers. It's really that simple.
With every click and prompt, every system-tweaking inch we give to the spectral author, we help underwrite AI profits (or at least the next round of equity funding; no major AI product has yet come close to actually making money).
I repeat: stop giving money to scammers. Hopefully the wonderful full-time writers and readers of media and academia in the back hear me now.
The way out of AI hell is not to regroup around our treasured flaws and beautiful frailties, but to launch a frontal assault. AI, not the human mind, is the weak, narrow, crude machine.
If all you've ever used is a dumb af chatbot app, then yes, it's a weak, narrow, crude machine. It's not even your machine, because you're just paying rent and you don't understand what you're doing in the first place. Additionally, since there's a 99.99% probability that you forgot to think critically and ask it about something you are an expert in, or forgot that you asked and that the result was sub-par, you're either ignorant or living Gell-Mann amnesia day in, day out. That is how sad humanity truly is right now.
Notice the poverty of the latter’s style, the artless syntax and plywood prose, and the shoddiness of its substance: the threadbare platitudes, pat theses, mechanical arguments. And just as important, read to recognize the charm, surprise, and strangeness of the real thing.
Which will simply mean that someone will employ some very targeted reinforcement training and eradicate these anti-patterns in favor of the patterns you like. In fact, if you had any idea what you were talking about, you would have fine-tuned Qwen3 or gpt-oss for this yourself, so that you don't have to deal with all the shoddy and artless bot-speak.
Until AI systems stop gaining in sophistication [..]
If this ever happens I will just start releasing finetunes that address complaints like in this article, to fuck with the minds of the truly ignorant.
Whatever nuance is needed for its interception, resisting AI’s further creep into intellectual labor will also require blunt-force militancy. The steps are simple. Don’t publish AI bullshit.
But then I challenge thee: don't publish human slop either. JUST SAY NO!
When we use generative AI, we consent to the appropriation of our intellectual property by data scrapers.
Can someone please explain to me why one would publish something if not for that "intellectual property" to spread? If you don't want it to spread, keep it secret, patent it, take it to your grave.
We stuff the pockets of oligarchs with even more money. [..] We hand over our autonomy, at the very moment of emerging American fascism.
I have to say it again: Stop giving money to scammers!
A literature which is made by machines, which are owned by corporations, which are run by sociopaths, can only be a “stereotype” — a simplification, a facsimile, an insult, a fake — of real literature. It should be smashed, and can.
Smashed? Okay, war-mongering boomertard with a lack of imagination. Back to the 1940s with ya.
100 sats \ 1 reply \ @Scoresby OP 2h
I see that, like me, you got a bit riled up by this doozy: they really do come off as people who are displeased that the world is changing and who can't be troubled to change with it.
I did enjoy their jaundiced view of their own profession though.
Also, this is spot on:
Can someone please explain to me why one would publish something if not for that "intellectual property" to spread? If you don't want it to spread, keep it secret, patent it, take it to your grave.
102 sats \ 0 replies \ @optimism 1h
I mean, it's fine to be displeased with AI - I am displeased with AI too! But if the media, in all their prestigesturbation, cannot look further than the scams on offer from OpenAI, then perhaps the real problem is the low level of deep research these media people are apparently capable of.
Sigh, more Marxist ramblings from people who know how to feel, but don't know how to think.
"Me don't like what going on, so me think we should smash. Ooga booga"