I'm reviewing "How AI Destroys Institutions" (#1416703) in a separate post because the original post only linked the abstract.
I will do this by making comparisons to RocksDB, because it is a Facebook implementation built on top of Google's LevelDB, which makes it comparable to genAI: both are BigTech-developed tools, both are (in a sense) databases, and both adhere to GIGO (garbage in, garbage out).
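For the unfamiliar, here is a minimal sketch of what RocksDB actually is: an embedded key-value store. The path and keys below are placeholders, but the calls are the standard RocksDB C++ API. The engine stores and returns whatever bytes you hand it; it has no opinion about them.

```cpp
#include <cassert>
#include <iostream>
#include <string>

#include "rocksdb/db.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Open (or create) a database at a placeholder path.
  rocksdb::DB* db = nullptr;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/example-db", &db);
  assert(status.ok());

  // Garbage in: RocksDB faithfully persists whatever you give it.
  status = db->Put(rocksdb::WriteOptions(), "some-key", "some-value");
  assert(status.ok());

  // Garbage out: it hands the exact same bytes back.
  std::string value;
  status = db->Get(rocksdb::ReadOptions(), "some-key", &value);
  assert(status.ok());
  std::cout << value << '\n';

  delete db;
  return 0;
}
```

Whether those bytes are cat pictures or harvested PII is entirely the caller's doing. Keep that in mind for what follows.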
The article posits that AI will destroy institutions, focusing mostly on higher education, medicine, and law.
It feels to me like the key point driving most of the paper is this:
AI requires the pillaging of personal data and expression, and facilitates the displacement of mental and physical labor.
But that feels mostly incorrect to me. They seem to confuse the Silicon Valley business model, which has been harvesting private data for two decades now, with the underlying technology. To criticize a technology because of the way Big Tech applies it is deceptive.
Is RocksDB evil because it was made by FB and used in the shameless harvesting of data and the gaslighting of people?
And worse: to say that image recognition is bad because the government abuses it, while the exact same technology is used by Cuba & co. to cheaply mass-produce high-quality medicine, feels like a very Luddite take on the world.
Further on, they say:
Its modus operandi is to reproduce existing patterns and amplify biases, polluting our information ecosystem and marginalizing vulnerable communities.
I can only reiterate my point: is AI marginalizing vulnerable communities, or are people doing that through their use of AI?
And its faux-conscious, declarative and confident prose hides normative judgments behind a Wizard-of-Oz-esque curtain that masks engineered calculations, all the while accelerating the reduction of the human experience to what can be quantified or expressed in a function statement.
This made me laugh. We all know that chatbots are teh sux. They're trained to be like this because this is how OpenAI feels it can best extract money from the masses. It has literally nothing to do with the actual tech, only with the goals of the guys who raise tens of billions in their bids to make trillions.
Is RocksDB evil because you can put evil data inside there?
The second affordance of institutional doom is that AI systems short-circuit institutional decisionmaking by delegating important moral choices to AI developers. By “short circuit,” we mean cutting out the necessary self-reflection and points of contestation for adaptive and rigorous analysis.
[..]
When AI systems obscure the rules of institutions, the legitimacy of those rules degrades.
They use the example of genAI deciding health insurance coverage. That totally blows my mind anyway, because all of this can be codified into deterministic rules to reduce the need for case-by-case moral choices (see the sketch after the next question). Replacing humans with AI automation, rather than making systemic improvements to the health insurance process, is, however, once again a choice made by the insurer.
Is RocksDB evil because you can collect unencrypted PII in it?
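To make the "all this can be codified" point concrete, here is a minimal sketch of a deterministic coverage rule. Every field, code, and threshold is invented for illustration; the point is that plain rules are auditable and contestable in a way a model's output is not:

```cpp
#include <iostream>
#include <string>

// Hypothetical claim record; the fields are made up for illustration.
struct Claim {
  std::string procedure_code;
  double billed_amount;
  bool preauthorized;
};

// A deterministic coverage rule: every input and threshold is visible,
// versionable, and identical for every claimant. No hidden judgment.
bool is_covered(const Claim& c) {
  if (c.procedure_code == "EXPERIMENTAL") return false;
  if (!c.preauthorized && c.billed_amount > 10000.0) return false;
  return true;
}

int main() {
  Claim claim{"ROUTINE-CHECKUP", 250.0, false};
  std::cout << (is_covered(claim) ? "covered" : "denied") << '\n';
  return 0;
}
```

If an insurer swaps this kind of logic for a generative model, that swap is a business decision, not a property of the technology.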
AI systems are incapable of intellectual risk because they lack true agency, intrinsic motivation, the ability to experience consequences, and they cannot choose to willingly defy established norms or venture into the unknown for any purpose, including for (r)evolution, resistance, or adventure.
Another funny one, but here they buy into the endless bullshit coming out of TechBro asses that hints at (but never commits to) some sort of sentience for autocorrect++ (see the toy sketch below).
Is RocksDB evil because it doesn't have agency?
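To put "autocorrect++" in perspective: stripped of all scale, next-token prediction is sampling from a conditional distribution. Here is a deliberately crude toy bigram sampler. It is nothing like a transformer internally, but the loop has the same shape: look at the context, emit a probable next token. Nothing in it wants anything:

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <random>
#include <string>
#include <vector>

int main() {
  // Toy "model": for each token, the tokens that may follow it.
  // An LLM learns a vastly richer version of this from training data.
  std::map<std::string, std::vector<std::string>> next_tokens = {
      {"the",   {"cat", "model"}},
      {"cat",   {"sat", "ran"}},
      {"model", {"predicts", "samples"}},
  };

  std::mt19937 rng(std::random_device{}());
  std::string token = "the";
  std::cout << token;

  // Generation loop: sample a plausible continuation, repeat.
  // No goals, no motivation, no agency -- just conditional sampling.
  while (next_tokens.count(token) > 0) {
    const auto& candidates = next_tokens.at(token);
    std::uniform_int_distribution<std::size_t> pick(0, candidates.size() - 1);
    token = candidates[pick(rng)];
    std::cout << ' ' << token;
  }
  std::cout << '\n';
  return 0;
}
```

The loop halts as soon as a token has no successors, so this toy always stops after a few words; an LLM just keeps the same loop running over a far bigger table.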
Because AI systems undermine expertise, short-circuit decision-making, and isolate humans, they are the perfect machines to destroy social capital. They do this in at least three ways. First, AI degrades general reciprocity expectations because AI is incapable of “paying it forward.”
But AI isn't another entity; it's a tool. It is the human who doesn't pay it forward, and that is common now. But per their own quote of Putnam:
"For the first two-thirds of the twentieth century, a powerful tide bore Americans into ever deeper engagement in the life of their communities, but [starting sometime in the 1960s] that tide reversed"
So that's not an AI problem; it is something else.
Is RocksDB evil because all it does is slurp electricity and never gives back?
They then continue to argue:
But every minute people turn to a machine for warmth, connection, and emotional soothing displaces time they could be spending with humans, developing social bonds, and nourishing common purpose.
Well... maybe you shouldn't do that. If you need warmth, go meet some people.
Is RocksDB evil because it doesn't respond to my emotional needs?
Jill Lepore has detailed Silicon Valley’s fever dreams about outsourcing governance and democratic structure to the AI systems that increasingly dominate our lives into a “Constitutional AI.” The idea, in theory, is that people would come together and agree on a series of rules and structures for the design and deployment of AI that would increasingly determine the critical aspects of all our lives. But that hasn’t happened.
Is RocksDB evil because it doesn't have a baked-in constitution?
The situation devolves further as tech CEOs continue to fantasize about offloading democratic rule onto a bot.
Stop listening to CEOs?
Conclusion: if your sole source of understanding about AI is what Sam Altman says in interviews, plus scientific papers lamenting what tech is doing, about which you then write yet another paper, maybe you should find something better to do.
RocksDB isn't evil.
A good portion of their argument seems like it would fall apart if they didn't mistake what is happening in an LLM for something like independent human thinking.
The rest falls apart if they expected humans to take responsibility for their own decisions.
It really feels like that, yes. And even though they make the point that an LLM lacks agency and intrinsic motivation, they then use that to argue against characteristics of the LLM-as-an-entity, instead of just rejecting the entire entity framing outright.
I do get the frustration with all the false promises and other bs coming out of the CEOs and their helpful assistants, and with the perpetual funding rounds that keep amplifying and encouraging the behavior. This bothers me every day. But by narrating their lies as anything other than lies, papers like this just thicken the fog that is needed to fool normieland (and, apparently, investors in need of a moonshot).
That's such an untrendy thing to do nowadays.