
There have been many comments in the last year about the potential dangers of artificial intelligence, from such AI luminaries as Elon Musk, Yoshua Bengio, Geoffrey Hinton, Yann LeCun, Gary Marcus, and others. But they might not be the right people to listen to in this regard, because the threats of AI are fundamentally political. Most scientists and technical experts, however intelligent, have no training in politics. They generally do not have the mindset to think about politics, except for the regulatory impact on their sector. Nobody expects an inventor to grasp the political and social implications of his invention.
The Blind Spot of AI Threats
This explains why these AI experts usually make rather naïve and unimaginative comments regarding the threats of AI, such as: "we need to urge companies to pause AI," "the government definitely needs to be involved," "humans can hurt others with AI," we don't want "AI to fall into the wrong hands," because "bad actors" could use AI, and so on. The potential threats of AI are, moreover, sometimes minimized and sometimes exaggerated. What all these threat assessments have in common is that they never recognize the "bad actor" with the worst record of all: the state.
This is clearly a blind spot. For these AI scientists, the fundamental distinction between state and society is non-existent; it's always a collective "we" that needs to manage the potential threats of AI. This is precisely the warning that Murray Rothbard expressed so clearly in Anatomy of the State (1974): "With the rise of democracy, the identification of the State with society has been redoubled… The useful collective term 'we' has enabled an ideological camouflage to be thrown over the reality of political life."
Though it is difficult to distinguish the state from society in this age of statist interventionism and crony capitalism, it is essential to do so. The state, according to the standard Weberian definition, is "a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory." The state is thus, by its very nature, radically different from the rest of society. As Ludwig von Mises warned in Liberty and Property: "Government is essentially the negation of liberty." In other words, freedom suffers when state coercion increases. Though crony corporate power can influence government to get preferential treatment when the rule of law can be bent (as it often can), it is clear who holds the reins. It is necessary to abandon the myth of the "benevolent state."
Seen in this light, it is necessary to ask of every new technology to what extent the state controls it and its development. In this respect, the record of AI is poor, since most major AI players (Google, Microsoft, OpenAI, Meta, Anthropic, etc.), their founders, and their core technologies have been supported since their inception in important ways by US government funding, research grants, and infrastructure. DARPA (the Defense Advanced Research Projects Agency) and the NSF (the National Science Foundation) funded the early research that made neural networks viable, i.e., the core technology of all major AI labs today.
This evolution is not in the least surprising, since the state naturally tries to use all possible means to maintain and expand its power. Rothbard again: "What the State fears above all, of course, is any fundamental threat to its own power and its own existence." The threats of AI should therefore be seen from two sides. On the one hand, the state can actively use AI to enhance its power and its control over society (as noted above); on the other hand, AI could also represent a challenge to the state by empowering society both economically and politically.
Will AI Tilt the Balance of Power?
The threat of AI should therefore be assessed in terms of its potential impact on the uncertain balance of power between state and society, or, to express it more sociologically, between the ruling minority and the ruled majority. This relationship depends on who benefits most from new instruments of power, such as the printing press, modern banking, television, the internet, social media, and artificial intelligence. In some cases the state has used these tools to enhance its control, while others may empower society. Television, for instance, was a medium that arguably strengthened the position of the ruling minority, while social media is currently enhancing the majority's political influence at the ruling minority's expense. The same question therefore applies to AI: will it empower the state at the expense of society, or vice versa?
As seen above, the state got involved in AI long ago, already at the theoretical and inception stage. Today, fake libertarian Peter Thiel's Palantir is providing AI analytics software to US government agencies to enhance their surveillance and control of the population by building a centralized, national citizen database (including the nightmarish possibility of "predictive policing"). Anthropic is also teaming up with Palantir and Amazon Web Services to give US intelligence and defense agencies access to its AI models. And Meta will make its generative AI models available to the US government. It is true that such initiatives might, in theory, make the state bureaucracy more efficient, but that might only increase the threat to individual freedom. Worryingly, this development is considered "normal" and raises no eyebrows among AI industry journalists and experts.
From the point of view of society, AI will eventually lead to radical corporate changes and productivity increases, far beyond the internet's information revolution. The political consequences could be significant, since AI can give each individual a personal research assistant and provide simpler access to knowledge, even in fields with gatekeepers. Routine tasks can be taken over by AI, freeing up time for higher-value tasks, including political engagement. For instance, AI can make it easier to understand and check government activity, such as summarizing legislation in plain language, analyzing budgets and spending data, and fact-checking claims in real time, thereby reducing the knowledge gap between governments and ordinary citizens.
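To make that last point concrete, here is a minimal sketch of the "summarize legislation in plain language" use case. It assumes the openai Python package and an API key in the environment; the model name and the bill.txt file are illustrative placeholders, not a recommendation of any particular provider.

```python
# Minimal sketch: ask an LLM to summarize a bill in plain language.
# Assumes the `openai` package (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; "gpt-4o-mini" and bill.txt
# are placeholders chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_bill(bill_text: str) -> str:
    """Return a plain-language summary of a piece of legislation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the following legislation in plain language "
                    "for an ordinary citizen: who is affected, what changes, "
                    "and what it is likely to cost."
                ),
            },
            {"role": "user", "content": bill_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    with open("bill.txt") as f:  # a locally saved copy of the bill text
        print(summarize_bill(f.read()))
```

The same pattern extends to budget analysis or fact-checking: the citizen supplies the primary source, and the model does the tedious reading.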
Of course, this increased political empowerment of society could be stymied if access to AI is conditioned. If the state keeps the upper hand in AI, it could use surveillance, manipulation, or worse to weaken dissidents and discredit independent journalists who rely on AI, particularly where the state feels only loosely bound by its constitutional limitations. This is unfortunately the case not only in the US but also with most states and supranational organizations.
The future of AI—AGI, agentic AI, and physical AI—is only going to make this discussion of AI threats more important. These developments will enhance the state's capacity for rights violations, but also increase the opportunities and possible countermeasures at the individual and community level. Much could depend on whether the numerous AI functions of the future will be mostly open, decentralized, and encrypted. This future is still uncertain, but the political framework presented here arguably remains valid.
The political stakes involved with AI are far more consequential than those data scientists developing AI seem to recognize. The threats of AI are consistent with the threats that all new technologies represent if they are used nefariously by the state. It is essential, therefore, for the public not only to learn about AI and embrace its potential, but also to see it in the larger context of the political struggle for freedom.
It seems to me that the state is already putting its claws into the AI process with the new rules, regulations, and laws it is passing. Of course, the state can use AI any way it wishes, but the serfs and plebs cannot use it unless they follow the rules, regs and laws to eliminate the most efficient and knowledgeable use of it. Yes, the state is once again the gatekeeper if you are using AI, so just don't use it for anything the state finds undesirable. Do that without AI, or find an AI that is useful for you.
> the serfs and plebs cannot use it unless they follow the rules, regs and laws to eliminate the most efficient and knowledgeable use of it.
Another thought occurred to me: if people stop using the relatively uncensored web and start using chatbots exclusively, and government regulates alignment or, per the new Trump action plan, the source data, then this offers politicians, spooks, and other power figures a prime opportunity to rewrite history once more.
Overt censoring, like DeepSeek clumsily filtering out some "undesirable misinformation" (from the CCP's viewpoint) at runtime, is only the surface. Bias can be built in elegantly by simply not feeding the LLMs any data deemed undesirable. The only defense against this that I can think of would be building our own open-source data repositories for ingestion.
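As one way such a repository could resist silent manipulation, here is a minimal sketch of content-addressing a corpus: hash every document, publish the manifest widely, and let anyone re-verify. The corpus/ directory and manifest.json layout are assumptions made for this example, not any existing project's format.

```python
# Minimal sketch of a tamper-evident open data repository: every
# document is content-hashed and the manifest is published widely,
# so later silent edits or removals no longer match the digests.
import hashlib
import json
import pathlib

CORPUS = pathlib.Path("corpus")           # directory of archived documents
MANIFEST = pathlib.Path("manifest.json")  # published alongside the corpus


def sha256(path: pathlib.Path) -> str:
    """Content hash of a file; changing a single byte changes it."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def build_manifest() -> dict:
    """Map each document name to its digest; mirror this widely."""
    return {p.name: sha256(p) for p in sorted(CORPUS.glob("*.txt"))}


def verify(manifest: dict) -> list:
    """Return documents altered or removed since publication."""
    return [
        name
        for name, digest in manifest.items()
        if not (CORPUS / name).exists() or sha256(CORPUS / name) != digest
    ]


if __name__ == "__main__":
    MANIFEST.write_text(json.dumps(build_manifest(), indent=2))
    print("tampered or missing:", verify(json.loads(MANIFEST.read_text())))
```

The point of the design is that trust shifts from the host of the data to the widely mirrored manifest: curation can still be biased, but it can no longer be invisible.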

Yes, alignment. I really don't care what they are willing to call it; it is all the same. They are taking the totalitarian approach to information, forgetting that the truth will set you free. Oooopps… they don't want free or freedom, they want their leash, with the concomitant collar, around every neck. Slavery and serfdom are their goal and, by God, they will get it no matter what.
20 sats \ 3 replies \ @optimism 16h
So how do we resist?
Are you giving any kind of consent? How about implicit consent from silence? Just say, "NO." Non-consent by lots of people will stop what they are doing. Have you been practicing your armed self-defense regularly? That is how you say no and make it stick.
30 sats \ 1 reply \ @optimism 13h
Me? No. Explicit non-consent. Active discouragement and active countermeasures. But my influence is... limited.
But that's not enough, because I'm just one dude, and, as has been pointed out to me, I and people like me who opted out don't scale. Assuming this is true, my game should be (and is) enabling more than activism. So I'm basically looking into what's missing and what can be improved.
You can encourage other people to withhold their consent. Other than that, you are doing what you have to do. More and more people have to learn that they can withhold consent.