Whoever Controls Language Models Controls Politics

(Image: Vecteezy.com, free license.)

Thanks to João Ribeiro and shifter.pt, this text is now also available in Portuguese.

A German version appeared in Neue Zürcher Zeitung. You can also read it here.

The world of artificial intelligence thinks both big and simple at the same time – and has done so from the very beginning. When the workshop that launched the concept as well as the field of “artificial intelligence” was held at Dartmouth College in the summer of 1956, the self-imposed task was to figure out “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The expected duration of the project: two months.

Almost seventy years later – on March 28, 2023 – an open letter was published on the website of the longtermist Future of Life Institute, with more than eighteen thousand signatures to date, including those of Elon Musk and many renowned AI researchers. It calls for a moratorium on the development of large language AIs for at least six months. Systems like the large language model behind ChatGPT, the authors claim, have now become too powerful and too dangerous. There are “profound risks to society and humanity” posed by “human-competitive AI.” Until there is agreement on how to regulate this complex of technologies, all AI labs should refrain from further research.

If Dartmouth spectacularly underestimated how difficult the automation of intelligence would prove to be, the open letter is equally bombastic in drawing the wrong conclusions from the power of current language technology.

First, even today Dartmouth’s goal has not been achieved – for all their successes, GPT-4 et al. do not operate on a human level. Such fantasies are part of the hype around AI, which ultimately serves the companies that develop it. What better proof of a company’s power than its ability to distribute a product that can destroy the world? (Insiders soon speculated that the plan was to continue working in secret for the next six months, and that the real goal was to undermine the industry’s longstanding rule of open research.)

Second, and more importantly, the letter also speaks to a disastrous understanding of the interplay between technology and politics, both in terms of its dangers and the means to address them. While the fear that AI-generated text could flood information channels with falsehoods and propaganda is entirely valid, the letter is otherwise driven by apocalyptic fantasies about the total replacement of humans by machines and the “loss of control of our civilization.”

These are the concerns of “longtermists” – a utilitarian school of thought that gives possible future humans an incomparably greater moral weight than actual present ones. Its proponents – with whom Musk, too, feels a close affinity – think in terms of millennia, which is why the threat of a hyperintelligent machine worries them much more than, for example, the immediate damage of climate change, social injustice, or poverty.

But the risk of large language models like ChatGPT is not the technical catastrophe of malicious computers. Much more concretely, language models threaten to become a democratic disaster – through the privatization of language technologies as the future site of political public spheres. This is where politics and civil society must intervene.

The technological development of recent years has made this clear: the more data an AI system is fed, the more powerful it becomes – but the more expensive it is to develop. The competition for ever more comprehensive models has led to only a handful of companies remaining in the race. In addition to GPT developer OpenAI, these are Google’s DeepMind and Facebook; smaller non-commercial ventures and universities play virtually no role in achieving current size and performance records.

What we are faced with, then, is a new oligopoly that concentrates language technologies in the hands of a few private companies. These powerful players do not exert dominance over just any product – unlike, say, the company YKK, which produces 46% of the world’s zippers. Language models do not hold windbreakers together. Rather, the future of political opinion-forming and deliberation will be decided in LLMs.

Why this is so can be shown by looking at what has until now been seen as the biggest political problem with AI systems – their biases. LLMs model their output on the texts they have been trained on, which is more or less the writing of the entire Internet, including all the biases – the prejudices, racisms, and sexisms – that constitute much of it. Countering this means either censoring the output, as is done (to a degree) with ChatGPT, and thus rendering it potentially unusable; or, as is also practiced, filtering the data set for its undesirable components – and thus feeding the model a better world. This is an eminently political decision. Detoxifying AI necessarily involves formulating a social vision.
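To make the mechanism concrete, here is a minimal sketch – in Python, with an invented blocklist and threshold that stand in for whatever criteria a lab might actually apply – of what filtering a training corpus can look like. It is not any company’s actual pipeline; the point is that every entry on the list is already a normative choice about the world the model will learn.

```python
# Minimal, hypothetical sketch of a pre-training data filter.
# The blocklist and threshold are illustrative assumptions,
# not the actual pipeline of any AI lab.

BLOCKLIST = {"slur_a", "slur_b", "conspiracy_x"}  # placeholder terms
MAX_FLAGGED_RATIO = 0.001  # discard documents above this share of flagged tokens

def keep_document(text: str) -> bool:
    """Return True if the document is allowed into the training corpus."""
    tokens = text.lower().split()
    if not tokens:
        return False
    flagged = sum(1 for token in tokens if token in BLOCKLIST)
    return flagged / len(tokens) <= MAX_FLAGGED_RATIO

# Whatever survives this filter is the "better world" the model is fed.
corpus = ["scraped web text ...", "more scraped text ..."]
training_corpus = [doc for doc in corpus if keep_document(doc)]
```

Whoever compiles that blocklist – or trains the classifier that replaces it in real systems – has already made the political decision described above.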

This does not always have to be a conscious act. Because ChatGPT tends to represent more progressive values, conservative media have been quick to get excited about “woke AI.” In reality, however, this curation is more likely driven by PR considerations: sexist insults, extreme political positions, or racist output simply have a negative impact on tech companies’ profit margins. AI is always ideological, and even attempts to make it neutral – even if only out of economic interest – are doomed to fail.

However, decisions about the social vision that language models articulate are in the hands of a few companies that are not subject to democratic control and are accountable to no one but their shareholders. They thus become, to misappropriate a term by philosopher Elizabeth Anderson, “private government.”

Their product is the main resource that makes for a vital democracy: the language to negotiate political alternatives at the only level where this is possible – the political public sphere. Instead of our debating what kind of world we want to live in, that decision has already been made before a single word is exchanged, because the language at our disposal has itself already been subjected to a preliminary political decision.

It doesn’t help that such LLMs can of course also be steered toward the right, as computer scientist David Rozado showed when he recently created “RightWingGPT.” Indeed, a future in which a conservative language AI coexists with a progressive one would eliminate the discussion between social groups whose conflicts ideally contribute to the opinion-formation of an informed public. Instead of exchange, there would only be the reinforcement of already existing opinions; and unlike in the much-vaunted echo chambers of social media, it would not even be people who set the parameters of that discussion, but a complex system of natural language processing and profit-driven private corporations.

Thus, in the future, language models themselves may take on the status of a surrogate public sphere. As more and more texts are generated by AI systems – and it is very likely that they will be – the proportion of discourse produced by humans will steadily decline. And because language models, once trained, are difficult to change, and because they infer the future from the past, what Emily Bender et al. have called a “value lock” looms. It means that values become fixed in place due to the system’s inability to change, so that no amount of discussion can lead to a change of opinion; the result is a technologically produced political stagnation.

Whoever controls language models controls politics. The regulation of AI – which the open letter by Musk and Co. calls for primarily as voluntary self-regulation by the industry – cannot be satisfied with mere ethical guidelines. To be sure, it is absolutely necessary to create legal regulations that prohibit deception by AI or the use of private data for LLM training without consent. (This is where a standoff between Big Tech and EU data privacy laws is in the offing.) But it is necessary to think big here as well.

If AI systems become the site of articulating social visions, a dominant factor in the make-up of the public sphere, or even a political infrastructure themselves, there is much to be said for actually subjecting them to public control as well. If this is taken to its logical conclusion, the last resort would be, horribile dictu, communization – in other words, expropriation.

Seen in this way, the open letter appears in a different light. Not merely as the technological catastrophism of a group of hysterical “longtermists,” but as an attempt to divert attention from the political consequences of this technology. For those are much more concrete than the dictatorship of the machines – but regulating them is much more dangerous for the companies and individuals who profit most from the hype around AI.

