Reciprocal human investment


Ever since the introduction of ChatGPT, we have been repeatedly warned of a coming flood of automatically generated content that will inevitably overwhelm us. Email inboxes, social media platforms, retail sites, and search engines will all be rendered useless by automated spam, as if we weren’t already underwater. This panicked discourse sometimes proceeds as though there were still spaces where gatekeeping mechanisms worked for us rather than in the interests of advertisers.

We are, of course, already systematically inundated with messages that we don’t want and which are meant to manipulate us. The spaces most vulnerable to being swamped with AI material — which, like advertising, is interruptive and untrustworthy by definition — are those that have already been lost to the older modes of ‘inauthentic content.’ Platforms are already so routinely devoid of presence and substance that no one seems to expect that machine-made posts would stand out amid the flow. The AI spew may even ultimately re-enchant conventional advertising, giving it a kind of nostalgic, handicraft genuineness by comparison.

The advent of generative AI has helped us pretend that automated communication and all that it implies remains a future threat rather than an ordinary aspect of contemporary life and the full instrumentalisation of human relations. In a recent piece for The New Republic on the ‘year in AI,’ Lincoln Michel recounts watching a video demonstration of Google’s AI tools:

The video featured AI briefly summarizing emails someone hadn’t read. Then it demonstrated generating new emails to reply with. It’s easy to extrapolate. The recipients will use AI to avoid reading that email and generate new AI replies for others to avoid reading. How soon until everyone’s inbox is overflowing with emails no human has read or written? Why stop at emails? AI can write book reviews no one reads of AI novels no one buys, generate playlists no one listens to of AI songs no one hears, and create AI images no one looks at for websites no one visits.

But nothing about that climate of avoidance is particularly new. I have to admit that my first reaction to this kind of foreboding is indifference. Who cares if AI books are reviewed by AI critics? No one is going to force me to engage with any of it; in fact, it will exist only to reinforce the idea that no sort of engagement is required with anything. The machines can go on making all their nowhere plans for nobody, which by contrast will help make me feel like I am comfortably somewhere.

Michel’s concerns echo a critique that has been familiar ever since autocomplete tools were first introduced: everyone will just point their respective AI assistants at each other like mirrors and generate an algorithmic mise en abyme, an infinite recursion that proliferates more and more text with ever dwindling substance. Only here it is extended from a work context to encompass the entire cultural sphere, as though generative AI’s capabilities will compel all of us to view culture as just another bullshit job (to borrow David Graeber’s term), a perpetual circulation of meaningless tokens as busywork, universalising the feeling that any kind of communication must be perceived as a chore to be optimised.

Rather than a reciprocal exchange, an orchestration of a common purpose or common interests, communication appears as abstract labour — an intrinsically empty and fungible form of work that can be performed unilaterally by anyone or anything, and whose only purpose is to be made more efficient.

The idea that technology will extricate our consciousness from the rote communication demanded of us is sometimes presented optimistically, as though this will free us up for the really meaningful conversations we haven’t been able to make time for. For instance, in 2018 Google touted that its Smart Compose feature had ‘already saved people from typing over 1 billion characters each week—that’s enough to fill the pages of 1,000 copies of Lord of the Rings’ — a bizarre statistic that seems to suggest that imaginative writing is no more than typing, and that the effort to type one kind of document can be immediately diverted to typing more rewarding ones.

But automation doesn’t free people up for meaningful tasks — it deskills the tasks they are required to perform, making them more rote, depleting, and mind-numbing. There is no reason to suppose that generative AI will do anything different to language-oriented tasks. It will make them less meaningful to us even as we have to do more of them. Think of the piece workers described in Josh Dzieza’s exposé, clicking yes or no on an endless series of decontextualised language fragments to train tomorrow’s AIs.

Part of the critique of autocomplete and now generative AI is that they will cause the capability and, eventually, the desire to write to atrophy: they won’t make unwanted communicative tasks easier — they will make the entire idea of communication seem harder. Given that so much of the language we encounter will have been generated to stupefy and deceive us, it becomes easier to imagine that people will come to see language use itself as tech companies expect us to: as a hassle, as something more and more difficult to initiate with any expectation of good faith. Of course, if you don’t care about speaking in good faith, AI will be very helpful to you.

Michel’s vision evokes a future in which the spaces for ‘meaningful conversation’ (whatever that was) will be polluted with an unstoppable avalanche of automated chatter that will make it pointless for us to try to compete for attention there. It assumes that automated content, like advertising, will be injected into any available discursive space to pre-empt the possibility that any sort of intersubjectivity will develop.

Generated content creates a retroactive illusion about the ‘meaningful interactions’ we’ve lost, framing a fantasy of some form of purer communication that we must get back to, a reified thing that we can achieve unilaterally through some supremely earnest act of authentic being. But then nothing is ever real enough, not merely because it is immediately subject to simulation or contamination but because ‘realness’ depends on reciprocal human investment, and generative content will make that harder to perceive. Instead we will spend more time engaging with agents that constitutionally can’t reciprocate: both the machines and the companies they speak for.

It’s not that we will come to depend too much on chatbots — that we will come to find them adequately informative or emotionally satisfying — but that we won’t have the resources to prevent chatbots from continually haranguing us from all corners and all channels. The skills necessary to communicate with other people or to even carry out an inner dialogue with oneself will presumably erode as we are cocooned in thickets of automatic language aimed at eliminating the need for any effort of attunement. The AI-generated books will read themselves and tell us what they are about, and we won’t be able to get them to shut up about it.

 

Image: MC Escher, Convex and Concave

Rob Horning

Rob Horning is a former editor of New Inquiry and Real Life. He writes about media and technology at robhorning.substack.com.



