Published 8 June 2023 (updated 4 July 2023) · Technology

'AI' and the quest to redefine workers' autonomy

Rob Horning

The phrase artificial intelligence is a profoundly ideological way to characterise automation technologies. It is an expression of the general tendency to discuss technologies as though they were 'powerful' in and of themselves—as if power weren't a relative measure of the different capacities and prerogatives of social classes. Instead, 'artificial intelligence' seems to suggest that the technology develops itself for its own reasons, exercising its capabilities independent of human political struggles. Its implications and consequences appear as uncanny and obscure—what does AI want? will it enslave humanity?—displacing to a speculative future the relentless malevolence that capital already abundantly manifests and which has already animated technology's development.

There is nothing particularly mysterious in the advances in machine learning fuelling the current AI mania. They derive from a massive expansion in surveillance capabilities, and the emergence of companies large enough to centralise and exploit all the data that they have unilaterally seized. Through the profligate application of energy-intensive processing power, this data is converted into predictive simulations of diverse work activities. Sometimes the point of the simulation is to replace human workers, as in the anecdotal cases highlighted in a recent Washington Post report on copywriters who have purportedly lost their jobs to ChatGPT:

Experts say that even advanced AI doesn't match the writing skills of a human: It lacks personal voice and style, and it often churns out wrong, nonsensical or biased answers. But for many companies, the cost-cutting is worth a drop in quality.

The simulations can also operate not to replace workers but to discipline them.
They serve as a standing reserve army of would-be scabs ready to work to a lower standard at lower cost, and also as normative points of comparison, allowing for the transfer of control over work processes to management. The simulations furnish data suggesting that management-imposed conceptions of how jobs should be performed are feasible and sustainable without human workers' input. This is in keeping with the surveillance-based management practices prescribed since the advent of Taylorism, if not before, as detailed by Meredith Whittaker in her account of the theories of Charles Babbage—an early proponent of computational machines. Babbage's 'ideas on how to discipline workers,' Whittaker argues, 'are inextricably connected to the calculating engines he spent his life attempting to build.' In the same way, 'artificial intelligence' is inseparable from capitalist efforts to manage labour profitably—profit provides the standard for what counts as 'intelligent,' just as 'smart' devices are the ones that place us under surveillance.

Like Taylor's time-and-motion studies, predictive simulations appear as correctives to the employee's inefficient use of their own embodied, cognitive abilities, abstracting from any contingent conditions to propose standards or outputs that can supposedly be valid in any given case. The dimension of abstraction, which renders workers interchangeable, is more significant than the standards or outputs themselves. Predictive simulation, as Sun-ha Hong has argued, is primarily not a technological instrument for knowing future outcomes, but a social model for extracting and concentrating discretionary power: that is, people's ordinary capacity to define their situation. Whoever imposes such systems cares less about the product—the generated output of a large language model, for example—than about how the systems disempower those who are subjected to them.
The 'social model' that predictive systems presuppose—in which each individual worker's contribution can be explicitly assigned and represented in terms of repeatable instructions—is more important than specific predictions. The uptake of automation technology, from this view, depends not on its performing work well enough but on its rendering work as data. It will seem useful to bosses to the degree that it makes workers' knowhow seem useless.

Karen Levy's Data Driven, a recent study of how new forms of surveillance have affected the long-haul trucking industry in the United States, examines this process. In the case of trucking, the Federal Government has mandated the installation of monitoring devices to prevent drivers from violating regulations dictating how many hours a day they can drive (rules that private companies had seemed inclined to overlook). This has allowed companies to install monitors that track far more data about truckers' performance, creating data flows that eliminate the driver's discretion and shift decision-making to automated and algorithmic systems.

As Levy notes, long-haul trucking makes for an interesting case for studying the effects of automation, because the industry relies heavily on compensating workers with a sense of autonomy. Trucks are understood by their drivers both as workplaces, relatively free of meddlesome bureaucratic oversight, and as homes, in which they live, eat, and sleep for days or even weeks at a time, and in which their privacy is sacrosanct. Similarly, driving a truck is only one facet of what trucking means to those who call themselves truckers. Trucking work is bound up with cultural constructs of manhood and virility, performed through displays of physical and mental stamina. Balanced against the industry's dangerous and exploitative conditions is a compensatory sense of independence grounded in the illusion of the boss's absence.
A similar logic applies to working from home when it is construed as a special employee benefit rather than a means of enhancing productivity. In both cases, the apparent freedom from direct human oversight serves as a pretence to impose automated forms of surveillance, subjecting more of the worker's time and behaviour to measurement and conversion into data. Under surveillance, the work is reshaped to become more machine-readable, and more of the worker's effort must be directed toward accommodating the monitoring rather than devising the most suitable ways to complete tasks. As Levy puts it, monitoring

abstracts organizational knowledge from local and biophysical contexts—what is happening on the road, around the trucker, and in the trucker's body—to aggregated databases, and provides managers with a trove of evidence with which to evaluate truckers' work in new ways and challenge truckers' accounts in real time.

Such intensified surveillance paves the way toward modifying more work processes in keeping with that data, while seeming to substantiate the possibility that the employer can ultimately automate the job altogether. As work becomes more surveilled and less autonomous, it becomes more cumbersome and more substitutable at the same time. Under such conditions, 'autonomy' is construed less as doing things one's own way and more as resisting the monitoring that strips autonomy away. All the forms of 'tacit knowledge' (to use Michael Polanyi's term) that go into work become less defensible as a source of productivity and more dismissible as mere employee truculence. Worker autonomy thus persists not as a particular kind of virtuosity and social practice carried out in concert with other workers but as an individualised and inflated identity fantasy (i.e. the trucker as 'lone wolf' or 'asphalt cowboy' conquering the 'open road').
At the same time, it remains an ambient justification for management's ever deeper intrusions into worker behaviour, regardless of how much surveillance has already been implemented. As more surveillance is put into place, what escapes it becomes at once more salient and more irrelevant. Focusing on warehouse workers compelled to wear devices that monitor and correct their activities, Hong writes:

Quantified expectations governing the algorithmic workplace cater to managers and employers' desire for a certain kind of inhuman clarity, in which the many variations and ambiguities inherent in any act of labor are not actually eliminated, but simply neglected.

The consequence is that, for the worker, their own work and life become both less predictable and less discretionary. For people working from home, this plays out in various forms of 'bossware'—monitoring suites installed on workers' devices (as detailed in this UK report). For truckers, Levy speculates that this may play out in increasingly invasive forms of biometric surveillance:

Rather than being kicked out of the truck cab by technology, the trucker is still very much in the cab, doing the work of truck driving—but he is increasingly joined there by intelligent systems that monitor his body directly and intrusively, through wearable devices and cameras, often integrated into the fleet management systems … AI in trucking is experienced as a hybridization of man and machine. Surveillance and automation in trucking are complements, not substitutes.

That surveillance and automation generally tend to appear as 'complements, not substitutes' puts the idea of AI 'augmentation'—an often-evoked potential silver lining that imagines workers assisted and even empowered by automation technologies—in a clearer context. Much AI, when implemented by management, is not some different kind of cognition but a more responsive form of employee oversight.
Like any other information technology, it can be inserted, as Levy notes, between work tasks and embodied knowledge—by breaking up work processes into discrete, rationalized, lower-skill tasks; by decontextualizing knowledge from the physical site of labor to centralized, abstract databases; or by converting work practices into ostensibly objective, calculable, neutral records of human action. Its purpose is not to empower workers but to 'legitimate some forms of knowledge while rendering others less valuable, to potentially detrimental effect on worker power.' Such technology, sometimes euphemistically described as a 'co-pilot' in the context of coding or other language tasks, is introduced to narrow the worker's focus and sense of possibility to only the kinds of embodied practices that can be expropriated—that are always already subsumed by capital and profitable for management.

'AI' appears not as a replacement for workers or an 'augmentation' of them but as what Levy calls 'compelled hybridization.' It is deployed as an intimate and dynamic overseer or, worse, as a parasite capable of altering its host's behaviour. Levy cites Heather Hicks's 2008 book The Culture of Soft Work, which describes how, when 'work activities coded in machine parts suffuse the human body, the results are humans not liberated, but under control.' The truckers Levy consults are repulsed by the idea of cyborg trucking—of being a meat puppet inhabited and impelled by capitalism's machines to maximise their self-exploitation. 'This is the felt reality of AI in trucking labor now,' she writes. And the 'algorithmic destruction of workers' bodies' in wearable-directed warehouse work that Hong describes is unmistakably grim and dystopian.
But one can also imagine a hybridising interface that combines the emotional manipulation of chatbots with the Skinner-box prodding of algorithmic management, such that the parasite makes you love its source, much as a Toxoplasma gondii infection makes people love cats. Perhaps it will look like Apple's recently demoed Vision Pro goggles, or perhaps it will look even more absurd.

In late March, OpenAI published a working paper called 'GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.' It is effectively a piece of marketing aimed at managers that hypes ChatGPT's potential to perform tasks abstracted from a range of different occupations, described as being 'exposed' to LLM predation. This methodology takes for granted and naturalises the effects of information technology that Levy singled out: dividing labour into discrete tasks, abstracting them from specific contexts, reducing them to data. The authors use it to conclude that 'most occupations exhibit some degree of exposure to LLMs, with higher-wage occupations generally presenting more tasks with high exposure.' These findings (which should be taken with a grain of salt) reverse the conventional assumption that whatever can be automated is ipso facto a 'low-skill' task, and that human workers will ultimately benefit from being liberated from such tasks. Instead, the paper promises managers a future in which more of their subordinates can be pushed out of positions that permit them to exercise judgment. The list of 'occupations with no labeled exposed tasks' to the new wave of 'AI' is telling.
It includes a wide variety of 'equipment operators,' 'helpers' and 'repairers,' as well as more colourful possibilities like 'roustabouts,' 'slaughterers,' and 'fish trimmers.' Many of the positions revolve around energy extraction: 'derrick operators' and 'electrical power-line installers.' Perhaps hippies will be reassured by the inclusion of 'motorcycle mechanics.' What these jobs most obviously share are robust physical requirements, implying that 'AI' has rendered everything else about us more or less economically useless. It suggests that a future dominated by automated cognition won't be one in which humans are freed from the 'bullshit jobs' David Graeber complained about to pursue some radical reordering of the sociopolitical life-world. Instead, it suggests human work reoriented toward maintaining the mechanisms of capitalism in a more literal sense—feeding them data and energy while making sure we keep our bodies suitably fit as they become the biomechanical extensions of software programmed for extraction.

Image by Ivan Bandura

Rob Horning is a former editor of New Inquiry and Real Life. He writes about media and technology at robhorning.substack.com.