The philosophy of Humans

Humans, the UK-US adaptation of a Swedish series about the complexities that arise when a super-realistic, super-responsive form of android becomes available for use in the home, is premiering on Australian TV – insofar as that last clause describes a meaningful event any more. It produces, in this viewer, the same combination of interest and extreme irritation that much sci-fi produces in those not addicted to the genre.

The reason is not hard to see. The whole set-up is asinine from the start, but in a way that is of philosophic and political interest. The androids are made to look like humans, and used for a variety of domestic and care work. They can interpret and communicate in language as well as any adult, and they are stuck between worlds of meaning and feeling – as well as being unfussed about being switched off at night.

In other words, it’s all a jumble. The androids have crossed the hermeneutic line, acquiring a vestigial world of meaning – yet they have not developed a vital and continuous care about continuing to exist; an autonomy. They remain complex machines, yet they are capable of interpreting nuanced human utterances – even though such interpretation demands the capacity for meaning, the capacity to understand by having intent. And so on. In that respect, it’s simplistic sci-fi, grooving on cheap wonder.

It’s also obviously about domestic, affective and other such labour – people brought from half a world away to care for aged white folk, or clean up their homes. The feeling that an android is ‘partial’ mimics the awkward relationship first-world people have with such staff. Without the cocoon of old classist and racialist ideas of people’s ‘proper station’, the encounter with a real other must be had over and over again. Language barriers and various forms of implicit condescension protect one for a time – and then you see a hospital cleaner in janitor scrubs and shoe covers sobbing from homesickness into a 7-Eleven payphone, and the full humanity returns with a jolt. Maybe it’s not coincidental that the show came from Sweden, which has only recently augmented its largely political and student immigration with larger numbers of globalised workers.

But that’s less interesting than thinking about where the central error of all such shows lies – and why, beyond a certain point, they’re boring, one-dimensional ideas about the future. At their core, they’re Cartesian: dualistic, not about body and soul, but about body and consciousness, or subjectivity. They extrapolate from the development of recursive ‘neural network’ computer design, which allows programmes and machines to react, learn and adapt, and presume that the accumulation of such abilities will eventually cross a hermeneutic barrier, enabling machines to learn from and respond to the communication and actions of everyday humans. This is the ultimate idea arising from analytic philosophy, coming from Descartes, via Hume, to Russell and others. In the Anglo world, it has become the naturalised idea about how the world works – for scientists and just about everyone else. The master error is that meaning – of a sentence, of a gesture – comes from its representational content, and not from the desire or intent which it is bodying forth.
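To see how thin this ‘learning’ is in practice, consider a minimal sketch – not of any real android, but of the simplest ancestor of those neural networks, a single perceptron, written in Python for illustration. It adjusts numeric weights until its outputs match a target pattern (here, the logical ‘or’ of two inputs); nowhere in the loop is there anything resembling intent or desire.

```python
# A single perceptron 'learning' the logical OR function.
# It nudges numeric weights until inputs map to the right outputs:
# pure pattern-fitting, with no meaning anywhere in the loop.

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x1, x2):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for epoch in range(20):                   # repeated exposure to the examples
    for (x1, x2), target in data:
        error = target - predict(x1, x2)  # how wrong was the guess?
        w[0] += lr * error * x1           # nudge weights toward the answer
        w[1] += lr * error * x2
        b += lr * error

# After training, the machine reproduces the pattern perfectly...
print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
# ...but 'or' means nothing to it: only weights that happen to fit the data.
```

The extrapolation the show relies on is that stacking up vastly more of these weight-adjusting units will eventually yield interpretation; the essay’s argument is that no amount of such fitting, by itself, crosses that line.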

That conception of meaning and being comes from the other side of the Cartesian heritage – out of Descartes, and via Kant, Hegel, (Marx), Husserl, etc. In this version, meaning is always embedded in a whole world; the being of the subject is simply one side – a pole – of which the other is the intersubjective and multilayered world. Developing as an embodied, born, human, and growing into these myriad worlds may not be the only way to become such, but it’s the only way we know so far.

Then there is the third, ‘suspicious’ tradition of will and desire by Schopenhauer, Marx, Nietzsche, Freud, Lacan – and later figures such as Gregory Bateson, for example. In this, there is neither meaning, nor subjectivity without desire, arising out of need. The infant is sufficiently wired up for instinctual needs to become desires which in turn shift, recombine, and eventually make possible selfhood and being-in-language.

And it is these two traditions that raise the question: how could an android possibly want anything? And if it cannot want anything, how could it possibly enter the world? And if it could not do that, how could it even begin to act and react in a meaningful way, beyond the simplest repertoires gained by pattern recognition?

There is, of course, a counter-argument to this critique, coming out of approaches such as Latour’s actor-network theory and the like (which is for another time), but one big piece of evidence in favour of the ‘suspicion/desire’ tradition is the absence of any real progress in ‘situated’ machines. There has been none. They are as incapable of the simplest interpretations (as opposed to pattern recognition) as they always were.

That doesn’t mean such machines could never be created. But to have meaning in our world they would have to be other-oriented – in other words, they would have to be raised. That doesn’t imply biology or organic composition – but it does imply some form of layered development in which the subject is produced by their desires.

And that would remove the ethical and ontological dilemma at the heart of Humans. Because the creature – whatever it was and whatever it was called – would unquestionably be a person. Which may be the conclusion that a lot of such sci-fi is trying to help us avoid.

Guy Rundle

Guy Rundle is currently a correspondent-at-large for Crikey online daily, and a former editor of Arena Magazine. His ebook, And the Dream Lives On? Barack Obama, the 2012 Election and the Great Republican Whiteout, is forthcoming.


Overland is a not-for-profit magazine with a proud history of supporting writers, and publishing ideas and voices often excluded from other places.

