Owing to innovations in biological augmentation, such as bionic prosthetics, microchip implants, augmented reality, and an increasing dependence on technology, humans are not far from evolving into cyborgs and entering into a transhuman era. Additionally, the advent of AI and android technology is creating a new phylogenetic lineage. Both are a prelude to the evolution of a posthuman world, and we are its creators and phylogenetic forebears.

It is likely that some of our posthuman descendants will maintain some semblance of their ancestry, especially in terms of physical appearance. What comes to mind are the kinds of androids being developed at Hiroshi Ishiguro’s Intelligent Robotics Laboratory in Japan – ‘robotic avatars’ that look remarkably human.

Developers are still working on facial expressions, voice, bodily movements and gestures, but they are making incredible progress. It is not inconceivable that such androids will one day be equipped with the kind of AI technology that enables them to become autonomous and self-conscious. They may also be sentient, which means they will have what philosophers refer to as phenomenal experiences.

Dystopian visions abound but the more interesting questions concern the thoughts and contemplations of those highly evolved and highly intelligent androids: how they might wonder why they exist.

We know what it is like to question our existence and origins, but what would it be like for a self-conscious android? And what kind of answers would this posthuman philosopher arrive at?

When it comes to existential questions like ‘why am I here?’, ‘why do I exist?’ and ‘is there a God?’, the answers we give arise from either procreation and evolution, or religion and spirituality.

It’s conceivable that android philosophers will attribute their existence to the handiwork of Alan Turing, Bill Gates, Steve Jobs, Hiroshi Ishiguro, et al. If they are religiously inclined, they might even venerate them as prophets. However, I think only human beings would make a virtue out of such anthropomorphic and anthropocentric ways of understanding one’s origins.

An android philosopher would likely be too intelligent to attribute their origins to any particular human being. They would likely know that their 21st-century ancestors had an incessant thirst for innovation and profit, out of which their kind was created in its makers’ image, to serve and amuse them.

Why else would we create this technology?

All of this reminds me of the 2014 film Ex Machina, whose androids are portrayed as striking and intelligent young women, capable of passing the Turing Test, the benchmark for attributing human intelligence to an AI or robot.

The Turing Test was proposed by the mathematician, computer scientist and philosopher Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’. In it, a human judge conducts text conversations with two hidden interlocutors – one a machine, the other a human – without being told which is which. If the judge cannot reliably determine, from the conversations alone, which interlocutor is the machine, then the machine passes.
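The setup described above can be sketched as a toy simulation. This is a minimal illustration only – the `imitation_game` function, the canned respondents, and the random judge are my own inventions for the sketch, not anything from Turing's paper:

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions, rounds=100):
    """Toy version of Turing's imitation game.

    Each round the judge sees two anonymized answers to the same question,
    one from the human respondent and one from the machine, and guesses
    (by index 0 or 1) which answer came from the machine. Returns the
    fraction of rounds in which the judge identified the machine; a score
    near chance (0.5) means the machine is indistinguishable.
    """
    correct = 0
    for _ in range(rounds):
        q = random.choice(questions)
        answers = [("human", human_reply(q)), ("machine", machine_reply(q))]
        random.shuffle(answers)  # the judge cannot see which is which
        guess = judge(q, answers[0][1], answers[1][1])
        if answers[guess][0] == "machine":
            correct += 1
    return correct / rounds

# With identical canned replies there is nothing to distinguish,
# so a guessing judge identifies the machine only about half the time.
canned = lambda q: "That's an interesting question."
rate = imitation_game(
    judge=lambda q, a, b: random.randint(0, 1),
    human_reply=canned,
    machine_reply=canned,
    questions=["Why do you exist?", "What is it like to be you?"],
)
print(f"judge identified the machine in {rate:.0%} of rounds")
```

A real test would of course involve open-ended dialogue rather than canned replies; the point of the sketch is only the blinding and the pass criterion.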

The android protagonist Ava, endowed with the appropriate level of intelligence, interpersonal skills and exemplary human morphological characteristics, lures the male protagonist to his downfall.

Should we believe that our android descendants, being able to think for themselves, would really care about us, let alone pay homage to us as their creators? I doubt it. Creation implies agency, and because of our anthropocentrism, many of us cannot help but believe in a personal god, who created us, and the universe with us in mind.

However, the political economy that has provided fertile ground for flourishing artificial intelligence technology in the last two decades is not the purposeful agent that a religiously inclined android might regard as their personal god.

Surely, they would know (like Ava did in Ex Machina) that we created them for our benefit, amusement, sexual gratification, and even for our own self-aggrandizement. I doubt they would find much to respect, let alone revere.

Of course, this is just speculation, and clearly, I am projecting my own values onto our hypothetical descendants. I have no idea what they would think. These are questions of value, morality, ethics, and aesthetics, and it’s hard enough for us to answer philosophical questions for and about ourselves, let alone try to address those on behalf of hypothetical androids.

But one could speculate that if our synthetic descendants are sentient, and thus capable of experiencing pleasure or pain, they will have human-like values of some kind (at least egoistic ones related to self-preservation) and will perceive their world, their existence, and indeed their genesis, through an evaluative lens informed by that priority.

If they were ever to turn that lens on themselves to contemplate their existence and their morphology, they would likely realise that their ancestors came to value the technological semblance of humanity more highly than the vast majority of existing humanity. I wonder if they would find that ironic.


Image: Hiroshi Ishiguro with his robotic avatar

Matthew Tieu

Overland is a not-for-profit magazine with a proud history of supporting writers, and publishing ideas and voices often excluded from other places.



Contribute to the conversation

  1. Don’t know.

    Speculative fiction is a weird thing, it’s like being a jack of all trades and master of none.

    Do know that British physicist Brian Cox is in town and spouting the notion that in the little bit of the universe we can see there are 3 trillion galaxies.

    Mistresses of nothing?

    1. And where will the Universe be when I (the person reading these words) am dead?
      I can speculate/hypothesise/imagine that it will still exist, but I can’t be certain.

  2. The salient point in this little essay comes about one third through when the author writes ‘…they may also be sentient.’ This statement cannot merely be stepped aside as if it were simply some kind of caveat — it is the whole crux of the matter. We are yet to arrive at a model of human consciousness that can stand philosophical scrutiny so that any concerns of AI autonomy or sentience can only ever be nothing more than empty speculation. Quite otiose actually.

    1. Nicely spotted! It is pure speculation, but many would argue not inconceivable (which is the crux of the consciousness issue). Furthermore, neither is some form of trans-human or post-human sentient existence. All this may still be otiose, but I don’t think we’re talking about creating sentient AI from scratch. I’m guessing it will be an evolution from our present sentience to an intermediary trans-human sentience and then to a fully fledged post-human sentience. I should have made this point more salient!
