If Artificial Intelligence (AI) is to meet the potential some claim for it – autonomous intelligence – it will have to break free of its dependence on the human. In previous Blogs I have questioned whether that is possible. To act as a surrogate human is simply beyond the reach of any technology that mimics animal characteristics. We are just too complex. But beyond mimicking human animal characteristics – being able to respond to the environment creatively – AI faces the challenge of moving from autonomous behaviour to meaningful behaviour. Now we are in a whole different league of challenge.
I have already pointed to the difficulties of reproducing the AI equivalent of interests and values – beyond the first evolutionary stage of seeking survival and reproduction. Intentionality, having a purpose behind an action, is saturated with consciousness, and, to repeat, we have little philosophical or neuroscientific understanding of what consciousness is. There are levels of difficulty associated with intentionality:

Level 1: The chicken wants to cross the road.

Level 2: The chicken wants to cross the road to get to the other side.

Level 3: The chicken wants to get to the other side because the conditions for its survival are better there.

Level 4: The chicken wants to improve its survival chances so as to advance the possibilities of passing on its genes.

Level 5: The chicken wants to build a bridgehead on the other side for other chickens to share the advantage so as to advance ‘chickendom’.

And so on. As we raise the levels we intensify the challenge of consciousness – spatially, biologically, socially, perhaps politically.
The human species far outstrips these levels in its evolution and each level intensifies the challenge for AI. In a competitive environment we have discovered (not alone in the animal world) the value of altruism and collective action – but also, incredibly, of selflessness. Indeed, it sometimes seems that being moral has transcended prudence (‘we had better be moral towards others or they will be immoral towards us’). I think Buddhism is a good example of this with its core aim of transcending individualism. Protestantism – which first gave us individualism in Europe – lags behind.
So ‘being human’ in AI terms comes to look like reproducing a glove – a limply shaped thing that lacks vitality and has only latent energy. Humanism comes to look like the human hand that fills the glove and gives it a purposeful shape, redolent with meaning, poised for dynamic energy. Humanism is agency. But it is, under the conditions of living in a society, agency with a purpose, and that purpose is to go beyond being in order to understand becoming and the benefits that brings. So the challenge to AI and its aspirations for the robot-child to be autonomous from humans is that it has to acquire a sense, not of what it is to be a robot-child, but what it might be to be something else. This is the transcendent imagination, the creative envisioning that we expect of young children in their play – and don’t young children seem to excel at it, until we massage it out of them with schooling?!
I want to look at just one aspect of this – the capacity to see nuance.
There are many sources of humanism, but by far the most influential in Western culture was the Renaissance (I have little authority to speak of Eastern and Middle Eastern cultures which, to my understanding, were once far more advanced than we in this area). Let me sum up a complex idea in just two images – one, early Renaissance, the other carrying the essence of the movement. Here is the first, a sculpture by di Duccio in the mid-15th Century:
This sculpture goes beyond mere iconography, even though the faces of the Madonna and child are abstract. Still, we are drawn to the tenderness of the mother, the contentedness of the child, the almost adolescent protectiveness of the surprisingly young mother. There is a glimmer of a story here. In this sculpture, the Renaissance has set out on its humanistic journey towards enveloping the observer in the art, demanding that artist and viewer combine to discover meaning. Here we see the early stirrings of narrative and psychology in solid stone. But now look at what is surely one of the highest accomplishments of humanistic Renaissance art, a relief in marble (detail) by della Robbia, showing the birth of discourse – it is no less than a depiction of Plato teaching Aristotle:
The fluidity and dynamism wrought of solid stone is breath-taking. One can hear Plato talking to his most cherished student – “look – I’m trying to explain to you – you’re so close…can’t you see!” – and Aristotle replying, “but it says here – surely…!”. Beyond this even, there is a sense of a shift of ideas from one generation to the next – massive changes are under way – the evolution of understanding. The psychology of student and teacher is clear. The onlooker can engage with this depiction at many levels. I think of my adolescent interactions with Gerry Cohen, my first guru, with whom I struggled to learn to disagree.
Here is one of the birthplaces of humanism. What has happened as we shifted from di Duccio to della Robbia is that our judgement of quality has matured. Actually, we might talk of it in evolutionary terms. Both images have quality, but we can distinguish between one kind of quality and another – and we can know why we want to make that distinction.
What does it take for intelligence to develop to the point where the subtleties and nuance of the shift from one of these images to the other can be seen? I ask the question in a biological sense more than rhetorically. How and why would AI enter into such nuanced thought? If AI is to have a maturing sense of quality, it must have some reference point and criteria against which to judge quality. Once again, since the robot-child has no biography, no accumulated experience of evolution, the reference point has to be provided by us, the builders.
Otherwise, what we would be asking of AI is that the robot-child could not only develop a sense of self (consciousness) but could also generate a sense of celebration of the self it was aware of, and of its potential. The shift from the first image to the second involves a higher-order psychological shift from appreciation to meaning-making, from perception to interpretation, from objectivism to subjectivism. In terms of the levels explored in the chicken crossing the road, we have moved well beyond Level 5, and well beyond epigenetics, too. We have entered the philosophical realm of existential questions, which move beyond survival to ask why we would want to survive at all. Now, whether that is a challenge AI could ever take on – whether the robot-child could ask if its continuing survival has any worth – is a question worth asking.
There is a seamless connection between a worker engaged in a repetitive task and these Renaissance images: grasping nuance, making meaning, celebrating appreciation and judgement of quality are some of the characteristics that make us feel human – not detached from animals, but animals of a particular evolutionary kind. ‘Artificial Intelligence’ is an oxymoron – a self-contradiction. In these Blogs I have suggested that whatever intelligence is, it cannot be artificial – one of its characteristics is that it is in every sense natural. It grows out of natural evolution, its roots are in lived experience and biography, it is of our nature as a higher animal. The moment we detach it from nature we nullify it, we neutralise its latency, we empty it of energy. What is artificial is in no sense intelligent; what is intelligent is in no sense artificial. AI scientists may produce robots that can do back-flips and recognise our faces, that can beat us at chess and hold a rudimentary conversation, that put us out of work and make us more conservative – but they are and always will be confined to lower-order tasks and interactions. The robot-child is a chick that may never leave its nest.