One question is whether the computer may replace the human – I will take issue with that. A separate question is whether it will hold a candle to humanism. We’ll see.
For now – sausages.
When ‘mad-cow disease’ burst onto the scene (BSE, or Bovine Spongiform Encephalopathy; variant Creutzfeldt-Jakob disease in its human form) it shook me. The beef sausage was a regular favourite with my English breakfast. What goes into a regular beef sausage sometimes has only a tenuous connection to the cow, but cow’s brains are certainly there. I may often have ingested the rogue prion protein that gives rise to the disease. On a dull, overcast Saturday morning (the day of my breakfast treat), feeling low with work, I would shrink from the sausage – do we truly think that the meat industry would stop putting brains into sausages just because DEFRA regulations prohibit it? The very sight of a sausage evoked the dark menace of a 30-year incubation period and the horrors of the disease. What if…?
But there were occasions when I felt airy, and the sunshine on the Gloucester Road would cut into my vision, and I would breezily tell myself that the damage had already been done, or that no beef farmer in their right mind would still be putting brains into sausages. I would order the sausage. What are the chances…?
My thinking is not linear: it intersects with my emotional state and it is mediated almost arbitrarily through immediate circumstances and surroundings. This is a massive challenge for Artificial Intelligence to reproduce, given the unpredictabilities and caprices involved. Whimsy is hard to reproduce artificially, and yet it underpins much of our daily decision-making.
It is the non-linearity that is interesting, because non-linear does not mean irrational or incoherent. On non-sausage-eating days I was responding to the scale of the potential consequences of ingestion; on sausage-eating days I was responding to the scale of the risk. These are two quite distinct forms of rationality – each coherent in its own way. Similarly, we can respond to air travel or terrorism in two ways. The chances of a terrorist incident touching us, or of a plane falling out of the sky, are vanishingly small – the scale of the risk is minute. But if either were to happen, the scale of the consequences would likely be massive. On Mondays, Wednesdays and Fridays I am mindful of risk; on Tuesdays, Thursdays and Saturdays I am mindful of consequence. On Sundays I stay home and don’t bother with terrorists, planes or sausages. Every day is coherent.
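To make the two rationalities concrete, here is a minimal sketch in Python, with every number invented purely for illustration: one rule weighs the probability, the other weighs the magnitude, and a calendar switches between them much as I did.

```python
# Two coherent decision rules applied to the same facts.
# All numbers are invented for illustration only.

P_CONTAMINATED = 1e-9    # assumed chance a given sausage carries the rogue prion
COST_OF_DISEASE = 1e6    # assumed subjective cost of contracting the disease
COST_OF_SKIPPING = 10    # the small loss of forgoing a breakfast treat

def risk_minded(p, cost_disease, cost_skip):
    """Weigh the probability: eat if the expected loss is below the cost of abstaining."""
    return "eat" if p * cost_disease < cost_skip else "abstain"

def consequence_minded(cost_disease, catastrophe_threshold=1e5):
    """Weigh the magnitude: abstain whenever the worst case is catastrophic,
    however unlikely it may be."""
    return "abstain" if cost_disease >= catastrophe_threshold else "eat"

for day in ["Monday", "Thursday", "Sunday"]:
    if day in ("Monday", "Wednesday", "Friday"):
        print(day, "->", risk_minded(P_CONTAMINATED, COST_OF_DISEASE, COST_OF_SKIPPING))
    elif day == "Sunday":
        print(day, "-> stay home")
    else:
        print(day, "->", consequence_minded(COST_OF_DISEASE))
```

Neither rule is hard to program; what is hard is whatever it is in us that chooses between them.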
This is, indeed, one of the principal challenges of Artificial Intelligence – how to detach rationality from linearity while maintaining coherence. Of course computers can be programmed to ‘learn’ from trial and error, or to act in arbitrary or random ways that make sense within parameters. These are simple challenges. Replicating what happens in the human mind and heart as we shift from risk to consequence is of a different order. Why?
There are a number of reasons, but one I will dwell on here: within us there are conversations that happen at varying levels of consciousness – I speak with myself, as it were – in ways that defy our understanding, and so defy our capacity to ‘machine’ them. Consciousness remains the deepest mystery of philosophy and neuroscience (who is the ‘self’ inside your head that tells you who you are?). Here’s a quick introduction to the many facets of the problem:
https://www.youtube.com/watch?v=ir8XITVmeY4
What you notice here is that whatever consciousness turns out to be, its origins lie in evolutionary, material needs. We need to be able to predict the behaviour of a competitor, for example, and so we need a sense of ourselves as others see us – a sense of what a predator is looking at. Indeed, contemporary philosophy suggests that morality (what is the right thing to do) grew out of prudence (what is the sensible thing to do for survival). So consciousness is shaped by a long history of experience, behaviours, thoughts and so on. It is biological – but of the highest order. Can AI simply short-circuit those histories and jump in at the end? Or can AI researchers recreate the historical rationale experimentally? Is consciousness just a surface behaviour that can be stripped of its historical meaning?
We have had various stabs at working out aspects of consciousness – self-awareness among them. One classic attempt is Freud’s contention that there are different levels of cognition: a subterranean, underlying level at which we frame responses to past experience (the id), with upper, more conscious levels shaped by it (the ego and super ego). It is at this subterranean (unconscious) level that motivations are determined. The id is the primitive core, present from birth; the super ego is the internalised voice of parental and social standards; and the ego is the level that mediates between the other two to produce behaviour. Knowledge, experience and insight are processed across these three levels, often in ways we are not aware of.
This business of ‘levels’ of consciousness runs right through psychology and through the more sophisticated approaches to learning theory (as an aside, for whoever happens to be an education minister: since we do not know what consciousness is, it is impossible to know what learning is – i.e. changes to consciousness. Of course, we can pretend – which is what we mostly do). So, for example, Michael Polanyi famously suggested that we interact with knowledge in two ways: at the tacit or ‘intuitive’ level and at the explicit or ‘propositional’ level – a Freud-lite approach. At the intuitive level we do things and get by… well… intuitively – which is to say that our consciousness is suppressed; we are not aware of what we do, we are on ‘automatic’, like riding a bike. A teacher teaches the way they do because it feels right or because that is how things turn out given their personality. This is efficient in helping us to manage complex actions on a daily basis. We couldn’t ride a bicycle if we had to think constantly about balance, direction, locomotion and so on. We assign all that to the intuitive level. But if we want to improve our teaching, take control of it – if, say, we were Olympic cyclists and wanted to examine every source of every ounce of energy – then we switch to ‘propositional’ mode: we put our understanding and experience into terms that we can share with others, through which we can ‘propose’ things, and we do look at all the complex elements of riding the bike. This is very similar to the modern popular preoccupation with ‘dual-system thinking’ – what Daniel Kahneman calls thinking ‘fast’ and ‘slow’, or ‘System 1’ and ‘System 2’ thinking. I’ll leave you to look that one up – it’s all over Google.
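The dual-system idea translates quite naturally into code. What follows is a hedged sketch of my own, not Kahneman’s or Polanyi’s model: a fast path that answers from cached habit, and a slow path invoked only when the fast path fails or the stakes demand deliberation.

```python
# A toy sketch of dual-system ('fast'/'slow') processing.
# This is an analogy of my own, not a published cognitive model.

habit_cache = {"ride_bike": "just pedal", "make_tea": "kettle on first"}

def system1(task):
    """Fast, automatic, 'intuitive': answer straight from habit."""
    return habit_cache.get(task)

def system2(task):
    """Slow, effortful, 'propositional': work the task out explicitly."""
    plan = f"analyse '{task}' step by step"  # stand-in for real deliberation
    habit_cache[task] = plan                 # practice pushes it back down to habit
    return plan

def think(task, high_stakes=False):
    """Use the fast path unless it fails or the stakes demand deliberation."""
    answer = system1(task)
    if answer is None or high_stakes:        # the Olympic-cyclist switch
        answer = system2(task)
    return answer

print(think("ride_bike"))                    # intuitive mode
print(think("ride_bike", high_stakes=True))  # propositional mode
```

Note how practice pushes the deliberated plan back down into habit: the cyclist’s hours of propositional analysis eventually become intuitive again.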
Rom Harré, a psychologist and philosopher of some standing, takes this a little deeper. He has the notion of an ‘inner’ and an ‘outer’ or ‘social’ conversation. As a ‘person’ we interact with others in the social ‘conversation’: we talk in pubs, we fight in boxing rings, we watch telly or argue in parliament. The results of these experiences we take back with us into our minds and process cognitively – in ways that are mostly hidden from us. We learn from the parliamentary argument or the boxing match. We take that learning back, switching from ‘person’ to ‘self’, and the self reflects on it, shapes and reshapes it – most obviously by measuring it against our values and our interests. Now this raises the Freudian question of where values and interests come from – other, possibly deeper, levels. What might a deeper level be for a robot-child? Can a robot-child have these inner levels, which seem to be so important in giving rise to our sophistication?
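Harré’s two conversations can at least be caricatured in code. The sketch below is my own toy rendering and nothing more: an outer method that takes in social experience, and an inner method, hidden behind the underscore, that reworks it against values.

```python
# A toy caricature of Harré's outer (social) and inner conversations.
# My own illustration, not Harré's model; the names are invented.

class Person:
    def __init__(self, values):
        self._values = values        # an inner level, hidden from the outside
        self._beliefs = {}

    def social_conversation(self, topic, experience):
        """Outer conversation: take an experience in from the social world."""
        self._beliefs[topic] = self._inner_conversation(topic, experience)

    def _inner_conversation(self, topic, experience):
        """Inner conversation: reshape the experience against values.
        In a person, even the 'self' sees this processing only dimly."""
        value = self._values.get(topic)
        if value:
            return f"'{experience}', weighed against my value of {value}"
        return f"'{experience}', taken at face value"

p = Person(values={"welfare": "fairness"})
p.social_conversation("welfare", "a parliamentary argument about benefit cuts")
print(p._beliefs["welfare"])
```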
But we know that we are driven by some interaction between values and interests – are we in this job or this relationship because we believe in it, or because it suits us, and in what proportion of each? Now we shift from the human to the humanist and to the upper reaches of the challenge for AI. All our actions in Harré’s ‘social conversation’ are driven by a combination of interests and values – broadly, our motivations towards ourselves and those aimed at others. The balance we strike – for example, the balance between our selfishness and our altruism – is a measure of our humanism. We can think of ourselves as more or less humanist, and we can think of our collective arrangements (e.g. social policy) as more or less humanist. There can be little doubt that as austerity-peddling politicians have played on our personal fears and exploited our economic vulnerabilities, so, on balance, we have allowed society to become less humanist in its treatment and consideration of the poor and the vulnerable.
Even if we could answer the question of what there is for a robot-child to value, we must ask how an AI machine enters into a struggle between what it values and where its material interests lie. And this raises the deeper question: how does it use internal equivalents of cognitive processing to resolve that struggle so as to arrive at a decision on action? How would an AI robot-child vote? Would it be more likely to vote Conservative or Labour?
By necessity, the world envisioned by AI is one in which our aspirations for the robot-child are reduced to a lowest common denominator. The robot-child may well have material interests (it has to survive), it will have a sense of prudence (what is the best or safest option among many) and it will learn in order to manage external transactions with its environment more efficiently. It may have a sense of competition and may even be driven by its material interests into co-operation and altruism. But this is still at the level of a chick in a nest. The higher functions require higher-level attributes which we still do not fully understand: empathy, value, existential fear, loathing and desire. Some would say – Richard Rorty and John Rawls, leading philosophers of humanism, would go along with this – that material interests tend to lead us towards selfish conservatism, and to vote Conservative; while values-based interests tend to lead us towards empathy and altruism, and to vote Labour.
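Even a crude formalisation of that values/interests balance exposes the hard part. The sketch below is my own construction, nothing from Rorty, Rawls or Harré, and every weight and score in it is invented: the machine can compute the balance easily enough, but the weight itself has to come from somewhere.

```python
# A crude formalisation of the values/interests balance.
# The weight 'alpha' and all scores are invented for illustration.

def choose(options, alpha):
    """Score each option as alpha * self-interest + (1 - alpha) * values.
    alpha near 1.0 is pure self-interest; near 0.0, pure altruism."""
    def score(opt):
        return alpha * opt["self_interest"] + (1 - alpha) * opt["values"]
    return max(options, key=score)["name"]

options = [
    {"name": "vote Conservative", "self_interest": 0.8, "values": 0.2},
    {"name": "vote Labour",       "self_interest": 0.3, "values": 0.9},
]

print(choose(options, alpha=0.9))  # an interest-driven robot-child
print(choose(options, alpha=0.2))  # a values-driven robot-child
# The unanswered question: what process inside the machine sets alpha?
```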
At this point there is an important detour into epigenetics, which I will look at briefly in the next Blog in this series. But to recap: we are exploring the complexities of consciousness which make artificial reproduction of thought, intent and action a remote possibility – if it is attainable at all other than at the level and speed of evolution (meaning that AI might have to go through millennia of evolution to catch up with who we are as humans).