AI/1: Artificial Intelligence, Dadaism and Democracy

In this series of Blogs I want to focus on Artificial Intelligence (AI), which exercises us greatly today. Here we are again with the emergence of another ‘single narrative’ – computers will take over, put us out of a job, become more intelligent than us; robots will displace our futures. It would be foolhardy to oppose the idea. These are unstoppable forces, undeniable logics, backed up by… SCIENCE!

Really!?

At the immediate, material level AI threatens our jobs. Robots can do much of what we do (it is said), and we (workers) are likely to be confined either to helpless unemployment and cutthroat competition for the few jobs that will remain, or to the leisure society in which all are paid a basic wage and education/leisure pursuits replace labour. Which do you choose?

I personally would vote for letting robots take the strain while I concentrate on my writing. But I fear that the usual corporate combination of power and greed will deny us the leisure society and will merely plan for a greater distribution of poverty and despair. This is being modelled ruthlessly in the Great Austerity Project, after all, and that project is scoring runaway successes at distributing poverty. The real fear that looms behind AI is that it will serve merely to reduce labour costs and further intensify the concentration of wealth – that it will undermine democracy even more than the (at the time of writing) Theresa May government. This is, perhaps, the real threat – not robots themselves.

The route back to hope and humanism lies in insisting on a connection between AI and democracy. Wherever AI goes (in these Blogs I will propose severe limits on what AI can do for us – or against us) it will raise questions about citizen rights to wellbeing, employment, self-determination, cultural engagement and so on. All of these rights and more are threatened by the way we are thinking about AI and robots. Much work – even manual work – may involve the repetition of simple movements that can be automated, but all work is saturated with self-belief, identity, fear and hope, and collective endeavour. Work is where we find and lose solidarity and meaning. We may replace mechanical tasks and simple decisions, and we may make many people unemployed – but the greater wound to society would be the erosion of a meaningful life.

As always, science is far too important and dangerous to be left to scientists. Paul Feyerabend was a philosopher of science and something of an enfant terrible – he described himself as a philosophical anarchist (well, a Dadaist, actually, but few of his many critics understood the nuance). At one point he argued – not entirely tongue-in-cheek – that scientific theories should be voted on by the citizenry before they are accepted as legitimate. His point was that we would merely end up with a different configuration of scientific ideas – not a better or worse one – and, along the way, we would submit scientific authority to democratic scrutiny.

It’s hard not to be sympathetic.

In a perhaps more considered way, Gary Werskey wrote a book entitled The Visible College, in which he traces the biographies and left-wing ideological beliefs of five leading British scientists. He was exploring the relationship between science and democracy. In his Foreword Robert Young, the publisher, explained why he was publishing the book: “the future – indeed, the very existence – of civilization depends on getting right the relationship between expertise and democracy”.

Indeed.

So, before we accept too easily the single narrative that AI is beneficial, aspirational and, in any case, unstoppable, we need to multiply the narratives around this major incursion into our future wellbeing. If we cannot vote on AI and all the expertise that clothes and protects it, we can at least have decent, searching conversations around it – we can and should submit AI to the judgement of ‘the social conversation’.

As always in my Blogs, I start by insisting that all ideas can be rendered in conversational form – that there are no ideas free from popular, democratic scrutiny, no matter how hard scientists try to tell us that their work is elevated beyond deliberation. We can research and discuss all ideas. So in these Blogs I will write about psychological and learning theory, epigenetics, the Renaissance (again), and moral philosophy – but all, I hope, in accessible, easily graspable and fun ways that do no violence to the science. As abstract and dry as these topics may sound, it is actually entertaining – and deadly serious – to engage with them, and I hope these articles are provocative, informative and engaging. And I hope that if you read all three Blogs you emerge with a raised eyebrow and a tricky question next time you are told that the robotic future is bleak.
