Foresight Analysis: Bad Robots

MIX Magazine Trend Drivers

Simon Jones, writing in issue 42 of MIX Magazine, finds that as the technology behind artificial intelligence becomes ever more sophisticated, the benign Disneyfication of robots is receding, replaced by a new anxiety. He asks: are we prepared for the moral maze of creating machines with sentience?

Image by MIX Magazine

If film and television reflect current societal concerns, we are very worried indeed about the idea of artificial intelligence (AI). In the UK Channel 4 series Humans, AI is made into a commodity, a must-have gadget called a Synth, while Ex Machina looked at the potentially blurred line between real and artificial beings. And, not surprisingly perhaps, the ultimate robot-gone-bad film Westworld (1973) was reimagined by HBO, described as a ‘dark odyssey about the dawn of artificial consciousness and the future of sin.’

There has also been a slew of books tackling the subject, including James Barrat’s Our Final Invention: Artificial Intelligence and the End of the Human Era, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies and Martin Ford’s Rise of the Robots: Technology and the Threat of a Jobless Future. And even surveys: Sky News found that almost half of Britons fear robots could destroy humanity, and six out of ten think the Government should protect their jobs from increased automation.

What all these films and books have in common is a profound sense of unease about the ethics of artificial intelligence. While we are happy to consider the possibility of outsourcing housework, childcare and senior care, we are also aware that we could at best be replaced and at worst be harmed. And, of course, in creating artificial life that potentially replicates the human capacity for sentience, we also have to consider a whole raft of ethical dilemmas: if we generate independent thought in AI, does that mean it is acceptable to create slaves? Do robots have rights?

Dr. Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at the Centre for Computing and Social Responsibility at De Montfort University in the UK, believes it is important to be clear about how we think about robots. “Artificial intelligence and robotics is really at an early developmental stage. AI/Robots do not autonomously make decisions or choices on the basis of their own free will. I think those that say AI is at a stage of taking over are really expressing their own psychic fears and projecting them into machines,” she explains.


Creating an autonomous identity for AI is also dangerous because it can be easy to forget that these technologies are entirely culturally constructed. “The people who make these technologies import models of race, class, sexualities and gender into them in different ways. When AI/Robots create havoc in the world it’s because research scientists have done it, the machine is only a proxy for the unethical person behind the machine,” says Richardson.

Dr. Richardson is in good company in terms of her concerns; Bill Gates and Professor Stephen Hawking have both expressed reservations, Hawking going as far as saying that artificial intelligence could “spell the end of the human race.” In an interview at the MIT AeroAstro Centennial Symposium, SpaceX CEO Elon Musk added: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that… I’m increasingly inclined to think that there should be some regulatory oversight.”

If we are to create some kind of global regulatory manifesto, we need to get a move on. Artificial intelligence is already part of our day-to-day lives; it already shapes search engines, browser histories and some financial markets. While it remains unlikely we will be fighting off killer robots any time soon, there will certainly be new challenges and ethical dilemmas, not least because artificial intelligence could potentially mirror many of our own weaknesses. However, as the science is unpredictable, there is no way of knowing how artificial intelligence will develop. As Dr. Richardson explains: “We can make choices as a society. We can let unethical technologies wreak havoc in the world, or we can draw attention to the role of those that are programming and producing these technologies.”

Images by MIX Magazine

channel4.com

dmu.ac.uk

www.hbo.com

web.mit.edu