Wendell Wallach and Colin Allen: Moral machines: teaching robots right from wrong

Alan Turing took an active interest in "The Imitation Game", in which people were challenged to distinguish between male and female, or human and machine. There are now regular public events at which ever more sophisticated computer systems are presented as indistinguishable from human beings, according to the judgement of those who engage them in text-based dialogue. Wallach (from Yale University) and Allen (from Indiana University) bring a new fervour to this old debate, reflecting their status as true believers. Citing market and political forces, they argue that computers should be used to make morally important decisions, despite the fact that we cannot predict the impact of a new technology on society until well after it has been widely adopted. They conclude that "it is incumbent upon anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots and virtual 'bots' within computer networks".

Two conclusions emerge from developments since Turing's tragic early death, and from events since Wallach and Allen submitted their manuscript for publication. Turing was right in predicting that there would come a time when computers were routinely described as intelligent, and that it would be difficult or impossible to distinguish between a human and a computer. In this era of call centres, which could be located anywhere in the world and where operatives work from prescribed scripts, how can we reliably know whether we are dealing with a "real person"? Can we always distinguish a computer error from a bureaucratic error?

Behind the recent traumatic events of the Credit Crunch and the Global Economic Crisis, we have yet to unravel the contribution of new technologies. It is clear that the financial services industries in the USA and UK developed a dependency on complex financial instruments, and that senior executives lacked the capacity to understand what was going on. Because the flows of income seemed both substantial and reliable, there were pressures not to ask overly awkward questions. Only after the event are questions being asked. As it happens, derivatives contracts could in principle be expressed as functional expressions and evaluated. The obscurity and confusion did not derive from the technology, but from its users. We have allowed the specialists to develop in separate silos, not communicating.

Wallach and Allen are committed to building machines that are capable of telling right from wrong. Back in the human world, we face the challenge of moulding human beings who are capable of telling right from wrong. We are trying to deal with the outcome of a decision to invade a sovereign country in the absence of convincing evidence and in defiance of international law. In addition, the actions of "scumbag millionaires" in financial services may take decades to rectify. We now understand that, with the best will in the world, our belief in the fundamental stability of systems can be ill-founded. Alan Greenspan, Chairman of the Federal Reserve Bank, had believed that capitalist market systems were self-correcting, through the combined effects of individual self-interested decisions. This error, which he shared with other leaders of western capitalism, has cost trillions of dollars. It has punctured the twin myths of