Banning Autonomous Killing

Scientific research on fully autonomous weapons systems is advancing rapidly. At the current pace of discovery, such systems will be available to military arsenals within a few decades, if not a few years. These systems operate through computer programs that both select and attack a target without human involvement once the program is activated. Looking to the law governing resort to military force, to relevant ethical considerations, and to the practical experience of ten years of killing with unmanned systems (drones), the time is ripe to discuss a major multilateral treaty banning fully autonomous killing. Current legal and ethical principles presume a human conscience bearing on decisions to kill. Fully autonomous systems would remove the human conscience not only to an extreme distance from the target in space, as drones do now, but also to a great distance from the target in time. The computer of a fully autonomous system may be programmed years before a lethal operation is carried out. Without nearer-term decisions by human beings, accountability becomes problematic, and without accountability, the capacity of law and ethics to restrain is lost.