Defining the Emerging Notion of ‘Meaningful Human Control’ in Autonomous Weapon Systems (AWS) 2016

The emerging notion of ‘Meaningful Human Control’ (MHC) was suggested by the NGO Article 36 as a possible solution to the challenges posed by Autonomous Weapon Systems (AWS). Various states, NGOs and scholars have welcomed the term. However, MHC is not defined in international law and, at present, there is no literature that extensively or normatively defines it. In this paper, I discuss questions that I consider helpful in defining MHC.

The control exercised by humans over the weapons they use has been changing in nature and degree. In the beginning, weapons were mere tools in the hands of fighters, who exercised direct control. With advances in technology, much of the control previously exercised by humans has been automated. The invention of drones has enabled remote control of weapons, making it possible for humans to project force while thousands of miles away from the target. On the horizon are AWS: robotic weapons that, once activated, do not need any further human intervention. In the case of AWS, humans seem to be ‘surrendering’ or delegating control of weapons to computers. Much as this may seem convenient, efficient and safe, it raises far-reaching concerns. For that reason, many scholars and organisations insist that MHC over weapons must be maintained.

In order to define MHC, I propose that the international community must ask the following questions:
i. What is the purpose of MHC?
ii. Who should exercise MHC over weapons, and when? Is it manufacturers, programmers, the individuals who deploy them, or all of them?
iii. Over what aspects of AWS should one exercise MHC?

In answering the above questions, I note that one of the major concerns is that AWS may create a legal responsibility vacuum. For that reason, I suggest that the MHC exercised by humans over AWS should be of such a nature that the weapon user is potentially responsible for all ensuing actions of the robots.
To define the nature of control that allows for responsibility, I consider international law jurisprudence on the notion of ‘control’ as a basis for responsibility. I point out that such control should be exercised over the ‘critical functions’ of AWS, in particular those that relate to decision-making. There are already disagreements in the AWS debate over what decision-making means; I therefore discuss how that term should be defined as a step towards the definition of MHC.

I note that there are various actors involved in the development and deployment of AWS. The fundamental question is whether each actor needs to exercise MHC, or whether the term should be defined as a cumulative concept that sums up the different roles played by designers, roboticists, programmers, manufacturers, states and combatants. I argue that if MHC is meant to be a legal standard upon which responsibility for the use of AWS is determined, then one of the common mistakes among debaters is the attempt to define MHC without a specific actor in mind. The suggestion that the definition of MHC should focus on a specific actor does not imply that there should be only one standard and that all other actors should be forgotten. Rather, the term MHC should zero in on each actor, producing separate definitions and standards to which the different actors should adhere. Because the control exercised by these actors is subject to different standards, the test for the meaningfulness of the control exercised by each of them ought to differ.