Recent international attention on autonomous weapon systems (AWS) has focused on the implications of what amounts to a ‘responsibility gap’ in machine targeting and attack in war. As important as this is, the full scope for accidents created by the development and deployment of such systems is not captured in this debate. It is necessary to reflect on the potential for AWS to fail in ways that are unanticipated and harmful to humans—a broader set of scenarios than simply those in which international humanitarian law applies.

Of course, any complex, hazardous technology carries ‘unintentional’ risk and can produce harmful results that its designers and operators did not intend. AWS may pose novel, unintended hazards to human life that typical approaches to ensuring responsibility do not effectively manage, because these systems may behave in unpredictable ways that are difficult to prevent. Among other things, this paper suggests that human–machine teaming would, on its own, be insufficient to prevent unintended harm from AWS—a finding that should bear on discussions about the acceptability of deploying these systems. This is the fifth in a series of UNIDIR papers on the weaponization of increasingly autonomous technologies.

Citation: Conventional Arms and Ammunition Programme (2016). "Safety, Unintentional Risk and Accidents in the Weaponization of Increasingly Autonomous Technologies", UNIDIR, Geneva.