Discussions in the UN Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS) have repeatedly called for a deeper understanding of the concept of "human control", which is central to debates on autonomous weapons.
Human-machine interfaces (HMIs), the hardware and software components that enable communication between machines and human operators, have been touted as a key element of human control over autonomous weapons systems. Strictly speaking, this is correct: interfaces allow operators to monitor the state and behaviour of a remote system, as well as to manually (re-)establish control when needed. Yet the effective use of HMIs depends on a complex mix of factors. These include the context of use, ergonomics and human factors design, personnel training, the complexities of decision-making and human-machine interaction in dynamic environments, and the unique challenges posed by AI/ML-based learning systems.
This report analyses HMIs in autonomous systems and explores key considerations in HMI use, testing and design. Drawing on these, it considers implications for human control and proposes recommendations for the work of the GGE on LAWS.
Germany, the Netherlands, Switzerland, Microsoft