Side event to the 2020 GGE on LAWS
In the ongoing debate on LAWS it is widely agreed that autonomous systems in combat functions should do what their operators expect them to do, and must do so for intelligible reasons. But what does it mean for a complex autonomous system to be truly “predictable” or “understandable” (or, conversely, inherently unpredictable and unintelligible)? How does one determine the degree of predictability and understandability required for the prudent, and legal, use of military AI? And what would it take to ensure that complex intelligent weapon systems reliably meet these benchmarks? This brief expert discussion will explain the fundamental science of predictability and understandability, explore its relevance for all stages of the development and use of military AI, and consider a range of possible avenues for action to address the challenges and risks associated with unpredictable and unintelligible autonomous systems.
Dr. Pascale Fung – Director, Centre for Artificial Intelligence Research (CAiRE), Professor of Electronic & Computer Engineering, and Computer Science & Engineering, Hong Kong University of Science and Technology.
Arthur Holland Michel – Associate Researcher, AI and Autonomy, United Nations Institute for Disarmament Research.
23 September, 13:30 to 14:15 (CEST), online (Zoom)
UNIDIR encourages the participation of representatives and experts specialized or interested in issues pertaining to lethal autonomous weapon systems and other forms of military AI.