Continuous advances in the field of artificial intelligence (AI) and efforts to integrate AI systems into critical sectors are gradually transforming all aspects of society, not least the defence sector. Although advances in AI present unprecedented opportunities to augment human capabilities and improve decision-making, they also raise significant legal, safety, security and ethical concerns. To ensure that AI systems are developed and used lawfully, ethically, safely, securely and responsibly, governments and intergovernmental organisations are therefore developing a range of normative instruments. This approach is broadly known as "Responsible AI", also referred to as ethical or trustworthy AI. At present, the most notable approach to Responsible AI is the development and operationalisation of responsible or ethical AI principles.
UNIDIR's project Towards Responsible AI in Defence seeks, first, to build a common understanding of the key facets of the responsible research, design, development, deployment and use of AI systems. It will then examine the operationalisation of Responsible AI in the defence sector, including by identifying and facilitating the exchange of good practices. The project has three main aims. First, it aims to encourage states to adopt and operationalise tools that can enable responsible behaviour in the development and use of AI systems. Second, it seeks to increase transparency and foster trust among states and other key AI actors. Finally, it aims to build a shared understanding of the key elements of Responsible AI and how they may be operationalised, which may in turn inform the development of internationally accepted governance frameworks.
This research brief provides an overview of the project's aims. It also outlines the research methodology and preliminary findings of the project's first phase: the development of a common taxonomy of principles and a comparative analysis of the AI principles adopted by states.