Increasing autonomy in weapon systems is the focus of growing international attention, as are the implications of artificial intelligence for international security. The algorithms that make increasing autonomy in weapon systems possible are not immune to bias. It is therefore critical to develop a better understanding of how biases influence outcomes in learning systems. What can we learn about bias from other fields where decisions with significant human impact are already made by learning algorithms? What do we already know about detecting bias, both unintentional and intentional? How could we know in which ways algorithms are biased? Is all bias bad? And are there specific issues concerning bias that we need to be mindful of in relation to discussions at the upcoming Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems?
Support from UNIDIR's core funders provides the foundation for all of the Institute's activities.
In addition, dedicated project funding was received from the Government of Germany.
This conference is part of the projects: Autonomous Weapon Systems: Understanding Bias in Machine Learning and Artificial Intelligence, and The Weaponization of Increasingly Autonomous Technologies (Phase III).