Artificial intelligence (AI) systems are computer programs that carry out tasks – often associated with human intelligence – requiring cognition, planning, reasoning or learning. Machine learning systems are AI systems that are “trained” on, and “learn” from, data, which ultimately define the way they function. Both are complex software tools, or algorithms, that can be applied to many different tasks; they are distinct, however, from the “simple” algorithms used for tasks that do not require these capacities. The potential implications for armed conflict – and for the humanitarian work of the International Committee of the Red Cross (ICRC) – are broad. At least three overlapping areas are relevant from a humanitarian perspective.
Three conflict-specific implications of AI and machine learning
The first area is the use of AI and machine learning tools to control military hardware, in particular the growing diversity of unmanned robotic systems – in the air, on land, and at sea. AI may enable greater autonomy in robotic platforms, whether armed or unarmed. For the ICRC, autonomous weapon systems are the immediate concern (see above). AI and machine learning software – particularly for “automatic target recognition” – could become a basis for future autonomous weapon systems, amplifying core concerns about loss of human control and unpredictability. However, not all autonomous weapons incorporate AI.
The second area is the application of AI and machine learning to cyber warfare. AI-enabled cyber capabilities could automatically search for vulnerabilities to exploit, or simultaneously defend against cyber attacks while launching counter-attacks, thereby increasing the speed, number and types of attacks and their consequences. These developments will be relevant to discussions about the potential human cost of cyber warfare. AI and machine learning are also relevant to information operations, in particular the creation and spread of false information (whether or not it is intended to deceive). AI-enabled systems can generate “fake” information – whether text, audio, photos or video – that is increasingly difficult to distinguish from “real” information and might be used by parties to a conflict to manipulate opinion and influence decisions. These digital risks can pose real dangers for civilians.
The third area, and the one with perhaps the most far-reaching implications, is the use of AI and machine learning systems for decision-making. AI may enable widespread collection and analysis of multiple data sources to identify people or objects, assess “patterns of life” or behaviour, recommend courses of action, or make predictions about future actions or situations. The possible uses of these “decision-support” or “automated decision-making” systems are extremely broad: they range from decisions about whom – or what – to attack and when, and whom to detain and for how long, to decisions about overall military strategy – even on the use of nuclear weapons – as well as about specific operations, including attempts to predict, or pre-empt, adversaries.
AI and machine learning-based systems can facilitate faster and broader collection and analysis of available information. This may enable humans to make better decisions in conducting military operations in compliance with international humanitarian law (IHL) and to minimize risks for civilians. However, the same algorithmically generated analyses or predictions might also facilitate wrong decisions, violations of IHL and heightened risks for civilians. The challenge is to use all the capacities of AI to improve respect for IHL in situations of armed conflict, while remaining aware of the significant limitations of the technology, particularly with respect to unpredictability, lack of transparency, and bias. The use of AI in weapon systems must be approached with great caution.
AI and machine learning systems could have profound implications for the role of humans in armed conflict. The ICRC is convinced of the necessity of taking a human-centred, and humanity-centred, approach to the use of these technologies in armed conflict.
It will be essential to preserve human control and judgement in uses of AI and machine learning for tasks, and in decisions, that may have serious consequences for people’s lives, and in circumstances where those tasks or decisions are governed by specific IHL rules. AI and machine learning systems remain tools: they must be used to serve human actors and to augment and improve human decision-making, not to replace it.
Ensuring human control and judgement in AI-enabled tasks and decisions that present risks to human life, liberty, and dignity will be needed to ensure compliance with IHL and to preserve a measure of humanity in armed conflict. For humans to play this role meaningfully, these systems may need to be designed and used to inform decision-making at “human speed” rather than to accelerate decisions to “machine speed”.
The nature of human-AI interaction required will likely depend on the specific application, the associated consequences, and the particular IHL rules and other pertinent law that apply in the circumstances – as well as on ethical considerations.
However, ensuring human control and judgement in the use of AI systems will not be sufficient in itself. In order to build trust in the functioning of a given AI system, it will also be important to ensure – including through weapon reviews – predictability and reliability (or safety) in the operation of the system and the consequences of its use; transparency (or explainability) in how the system functions and why it reaches a particular output; and a lack of bias in the design and use of the system.