11 April 2025
The International Committee for Robot Arms Control (ICRAC) values the opportunity to submit our views to the United Nations Secretary-General in response to Resolution A/RES/79/239 “Artificial intelligence in the military domain and its implications for international peace and security.”

Founded in 2009, ICRAC is a civil society organization of experts in artificial intelligence, robotics, philosophy, international relations, human security, arms control, and international law. We are deeply concerned about the pressing dangers posed by AI in the military domain. As a member of the Stop Killer Robots Campaign, ICRAC fully endorses the Campaign's submission to this report, and wishes to provide further detail regarding the concerns raised by AI-enabled targeting.
Increasing investment in AI-based systems for military applications, specifically AI-enabled targeting, presents new threats to peace and security and underscores the urgent need for effective governance. ICRAC identifies the following concerns regarding AI-enabled targeting:
- AI-enabled targeting systems are only as valid as the data and models that inform them. ‘Training’ data for targeting requires the classification of persons and associated objects (buildings, vehicles) or ‘patterns of life’ (activities) based on digital traces coded according to vaguely specified categories of threat, e.g. ‘operatives’ or ‘affiliates’ of groups designated as combatants. Often the boundary of the target group is itself poorly defined. Although this calls into question the validity of the input data and the models built on them, there is little accountability and no transparency regarding the bases for target nomination or identification. AI-enabled systems thus threaten to undermine the Principle of Distinction, even as they claim to provide greater accuracy.
- Human Rights Watch research indicates that, in the case of IDF operations in Gaza, AI-enabled targeting tools rely on ongoing and systematic Israeli surveillance of all Palestinian residents of Gaza, including data collected prior to the current hostilities in a manner that is incompatible with international human rights law.
- The increasing reliance on profiling required by AI-enabled targeting furthers a shift away from recognizing persons and objects as legitimate targets on the basis of their observable disposition as an imminent military threat, and toward the ‘discovery’ of threats through mass surveillance, based on statistical speculation, suspicion, and guilt by association.
- Predictions based on historical data are of questionable reliability when applied to dynamically unfolding conflict situations, raising further doubts about the validity and legality of AI-enabled targeting.
- The use of AI-enabled targeting to accelerate the scale and speed of target generation further undermines human validation of targeting-system outputs, while greatly amplifying the potential for direct and collateral civilian harm and diminishing the possibilities for de-escalating conflict through means other than military action.
Justification for the adoption of AI-enabled targeting rests on the premise that acceleration of target generation is necessary for ‘decision-advantage’, but the relation between the speed of targeting and overall military success, or longer-term political outcomes, is questionable at best. The ‘need’ for speed that justifies AI-enabled targeting is based on a circular logic, which perpetuates what has become an arms race to accelerate the automation of warfighting. Accelerating the speed and scale of target generation effectively renders human judgment impossible or, de facto, meaningless. The risks to peace and security, especially to human life and dignity, are greatest for operations outside of conventional or clearly defined battlespaces. Insofar as the use of AI-enabled targeting is shown to be contrary to international law, the mandate must be not to use AI in targeting.
In this regard, ICRAC notes that the above systems present challenges to compliance with several branches of international law, including international humanitarian law (IHL), jus ad bellum (the body of law, reflected in the UN Charter, that prohibits aggression and regulates the conditions under which states may lawfully resort to force), international human rights law (IHRL), and international environmental law. In the context of military AI’s implications for peace and security, jus ad bellum is the most directly relevant. Similarly, IHRL is important in this context because it is designed to uphold human dignity, equality, and justice, values that form the foundation of peaceful and secure societies.
Citations
Alvarez, Jimena Sofia Viveros. September 4, 2024. The risks and inefficacies of AI systems in military targeting support. Humanitarian Law and Policy. https://blogs.icrc.org/law-and-policy/2024/09/04/the-risks-and-inefficacies-of-ai-systems-in-military-targeting-support/
Bo, Marta and Dorsey, Jessica. April 4, 2024. Symposium on Military AI and the Law of Armed Conflict: The ‘Need’ for Speed – The Cost of Unregulated AI Decision-Support Systems to Civilians. OpinioJuris. https://opiniojuris.org/2024/04/04/symposium-on-military-ai-and-the-law-of-armed-conflict-the-need-for-speed-the-cost-of-unregulated-ai-decision-support-systems-to-civilians/
Chengeta, Thompson. May 2024. African Commission on Human and Peoples’ Rights submission to the UN Secretary-General Report on Lethal Autonomous Weapons, General Assembly Resolution 78/241, Commissioner Ayele Dersso, Focal Point on the ACHPR Study on AI and Other Technologies. 78-241-African_Commission-EN.pdf
Human Rights Watch. September 10, 2024. Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza. https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza
ICRC. June 6, 2019. Artificial intelligence and machine learning in armed conflict: A human-centred approach. https://www.icrc.org/sites/default/files/document_new/file_list/ai_and_machine_learning_in_armed_conflict-icrc.pdf; published version in International Review of the Red Cross: Digital technologies and war (2020), 102 (913), 463–479.
Schwarz, Elke. December 12, 2024. The (im)possibility of responsible military AI governance. Humanitarian Law and Policy. https://blogs.icrc.org/law-and-policy/2024/12/12/the-im-possibility-of-responsible-military-ai-governance/