Autonomous Weapon Systems: Friend or Foe?

Samuel Roussos is a student in the CAIDE subject AI, Ethics and the Law. This is an excerpt from his final assignment.

“Let every nation know, whether it wishes us well or ill, that we shall pay any price, bear any burden, meet any hardship, support any friend, oppose any foe to assure the survival and the success of liberty”.[1] Those words, spoken by John F. Kennedy at his 1961 inauguration, embodied the United States’ foreign policy at the height of the Cold War. However, while hostilities between the two global hegemons seemingly dissipated in 1991, “the Cold War is over but Cold War thinking survives”.[2] Consequently, the global community faces a new arms race: the race to develop and distribute Autonomous Weapon Systems (AWS).

Autonomous Weapons
Image by Hasan Almasi @hasanalmasi

The International Committee of the Red Cross (ICRC) defines AWS as “weapons that can independently select and attack targets…weapon systems with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets”.[3] With examples of completely autonomous weaponry in warfare today, AWS are no longer confined to the realm of science fiction. The Iron Dome, an autonomous missile defence system developed by the Israeli military, boasts an 85% accuracy rate, with its newest iteration reportedly achieving 100% accuracy in trials.[4] Proponents of AWS argue that, given this inherent accuracy, autonomous weaponry will make warfare drastically more ethical. The incentive is strengthened by the fact that the United States military was responsible for the deaths of more than 130 civilians in Iraq, Somalia, Syria and Afghanistan in 2019.[5] However, while greater precision is an enormous benefit, one question is often overlooked: can an autonomous weapon even identify whom to target?

Weapons such as the Iron Dome are examples of defensive weaponry in which both the identification of a threat and the criteria for engagement are binary. On the battlefield, however, the identification of combatants is far more nuanced than simply spotting a soldier with a weapon. Article 51(3) of Section 1 of the Additional Protocol to the Geneva Conventions states that “civilians shall enjoy the protection afforded by this Section, unless and for such time as they take a direct part in hostilities”.[6] Determining whether a civilian is taking a direct part in hostilities is a complex legal calculation in and of itself. Deciding whether to engage that person is even more burdensome, and there is no widely accepted moral framework to guide this critical decision.

An illustrative example is the Taliban’s use of children in Afghanistan as scouts: legally they are taking a direct part in hostilities, but is it ethical to engage them?[7] A deontological lens applies universal moral principles, under which engaging the child would be unethical.[8] Deontology asks us not to look at the consequences of an action, but rather to operate under a set of rules. If a soldier has deemed it ethically wrong to kill a child, they should not fire their weapon under any circumstance, even if doing so would save the lives of others. Examined through a utilitarian lens and its consequentialist beliefs, however, engaging the child would potentially save the lives of the soldiers the child is reporting on, and hence engagement would be justified.[9] This speaks to a vital pillar of engagement: the principle of proportionality, one of the most complex ideas in humanitarian law.[10] The principle “requires that the expected harm to civilians be measured, prior to the attack, against the anticipated military advantage to be gained from the operation”.[11] An action can be justified as long as the benefits exceed the costs, but given the complexity of this notion, it is nearly impossible for this level of ethical calculus to be undertaken by a machine.

This reason alone should rule out AWS entirely; however, the argument rests on one key assumption: that soldiers themselves are perfect moral agents. Swayed by emotions such as fear and anger, real soldiers already meet a relatively low standard in identifying combatants.[12] Recently, Australian SAS troops were embroiled in a scandal after an inquiry found that 19 soldiers took part in the unlawful killing of 39 civilians, and the torture of two others, in Afghanistan.[13] This is just one of many cases of sexual assault, torture and murder of non-combatants that occur in warzones across the globe every year.[14] An AWS does not have to be a perfect moral agent; it just has to be better than a human.

War tech Image by Dan Meyers @dmey503

Ultimately, it is clear that a perfectly moral machine or soldier is impossible, and mistakes will therefore occur no matter which means of warfare is used. So, while it can be argued that the metric of success is a machine with a greater ethical code than a human, once AWS are introduced, a vital principle is abdicated: accountability. Clear mechanisms exist for disciplining soldiers, but the lines of responsibility become incredibly blurry once a machine acts of its own accord. Accountability is pivotal to governing behaviour in warfare and to ensuring proper checks and balances on acceptable conduct, and it cannot be allowed to become obsolete.

Many technology leaders, such as Elon Musk, have called upon the United Nations to ban the use of lethal autonomous weapons outright, but irrespective of whether agents are for or against AWS, it is certain that they will continue to be developed. What is abundantly clear is that a machine should never act entirely without some degree of human oversight, and it is imperative that there is broader discussion of frameworks for the use of AWS. While the dangers of AWS are not as conspicuous as those of nuclear weapons, left unchecked, AWS present a pernicious adversary to the world. As Kennedy later remarked, “our problems are manmade, therefore, they can be solved by man”.[15]


[1] John Fitzgerald Kennedy, ‘Inaugural Address’ (Speech, United States Presidential Inauguration in Washington D.C., 20 January 1961) <>.

[2] Joseph Rotblat, ‘Remember Your Humanity’ (Speech, Nobel Lecture, 10 December 1995) <>.

[3] International Committee of the Red Cross, ‘Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects’ (Conference Paper, ICRC Expert Meeting, Geneva, Switzerland, 26–28 March 2014) 5 [4] <>.

[4] Judah Ari Gross, ‘Decade after 1st interception, new Iron Dome boasts 100% success rate in trials’, The Times of Israel (Online, 12 January 2020) <>.

[5] Thomas Gibbons-Neff, ‘U.S. Military Killed 132 Civilians in Wars Last Year, Pentagon Says’, The New York Times (Online, 7 May 2020) <>.

[6] Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) art 51(3).

[7] Melissa De Witte, ‘In Drell Lecture, speaker calls for ethics and humanity as militaries expand autonomous weaponry’, Stanford University (Online, 1 May 2019) <>.

[8] Amanda Sharkey, ‘Autonomous weapons systems, killer robots and human dignity’ (2018) 21 Ethics and Information Technology 75–87.

[9] Ibid.

[10] Christof Heyns, Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, 23rd sess, Agenda Item 3, UN Doc A/HRC/23/47 (9 April 2013) [70] <>.

[11] Ibid.

[12] Ronald Arkin, ‘The case for banning killer robots’ (2015) 58 Communications of the ACM 46–47.

[13] Georgia Hitch, ‘What war crimes did Australian soldiers commit in Afghanistan and will anyone go to jail?’, ABC News (Online, 19 November 2020) <>.

[14] Andrea Prasow, ‘U.S. War Crimes in Yemen: Stop Looking the Other Way’, Human Rights Watch (Online, 21 September 2020) <>.

[15] John Fitzgerald Kennedy, ‘Commencement Address at American University’ (Speech, American University, Washington D.C., 10 June 1963) <>.