UNITED STATES

Military philosophers ponder AI-human control of lethal weapons

The society that separates its scholars from its warriors will have its thinking done by cowards and its fighting done by fools. – attributed to Thucydides

The word ‘human’ appears 10 times in the United States Department of Defense’s 20-page DoD Directive 3000.09: Autonomy in Weapon Systems (AWS), which became effective on 25 January 2023. Four times it is part of a technical phrase: for example, “human-machine interfaces” must be “readily understandable to trained operators”.

Six times, ‘human’ is paired with ‘judgment’, for example: “Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In one instance, this definition is expanded to include: “In accordance with the law of war, applicable treaties, weapons system safety rules and Rules of Engagement that are applicable or reasonably expected to be applicable.”

The AWS directive binds the United States’ Armed Forces and rules out the Pentagon acquiring Lethal Autonomous Weapons Systems (LAWS) that AI companies may have developed for other countries.

The British Army’s Approach to Artificial Intelligence (October 2023), in a section on human-centric AI, states that “human responsibility and accountability cannot be removed, irrespective of the level of AI or autonomy in a system”. These militaries’ insistence that responsibility for AWS actions remains with human decision makers (commanders) determines more than military policy.

It also structures the academic debates about military AI at both service academies like West Point (America’s military academy) and civilian universities in countries such as the United States, Britain, Australia and the Netherlands.

Systems hunting humans is a scary concept

As stressed by the philosophy professors interviewed for this article – four of whom either teach or are retired from teaching at West Point (60 miles north of New York City), the US Air Force Academy (Colorado) or the Naval Postgraduate School (California) – the Hollywood version of a LAWS, such as the one in Terminator (1984), is just that: a Hollywood creation.

“For us humans, walking up stairs or navigating a muddy path is really easy; we don’t think about it. A five-year-old can do it. And, so, we have the assumption that it’s really easy to do. But anyone who knows modern robotics knows how hard it is to get a robot to walk upstairs,” says Professor Adam Henschke, who teaches philosophy at the University of Twente in Enschede in the Netherlands. He is an expert in the ethical and policy issues raised by modern military technology and co-editor of Binary Bullets: The ethics of cyberwarfare (OUP, 2016).

University and government research that has deployed AI to develop cutting-edge weapons systems has also raised ethical questions about accountability for killing and other human-machine dilemmas that military philosophers are grappling with.

Both the US Navy’s (USN) Aegis Combat System and the Phalanx Close-In Weapon System (CIWS), the latter capable of firing 4,500 20mm rounds per minute, are radar-controlled anti-missile systems that can engage without a human pulling the metaphorical trigger. In late January or early February, while on patrol near the Bab al-Mandab Strait, between the Red Sea and the Gulf of Aden, USS Gravely’s CIWS destroyed a cruise missile fired by Houthi rebels four seconds before the missile would have struck the Gravely.

Speaking of the Aegis system, Lieutenant Colonel Kevin Schieman, PhD, who teaches philosophy and military ethics at West Point, told University World News that while it “is capable of fully autonomous engagement, that is generally only used for very limited defensive cases. I don’t think that’s what is making people nervous. What is, is the idea that a system is out there hunting and it’s making decisions about who or what to kill.”

Ethical concerns about AI-augmented systems

While mobile robotic killer ‘soldiers’ are not even on the technological horizon and defensive AWS are accepted, ethicists have raised concerns about AI-augmented systems, such as drones that can be controlled from thousands of miles away, and about LAWS. (Though, it should be noted, ethicists have not criticised Ukraine’s use of LAWS that, as seen in many videos posted online by the Ukrainians, destroy Russian tanks. Perhaps that is because, as Henschke puts it, they are akin to “highly advanced landmines” governed by strict parameters: that is, they fire only when their optical systems see a tank or other Russian vehicle.)

For some, the ethical questions about both drones directed from afar and LAWS turn on morality drawn from Jus in bello, the branch of Just War theory governing conduct in war. With roots in the ancient world and enunciated by St Augustine (died 430 AD), Jus in bello includes the concepts of human agency and mercy as well as the principle of the proportionate use of force.

Drones controlled from thousands of miles away, the argument goes, are little different from autonomous drones that can loiter over an area until a predetermined target, for example the leader of a terrorist organisation, is seen and killed. Neither admits of the possibility of mercy, such as when a soldier raises their arms in surrender and is – from that point on – no longer a legitimate target under the law because they have come under the Geneva Convention Relative to the Treatment of Prisoners of War (1949).

The drone operator is, therefore, what philosopher Robert J Sparrow of Monash University in Melbourne, Australia, and Henschke call a “minotaur”. In their 2023 article, “Minotaurs, Not Centaurs: The future of manned-unmanned teaming”, published in Parameters: The US Army War College Quarterly, they define a minotaur as an “unmanned-manned team” because, in the case of the human-operated drone, the “eye in the sky” is effectively in charge, even if the metaphorical trigger is pulled by a human.

For, “machines can identify enemy military objects and personnel in (near) real-time by integrating information from multiple sources (such as drones, satellites, video feeds from cameras mounted on weapons or helmets, and signals intelligence)”, they write.

For his part, Schieman drew a distinction between a LAWS and the drone operator. “I think historically we’ve tried to have greater range than our opponents. I’m thinking here of the First Persian Gulf War (1990-91). There were Abrams tanks that recorded kills at 3,000 plus metres. There’s always some aspect of this kind of technology.

“We invest in it because we want to be able to strike the enemy before he can strike us. And, so, if that is removing agency, then, I guess in some sense, it is. I find myself wondering, however, how AI is different in this case,” says Schieman.

Be sure to kill within international humanitarian law

As unpacked by these professors for their students – and explained to University World News – the philosophical and ethical problem becomes even more acute when considering LAWS. There are two reasons for this.

“In a manner of speaking, it shouldn’t matter,” says Henschke. “If you’re dead, you’re dead. It doesn’t matter if it’s this or that that killed you.”

“Still, there’s a moral residue about the fact that the person who is making the killing decision is a moral being. And I mean by that, something that has an understanding to respond to moral reasons in a way that properly understands it,” he added.

After rehearsing one of the most common arguments for why LAWS could very well be superior fighters to humans – “machines don’t get tired, they don’t get angry, they don’t get hungry, and are never scared” – Professor Richard Schoonhoven, who teaches philosophy at West Point, put the ethical issue this way: “There are some people, and I should confess I’m not among them, who think that even if an AI system performs flawlessly, it would somehow still be immoral to allow machines on their own to make decisions to kill people.”

To flesh out this point, I asked the professors whether they remembered the Christmas song “Snoopy’s Christmas”. This 1967 song by the rock band the Royal Guardsmen is a riff on Peanuts, the cartoon penned by Charles Schulz, in which Charlie Brown’s dog, Snoopy, fancies himself a World War One flying ace.

In the song, the German flying ace, the Red Baron, forces Snoopy to land behind German lines on Christmas Eve. In a nod toward the chivalric myth of First World War aerial combat (that is, that pilots were modern day knights), instead of taking Snoopy prisoner, the Baron says “Merry Christmas, my friend”, offers him a holiday toast and allows Snoopy to fly home.

“I think this story captures the moral intuition of the fact that Snoopy and the Red Baron, or two knights, can sit there and they can talk with each other as humans,” says Henschke. “Whether they do or not is almost beside the point. There is a mutual recognition of each other as a moral agent. I think there is an intuitive pull to that line of argument that marks out AI as distinct from human interaction.”

James Cook, US Air Force Brigadier General (Ret) and professor emeritus of philosophy at the US Air Force Academy, summed up his response to “Snoopy and the Red Baron” by saying: “I think we all want to have the possibility of showing each other mercy or justice, depending upon how one reads it, even in the heat of battle.”

However, as Henschke also made clear, chivalry and fair play are not how wars are fought or won. To prove his point, he sketched out an alternative history in which the United States develops the nuclear bomb before Germany surrenders. “Well then,” he says, “some would argue that because Germany does not have the bomb, chivalry demands that the United States give them one – an idea that is utterly ridiculous and absurd.”

Or, as Schieman puts it, “my own experience is that war isn’t a place where human dignity resides in great quantities. But I think it’s important that we make sure that we’re killing within international humanitarian law and the ethical principles that motivate those laws”.

How to navigate ethically roiled waters?

How, then, do these professors teach their students to navigate these ethically roiled waters?

While in large measure students in military academies and civilian universities study the same issues, there are differences of emphasis.

West Point cadets, for example, study enough mathematics and computer science to have a basic understanding of how AI works. Further, in addition to courses like Schieman’s military ethics course, their curriculum is infused with military ethics. By contrast, computer science students at the University of Twente take courses like Henschke’s ‘Computer Ethics: Introduction’, in which they study a number of philosophical schools as well as military ethics.

A key issue covered is ‘automation bias’, which, Sparrow and Henschke write, is the tendency of people “to over trust artificial intelligence, especially if AI has proven itself generally reliable”. This bias can be seen in stories of people who trust their GPS so much that they drive into a lake or swamp.

Over time, reliance can become normative, and AWS will come to have a psychological effect and the institutional ‘force of orders’ – especially, Sparrow and Henschke continue, for small-unit warfighters who “will spend most of their time trying to achieve goals set for them by an AI”.

Schoonhoven teaches cadets the hard truth that: “We don’t really understand how some of these systems work. They’re just too large and tricky to get your heads around.” The reason is that the so-called ‘black box’ of an AI system contains billions of algorithmic connections, which can be thought of as analogous to the neurons in a human brain.

“So, we’re always worried about implicit bias creeping into machines and sometimes giving the right answer for the wrong reason,” he says. Accordingly, philosophy professors worry that, “as the pace of battle picks up, they [soldiers] will more or less automatically default to the AI system”.

Professor George Lucas, distinguished chair of ethics at the US Naval Academy, brought up a scenario that would not be out of place in a Tom Clancy novel: shots being fired in the South China Sea by either an American or a Chinese naval ship’s AWS because of a mistake in code – and not on the orders of a president of either country or even a commander.

“The stakes and scale, the possible harm done in terms of loss of life and equipment. The political fallout of such a mistake is so large,” he says.

But, as Lucas also taught his students, though there will always be gremlins, the USN has never really been in a situation where there is a complete computer system failure.

This is not technical hubris, Lucas made clear a moment later, when he told me how he uses one of the key insights of the late University of California, Santa Barbara ecologist Garrett Hardin: regarding the environment, you never do just one thing.

“It’s a deceptively simple phrase, but I think it captures exactly what we’re discussing [that is, automation bias]. Once somebody does something to radically disrupt the wartime environment with regard to the use of technology, whatever they do, it’s going to affect them and others in ways unknown. It will not be what they are intending. It’ll be things they never foresaw,” says Lucas.

AI cannot be moral

In an article published in 2021, Sparrow argued that AI cannot be moral.

The reason is not that a system cannot be trained on moral reasoning and doctrine. Indeed, Sparrow imagined a man, Adam, using an app to decide whether to stop all further medical aid for his badly injured father, a decision that would allow his organs to be used to save the lives of three other people. “Adam,” writes Sparrow, “is not wise but foolish to entrust his father’s fate to an app: his thinking is shallow where it needs to be deep – indeed, he can hardly be said to be thinking at all.”

Machines do not have the personality or understanding of human-to-human interactions and the emotions that underpin them. Whatever decision the app makes, it can draw on neither the life experience nor the capacity to feel remorse or regret that Sparrow says are the sine qua non of moral thinking.

To teach West Point’s cadets just how morally blind existing AI systems are – and how difficult it is to integrate autonomous systems with little in the way of moral sensitivity into an operational framework – Schieman and his colleagues have teamed up with West Point’s Robotics Research Center in his military ethics course.

“The cadets are able to program little robots and we have these toy scenarios set up on the machines. They can put them in different modes of autonomous operation and various fire control [that is, when the robot can fire its gun] measures.

“What we are trying to teach cadets is how to think through the responsibilities that they will have as military commanders. They can’t think of military ethics as something they do above and beyond tactics and leadership. They have to be able to reason through complex problems that carry serious moral risks.

“However military technology develops over the course of their careers, they will bear significant responsibility for making sure that it is used in ways that satisfy discrimination (that is, distinguishing legitimate military targets from civilians) and proportionality,” Schieman told University World News.

Ethical schools of thought

The students who either took or are taking the courses taught by the professors interviewed for this article are exposed to a number of ethical schools of thought, which provide them with a common vocabulary with which to talk about the ethics of AWS and the use of military force in general.

In addition to the Just War tradition, students learn about deontological ethical theory. ‘Deontological’ is the name given to the theory that determines whether an action is right or moral based on rules and principles – and not on the consequences of that action. The most famous deontological philosopher is Immanuel Kant (died 1804).

As summarised by Sparrow and Henschke, Kant teaches that “human lives should not be at stake in the decisions of machines” because “Kant insisted that human beings should always be treated as ‘ends’” and not means.

Further, they write: “Unlike machines, humans have free will. According to Kant, we must respect this capacity in each other and avoid treating other people solely as tools to advance our purposes. It is difficult to see how machines could demonstrate such respect and easy to worry that minotaur warfighting could reduce human beings to mere means.”

In what amounts to applied ethics courses, Henschke’s civilian students as well as the students in the military academies also learn about Utilitarianism. Developed by Jeremy Bentham (died 1832) and refined by John Stuart Mill (died 1873), Utilitarianism holds that the proper course of action can be determined by asking whether it produces the greatest benefit or advantage for the greatest number.

At first, the belief that one should act to ensure the greatest good for the greatest number seems far from military matters – from guns, strategy and tactics. Yet simply naming them here makes clear that it is a commander’s lot to make, essentially, utilitarian decisions to maximise their mission’s success and preserve the lives of their men.

“In terms of teaching students about AWSs, Utilitarianism provides students with a useful and intuitive guide for decision making – while we may not want to kill an enemy soldier, if their death would save many more lives, we have an explanation for why this act of war might be permitted.

“However, ethics generally, and the Just War tradition in particular, recognises that we need to think of more than the ends justifying the means. We cannot intentionally target civilians, or attack surrendering enemy soldiers, even if that might improve our chances of winning. Utilitarianism, where the overall good of an outcome is an important part of moral reasoning, is only part of how we make moral decisions,” Henschke explained.

Additionally, these professors’ students are schooled in contractarian ethics, which extends the argument John Locke made in his Second Treatise of Government (1690): that individuals surrender certain powers to the government in exchange for their protection.

In their 2019 book War by Agreement: A contractarian ethics of war, Yitzhak Benbaji and Daniel Statman argue that, through such international instruments as the Geneva Conventions or the United Nations Charter, states essentially establish a “war contract” delimiting what they may and may not do – for example, attack civilians.

“Contractarian ethics affords students a way of conceptualising the sometimes arbitrary seeming nature of the rules of war,” says Schoonhoven.

Contractarian ethics informs, Schoonhoven further explained, how commanders have to ask themselves whether the weapon being deployed, in that particular context, would be both discriminate and proportionate. That is, would it be able to distinguish between combatants and non-combatants, and would any unintentional harm to civilians or civilian objects be proportionate to the direct military advantage to be gained.

A robust defence of liberal arts education

The robust defence of liberal arts education by Henschke and professors at America’s service academies, even as AI transforms warfare and command and control – a defence replicated in countries like Australia, Canada and Britain – might surprise many, especially since in most Western countries the general population has little contact with men and women in uniform.

That the men and women in America’s armed forces should be “open minded and widely read”, as General Mark Milley – at the time the chairman of the Joint Chiefs of Staff, that is, the nation’s most senior military officer – told a Congressional hearing investigating the teaching of critical race theory at the nation’s military academies, certainly upset Republican senator Tommy Tuberville of Alabama.

Ignoring the long history of military poetry stretching back to Homer’s Iliad, on 6 September 2023 Tuberville attacked the US Navy on X: “We’ve got people doing poems on aircraft carriers over the loudspeaker. It is absolutely insane the direction that we’re headed in our military, and we’re headed downhill, not uphill.”

Tuberville was apparently unaware that the Navy’s deck log poetry practice is almost a century old, though its origins are obscure. The website of the Naval History and Heritage Command says, “The first entry of the New Year, written in verse, gives a brief glimpse into the minds of the sailors and shipboard life, and provides a human voice to the otherwise impersonal deck log.”

Nor did Tuberville’s comments indicate that he knew that the Second World War general George S Patton was a devotee of poetry, especially that of the First World War poet Rupert Brooke. Patton also wrote more than 80 poems, one of which, as was noted in Foreign Policy in 2016, “was even set to music and broadcast to soldiers in Europe by the American Expeditionary Radio Station” in 1943.

As Nolan Peterson – a retired air force pilot who flew special operations missions, is now a member of the Atlantic Council and has reported from war-torn Ukraine – notes in his response to Tuberville, the US Air Force also prizes poetry: the cadet handbook Contrails, the contents of which must be memorised by freshman cadets, includes many poems written by warrior poets.

“The bravest soldiers and pilots and sailors and Marines I’ve met rarely postured, and they did not scoff at romantic things. They fought harder and loved harder than everyone else. They were the women and men most tightly bound to their humanity, no matter what they saw and did in war. They were the poets,” wrote Peterson on X on 6 September 2023.

“I’ve read Mao Tse Tung. I’ve read Karl Marx. I’ve read Lenin. That doesn’t make me a communist,” Milley told Congress, before asking rhetorically, in reference to critical race theory and books about ‘white rage’ being on the curriculum at West Point (which, he reminded legislators, is a university): “So what is wrong with having some understanding of the country we are here to defend?”

The professors who teach future military leaders – and those who design the AI that military men and women will be using – understand the ethical quandaries that AWS (and, more broadly, the military use of force) raise. They are rigorous thinkers who find much practical wisdom in philosophical traditions derided by politicians like Tuberville and by those who think that soldiers are little more than cannon fodder.