Autonomous weapons systems defy rigid attempts at arms control


The post-Cold War paradigm of globalisation came to a violent end in February 2022 with Russia’s full-scale invasion of Ukraine, and a dangerous, uncertain era is now unfolding, sparking a new global arms race. Much of this military spending is funding the development of lethal autonomous weapons systems (AWS) – commonly known as killer robots. Indeed, the mass adoption of this technology now appears inevitable.

All the precursors are in place. The Chinese Communist Party’s doctrine of civil-military fusion is aligning domestic commercial tech innovations with Beijing’s expansive military ambitions. The Pentagon’s Replicator programme, announced last year, aims to deploy thousands of AWS across multiple domains by the end of 2025. The US navy recently demonstrated an uncrewed gunboat attacking a simulated enemy target with live rockets – without any hands-on direction from a human operator. And, desperate to gain an edge in their grinding war of attrition, Ukrainian and Russian forces are both reportedly already using artificial intelligence (AI)-powered drones capable of killing without human oversight.

These are just the most evident examples of the technology’s progress. Its rapid advancement is adding urgency to talks around how autonomous weapons can be controlled – or whether they should be wielded at all.

UN Secretary General António Guterres has for years described AWS as “politically unacceptable and morally repugnant”. A resolution adopted in December 2023 by the UN General Assembly tasked Guterres with surveying countries on their opinions and publishing a report examining the various legal, ethical and humanitarian dilemmas surrounding killer robot use.

Guterres wants negotiations on strict constraints for autonomous weapons concluded by 2026. But this ambitious timeline ignores basic realities. Amid today’s dysfunctional multilateral environment, the prospects for a globally binding treaty regulating intelligent weapons are fading fast. UN delegates have already debated the topic for a decade yet remain at an impasse. And even if a majority of nations can find consensus, there’s no guarantee a treaty would change the trajectory of the technology’s uptake.

Wary of the security implications of a fragmenting international order, major military powers will almost certainly remain holdouts. They have little incentive to voluntarily accept comprehensive limits on assets promising such diverse strategic value, especially on the basis of risks that remain partly hypothetical. As a Russian diplomat told a UN forum on arms control last year: “We understand for many delegations the priority is human control. For the Russian Federation, the priorities are somewhat different.” Moreover, plenty of nation states readily violate humanitarian laws and treaties with little consequence.

But this doesn’t mean the looming proliferation of AWS will necessarily lead to the kind of algorithmic killing fields many fear. Alternative approaches, short of a global treaty, can still help keep these weapons under meaningful human control.

Deadly machines are useful in a world at war

According to a database maintained by Sweden’s Uppsala University, armed conflict worldwide has surged to levels rivalling the Cold War’s twilight. There are several reasons for this. Renewed great power enmity has caused a breakdown in multilateralism and eroded norms around the use of force. Aspiring regional powers are thus liberated to meddle in forgotten wars to their benefit, often via proxies. The internet and a diffuse global economy also enable non-state actors to organise and acquire weapons or dual-use technologies more easily. None of these dynamics will recede anytime soon.

The war in Ukraine has meanwhile been a proving ground for drone warfare and smart weapons platforms. Force quantity still matters in modern conflict. But deficits in manpower and munitions can now be partially offset by the use of digitally networked intelligence gathering and expendable machines. In 2022, Kyiv’s use of homegrown AI-driven target acquisition platforms – built with the aid of American defence tech companies – enabled a smaller, outgunned Ukrainian military to repel Russian forces’ initial blitz across the border. Despite lacking a navy, Ukraine has been able to inflict devastating losses on Russia’s flotilla of warships in the Black Sea by deploying marine drones laden with explosives. This has allowed Ukraine to keep shipping lanes open to export its agricultural commodities – a lifeline for its war-battered economy.

Governments, including those in liberal democracies, have taken note. The combination of wars raging in Ukraine and the Middle East, and China’s menacing of Taiwan, is prompting nations worldwide to acknowledge the necessity of hard power. According to the Stockholm International Peace Research Institute think tank, global military spending shattered records last year, rising by 6.8% to US$2.4tn. This largesse – whether it flows from the coffers of Beijing, Canberra, Delhi, London, Moscow, Paris, Seoul, Tehran, Tokyo, Washington or elsewhere – is catalysing a collective evolution in unmanned systems. These range from autonomous drone swarms, robot assault dogs and intelligent anti-ship and anti-air defence systems to AI fighter pilots, self-driving attack vehicles and more.

Autonomous weapons systems have also attracted keen interest from venture capital groups, many of which see a profitable future in a more hostile world.

Waves of cheap drones can effectively burn through an enemy’s stockpiles of expensive weapons. Killer robots may also play constructive roles in militarised border disputes, reducing the risk of conflict by enhancing deterrence. South Korea, for example, has installed autonomous sentry guns along the perimeter of its demilitarised zone with North Korea as a bulwark against a ground attack by Pyongyang. Likewise, wargamers posit that defending against a future Chinese invasion of Taiwan would require a large, multi-layered garrison of autonomous anti-aircraft and anti-ship systems around the self-governed island.

Integrating machines into combat units could make it easier to keep human personnel away from the frontlines. And at a time when many nations, including European countries, are mulling the return of conscription, autonomous systems offer a possible means of offsetting flagging military recruitment numbers.

Michèle Flournoy, a career US defence official who served in senior roles in both the Clinton and the Obama administrations, told the BBC in December 2021 that “one of the ways to gain some quantitative mass back and to complicate adversaries … planning is to pair human beings and machines.”

Risk versus reward – and there’s plenty of risk

Make no mistake: the strategic value of autonomous weapons is inextricable from their risks. AI is remarkably adept at executing narrowly defined tasks, even if those tasks are incredibly complex. What AI remains generally unable to do is negotiate ambiguity by using intuition or common sense to adapt to novel situations not factored into its training data.

This presents a major problem when it comes to AWS: it’s impossible for humans to conceive of every scenario that might unfold in a conflict zone and pre-program that into a weapon system’s design.

“You have to script out what they should do in what context, and the machine learning components are typically about sensing the environment, sensing a target profile – but the decision is not context-appropriate,” Laura Nolan, principal software engineer at Stanza Systems and UK member of the Campaign to Stop Killer Robots, told a House of Lords AI committee last year. “You’re asking the commanders to anticipate the effects of an attack that they do not fully control or cannot fully anticipate.”

Identifying targets based on their visual appearance is an especially fraught task. A civilian carrying a shovel can resemble a militant holding a rifle. A farmer carrying a rifle for self-defence can mirror the image of a terrorist. A soldier who is surrendering still presents the outward appearance of a combatant. In each scenario, an autonomous weapon – even one properly adhering to its programming parameters – might choose to kill an innocent person.
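To see why campaigners insist on a human decision gate rather than sensor-driven targeting alone, consider a minimal, purely hypothetical sketch – the names, thresholds and logic below are illustrative assumptions, not any real weapons interface:

```python
# Hypothetical sketch of a "meaningful human control" gate: the system may not
# act on a sensor classification alone, however confident the classifier is.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str              # e.g. "rifle-shaped object" from an image classifier
    confidence: float       # classifier score between 0.0 and 1.0
    context_assessed: bool  # has the wider scene been assessed beyond the sensor feed?

def engagement_decision(det: Detection, human_approval: bool) -> str:
    # A shovel, a farmer's rifle and a surrendering soldier can all match a
    # visual "target profile"; the score cannot resolve intent or context.
    if not det.context_assessed:
        return "HOLD: context not assessed"
    if det.confidence < 0.9:
        return "HOLD: classification below threshold"
    if not human_approval:
        return "HOLD: awaiting human authorisation"
    return "ENGAGE: authorised by human operator"

# Without explicit human sign-off, the default is always to hold fire.
print(engagement_decision(Detection("rifle-shaped object", 0.95, True), human_approval=False))
```

The point of such a gate is that the classifier’s score can never settle the question of intent; the system defaults to holding fire unless a human who has assessed the context explicitly authorises an engagement.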

The use of AI-powered tools by the Israel Defense Forces (IDF) in its war on Gaza has also dulled the argument that intelligent weapons systems will render the use of force more precise. An investigation published in April by the Israeli magazine +972 details how the IDF has relied on an AI system called Lavender, which synthesises various data points, to help select Hamas fighters for airstrikes. It reports that Lavender generated a list of roughly 37,000 individuals in Gaza it deemed to be suspected Hamas militants. Most of these targets were allegedly vetted by IDF operators for only 20 seconds before being approved for elimination – often by being bombed in their homes, surrounded by family and friends.

Autonomous weapons systems will also always be vulnerable to countermeasures. One executive involved in Ukraine’s decentralised drone industry recently told Wired magazine how a peer company designing an autonomous machine gun had its radio signals jammed, causing it to fire indiscriminately.

Ensuring human control by other means

Founded in 2013 by a coalition of civil society organisations and disarmament groups, the Campaign to Stop Killer Robots now comprises more than 250 member organisations worldwide. The policies it proposes – which are backed by dozens of countries – follow a “two-tiered” approach. AWS that rely on sensor input alone to target humans, and so “do not allow for meaningful human control”, should be banned outright, they argue, while limited concessions could be made for systems that can demonstrate meaningful human control.

These are laudable positions. They also aren’t likely to gain universal traction. Military historian Phillips Payson O’Brien, chair of strategic studies at St Andrews University in Scotland, has made an uncomfortable point: “Programming machines to decide when to fire on which targets could have horrifying consequences for non-combatants. It should prompt intense moral debate. In practice, though, war short-circuits these discussions.”

However, in the absence of binding international laws, there may still be ways to control AWS. The key will be allowing for flexibility in what is meant by “meaningful human control”. In other words, an alternative – albeit imperfect – solution may be to forge a series of acceptable norms, best practices and bilateral understandings between countries or rival political blocs. An early example of this is the US government’s political declaration on the responsible military use of AI. Since being released in November 2023, it has been endorsed by 50 countries, including the UK.

This approach should encourage the proactive creation of de-escalation mechanisms in potential conflict hotspots. These may stop future military accidents, which are inevitable, from becoming full-blown catastrophes. Two separate rounds of talks in May between high-level officials from the US and China – first in Vienna on the risks of AI and later at the Shangri-La defence forum in Singapore, where the two countries’ defence chiefs agreed to keep military communication lines open – illustrate the kind of dialogue that should be replicated frequently, and on a global scale.

An expert on the emerging use of AI in combat, Paul Scharre – a former elite US army soldier and now the director of studies at the Center for a New American Security – wrote earlier this year that autonomous weapons might be acceptable if they are limited operationally in terms of “geography, time and the targets being attacked”.

This echoes recommendations from the Campaign to Stop Killer Robots. He further suggests that countries pool knowledge on the safest methods for testing AI-powered weapons to limit the risk of deadly errors. Proponents and critics of autonomous systems alike generally agree that if AWS are complicit in atrocities, the military officer who ordered their use could be prosecuted under existing international humanitarian law.
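As a rough illustration of what Scharre’s operational limits might look like if written directly into a system’s rules of engagement, here is a hypothetical sketch – the coordinates, time window and target classes are invented for the example and carry no real-world meaning:

```python
# Hypothetical sketch of operational constraints: engagement is only permitted
# inside a pre-authorised area, time window and target class; anything outside
# that mandate defaults to "do not engage".
from datetime import datetime, timezone

AUTHORISED_AREA = {"lat": (46.0, 47.0), "lon": (36.0, 37.5)}   # bounding box (illustrative)
AUTHORISED_WINDOW = (datetime(2024, 6, 1, tzinfo=timezone.utc),
                     datetime(2024, 6, 2, tzinfo=timezone.utc))
AUTHORISED_TARGETS = {"armoured_vehicle", "artillery_piece"}   # materiel only, never people

def within_mandate(lat: float, lon: float, when: datetime, target_class: str) -> bool:
    in_area = (AUTHORISED_AREA["lat"][0] <= lat <= AUTHORISED_AREA["lat"][1]
               and AUTHORISED_AREA["lon"][0] <= lon <= AUTHORISED_AREA["lon"][1])
    in_window = AUTHORISED_WINDOW[0] <= when <= AUTHORISED_WINDOW[1]
    allowed_target = target_class in AUTHORISED_TARGETS
    return in_area and in_window and allowed_target

# A civilian vehicle, even inside the authorised area and window, falls outside the mandate.
print(within_mandate(46.5, 36.8, datetime(2024, 6, 1, 12, tzinfo=timezone.utc), "civilian_vehicle"))  # False
```

Constraints of this kind do not make a system safe by themselves, but they show how “geography, time and targets” can be expressed as hard limits rather than aspirations.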

Scharre and others have also praised the US and UK for their decision to exclude AI systems from the command-and-control of their nuclear weapon arsenals. A unified UN Security Council position on this, they say, could soothe fears in an age of heightened nuclear anxiety.

After hosting a UN conference on autonomous weapons in late April, the government of Austria released a statement saying “the fact that the international situation is difficult does not absolve us from the political responsibility to address the challenges of autonomous weapons systems. This requires us to build partnerships across states, and regional bodies, UN entities, international organisations, civil society, academia, the tech sector and industry.”

This is absolutely true. Yet stakeholders should be vigilant in ensuring such efforts don’t allow perfection to become the enemy of the good. It took 60 years of widespread anti-personnel landmine use for the international community to rally together and create the Ottawa Treaty – which still hasn’t been signed by China, India, Russia, Saudi Arabia, the US and others. This underscores how, in the fractious decades ahead, a mosaic of non-binding measures around AWS will likely deliver meaningful human control over their use faster and more effectively than the pursuit of an all-encompassing global agreement.


