“My God, what have we done?” This was the now-famous reaction of the Enola Gay co-pilot after the first atomic bomb was dropped over Hiroshima in 1945. Today we know that, two months earlier, scientists had issued the Franck Report advising against its use, but their views were rejected by decision-makers in the name of military necessity.
It was no coincidence that the first resolution of the United Nations General Assembly, adopted in 1946, established a commission to deal with the problems raised by the atomic age and to make proposals on how to ensure its use “only for peaceful purposes”. The International Atomic Energy Agency (IAEA) was later established, in 1957, to promote the peaceful uses of nuclear energy while discouraging its use for military purposes. In doing so, the IAEA Statute highlighted the contribution of atomic energy to “peace, health, and prosperity throughout the world”.
Governing powerful technologies is a tricky business, and artificial intelligence (AI) may face similar challenges in the future. Obviously, the analogy with nuclear weapons has its limitations: AI is an enabling technology that cannot be reduced to a single, clearly identifiable device (such as a nuclear warhead), and no catastrophic event akin to an atomic bomb detonation has occurred to date. The long-term impact of AI on military affairs, however, has the potential to reshape the character of war in the twenty-first century. Worse still, weaponizing AI systems with growing autonomy may someday bring us, the human race, to the brink of losing control over the use of force.
To be sure, the fast-moving militarization of AI has been taking place in a free-for-all security environment in which “anything goes”. Great-power competition has prompted military powers to invest heavily in the research and development of high-tech weapons and capabilities, without constraints. But even in war there are rules to be observed. International humanitarian law (IHL) clearly stipulates that the right of parties to a conflict to choose methods or means of warfare “is not unlimited”, as set out in Article 35 of the 1977 Additional Protocol I to the Geneva Conventions of 1949.
Ironically, despite concerns about a looming clash between major powers, AI-enabled autonomous weapons would likely be deployed first in the Global South. Low-intensity and small-scale wars raging in conflict zones in developing countries, including urban warfare, insurgencies, and civil wars, may become a testing ground for these weapons, possibly leading to unexpected fatal engagements, collateral damage, and asymmetric encounters of machines against biological adversaries.
The dangerous combination of remote warfare and depersonalized robotic platforms would increase the likelihood of machines killing people, combatants and civilians alike, in poor countries. If the risk of casualties is close to nil for an attacker deploying armed robots, claims that AI-powered weapons can “save lives” – because they are supposedly more precise and “risk-free” – sound like a one-sided justification for interventionism to those on the receiving end.
And yet the international governance of AI remains fragmented at the global level, and ongoing attempts to regulate autonomous weapons have yet to produce a substantial outcome. Discussions in Geneva within the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS) are notoriously slow. The 11 guiding principles adopted by consensus by the high contracting parties to the Convention on Certain Conventional Weapons (CCW) in 2019 should be further developed to build substance towards a normative and operational framework on these weapons.
A vast majority of delegations in the GGE supports an international legally binding instrument that could properly address legal, ethical, and humanitarian concerns posed by LAWS. States will need to negotiate in earnest, sooner rather than later, in order to preserve strategic stability, guard against unacceptable risks, protect human dignity, uphold IHL, and secure human control over the use of force in the long run.
Last May, the International Committee of the Red Cross (ICRC) updated its position on autonomous weapons and recommended new legally binding rules along three lines: a prohibition on unpredictable systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted, and explained; a prohibition on systems that are designed or used to apply force against persons; and limits on the types of target, duration, geographical scope, and scale of use, among other requirements.
The ICRC’s approach makes sense to the extent that the inherent unpredictability of self-learning systems is a non-starter in terms of human control (or the lack of it). One cannot simply deploy an unpredictable autonomous weapon and “hope” that it will do the job as expected. This is why, when lethal force is applied, it is entirely inappropriate for a human operator to release such a weapon into the wild and allow it to proceed all by itself, engaging targets without further human oversight. No “standard of appropriate human judgment over the use of lethal force” can be met in this manner as far as unpredictable systems are concerned.
Weapon systems with autonomous functionalities designed to kill people are even more appalling. It is ethically imperative that we draw the line somewhere. Should machines decide by themselves who must live and who must die? Prohibiting AI systems from making this determination is the bare minimum from a moral standpoint. In his PhD thesis, military expert Paul Scharre conceded that banning anti-personnel autonomous weapons could be an option worth considering when there is no human on the loop. Large-scale industrial production of these weapons would be difficult to hide and their military utility would be low. Their extreme potential to do harm nevertheless strongly counsels utmost restraint.
Even if war robots could become more efficient over time, delegating the moral burden of killing to machines poses fundamental ethical problems. Ethics are bound up with human values and the organization of societies. Moral decisions belong to the individual concerned and cannot be delegated to others. Outsourcing ethics to a machine that is not anchored in a network of human dialogue is contrary to the basic principles of morality, as Boddington has argued. Turning moral judgments over to computer software still means transferring ethics from a person to an external entity.
People are at the center of IHL, which exists inter alia to protect them from superfluous injury or unnecessary suffering in armed conflict. Machines are non-human entities by definition, and responsibility for the fulfillment of states’ legal obligations must not be relinquished to algorithms. Most importantly, we should never delegate life-and-death decisions to machines. This is a principle with far-reaching implications in warfare and in the civilian domain alike. In health care, for instance, the ultimate decision on whether to perform euthanasia on a patient belongs to humans alone.
United Nations Secretary-General António Guterres has repeatedly warned of the risks associated with the weaponization of AI. He has made it clear that the prospect of deploying machines with the power to take lives without human involvement is “politically unacceptable, morally repugnant, and should be prohibited by international law”. In the same vein, the High-Level Panel on Digital Cooperation, convened by the Secretary-General, included in its 2019 report recommendation 3C on AI and autonomous intelligent systems, in which the experts reaffirmed the foundational moral principle that “life and death decisions should not be delegated to machines”.
A few months ago, a European Parliament resolution on international public law and military uses of AI underlined that meaningful human intervention and supervision are essential in the process of making lethal decisions. Since human beings should always be responsible when deciding between life and death, systems without any human oversight, the resolution says, “must be banned with no exceptions and under all circumstances”. The AI research community, civil society, and scholars have also been stressing this point.
It is high time for all states to speak up and champion the principle of peaceful uses of AI systems as a cornerstone of international law. World leaders should endorse the key notion that AI must be developed for the common good and for peace, from an ethical, legal, and human-centered perspective. Governments should commit to this principle unequivocally, at the highest level, for the benefit of all humanity.
There is nothing inevitable about the future. Decisions made today have the power to shape where the world will stand in 10 to 15 years. Failing to act to avert an unrestrained AI arms race is hardly in our best interest. Delegating life and death decisions to machines has a significance that goes far beyond military efficiency. We cannot afford to leave such momentous decisions to be made in times of war, pressed by dramatic events, particularly against expert advice. God forbid that we should witness another “Hiroshima moment” that human beings will forever regret.
ABOUT THE AUTHOR
Eugenio V. Garcia is a Tech Diplomat at the Consulate General of Brazil in San Francisco, USA, where he serves as Deputy Consul General and Head of Science, Technology, and Innovation, and as focal point for Silicon Valley and the Bay Area innovation ecosystem. He holds a Ph.D. in International Relations and researches the international governance of artificial intelligence.