A drone buzzes over a tumultuous warzone, scanning dusty ruins and disarrayed troops. Unlike the human fighters, who are forced to deal with threats coming at them from every direction, this drone seems more single-minded, at peace with its duty of finding and eliminating a single target. When it finally locates its quarry, the next actions are simple mechanics: Lock. Load. Shoot.
While the involvement of such advanced technology in warfare once seemed a figment of science fiction, leaps and bounds in artificial intelligence, applied in fields such as facial recognition and risk assessment, have made it an inevitable reality. As with many other areas of innovation, the policies and treaties governing it have been left in the rearview, with only weak predictions and insubstantial precedents in place to regulate its use. The beauty of AI technology is also its most dangerous aspect: up to a point, anyone can access it, which means it has already found its way into the hands of both the most powerful governments and the most violent fringe groups. Terrorist organizations such as the Taliban and Al-Qaeda have long been “early adopters” of newer technologies, resorting to less-tested methods in the hope of gaining any sort of strategic advantage.
To understand the risks posed by AI, one must first understand how it works. According to the U.S. Department of Defense, “AI refers to the ability of machines to perform tasks that would otherwise require human intelligence, such as recognizing patterns, learning from experience, drawing conclusions, making predictions or generating recommendations.” That capacity comes from machine learning, which is exactly what it sounds like: computers are “trained” on known datasets to perform certain operations, and through that training they “learn” to apply the same patterns to new, unseen inputs. The more data a model processes, the more experience it accrues and the more accurate its outputs tend to become. These capabilities are commonly thought of as innocuous; at this point, AI powers mainstream features in applications such as Instagram and WhatsApp, where it generates custom emoticons, and ChatGPT has become every student’s best friend, regardless of what they study.
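The train-then-apply loop described above can be illustrated in a few lines of code. The sketch below is a deliberately simplified, civilian example, assuming Python and the scikit-learn library with its bundled handwritten-digits dataset; the dataset, the model, and the train/test split are illustrative choices, not a depiction of any military system.

```python
# A minimal sketch of "training" on known, labeled data and then applying
# the learned patterns to inputs the model has never seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # known dataset: digit images paired with correct labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)          # "learning": fitting recurring patterns in the known data
print(model.score(X_test, y_test))   # applying that experience to unseen examples
```

The same principle scales up: the more (and more relevant) data the model is trained on, the better its predictions tend to become, which is exactly what makes large streams of surveillance and battlefield data so valuable to militaries.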
Military AI is on an entirely different level. It can include drones with the ability to locate and attack targets on their own, such as the Israeli Harpy, a loitering munition that can be assigned to a designated area, where it hunts for enemy radar emitters and destroys them with an onboard explosive. Such devices have existed for years; in 2018, two small explosive-laden drones were used in an unsuccessful assassination attempt on Venezuela’s president, Nicolás Maduro. Less spoken about but equally impactful is AI’s role in military strategy, as its capacity to process historical and real-time data and recognize recurring patterns makes it a valuable decision-making tool.
On the surface, it seems as though AI is capable of doing anything humans can, only more efficiently. What, then, makes it so dangerous that many argue it merits its own set of governing principles beyond those that currently oversee mechanized defense instruments? The answer can be found in the same science fiction tropes that predicted the rise of the machines long before humanity had the means to actualize it: these systems lack the humanity, and by extension the ethics, needed to self-regulate. Whereas even the cruelest militant has the capacity to stop themselves from carrying out an operation out of some level of moral apprehension, an autonomous system is literally programmed to carry out its task by any means necessary, via whichever pathway it deems optimal. Even though deployment is typically governed by human users, the actual execution of an objective is triggered the instant the algorithm receives the input it needs (the identification of an enemy base, for instance) to produce its designated output, regardless of potential mitigating factors. This loss of subjectivity is what creates humanitarian concerns, and the widespread nature of the technology makes it a global issue.
Currently, the main piece of policy designed to temper the effects of such unprecedented innovation is the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, an initiative launched by the United States government in February 2023 at the REAIM (Responsible AI in the Military Domain) Summit in The Hague. Endorsed by 56 states thus far, the Declaration encourages responsibility in the use of AI technologies through “legal reviews”, “proactive steps to minimize unintended bias”, and “transparent” and “auditable” development processes. The document does not address the humanitarian concerns raised by many human rights groups, nor does it account for preventing terrorist exploitation of such devices. At the moment, the U.S. Department of Defense remains at the forefront of transparency measures, having published numerous policy briefs and strategies in recent months; however, it is unclear whether this forthcomingness stems from a desire to preserve room for rapid, unfettered innovation or from genuinely good-faith concerns.
Critics of military AI have suggested multiple means of overseeing the industry’s rapid growth. These include standardizing the technology so that no country gains an unfair advantage, a measure that would strain existing intellectual property protections, and regulating the datasets used in machine learning so that harmful biases do not find their way into the resulting programs. It is clear that this issue is a nuanced one: AI is the newest step in military technology, following in the line of biochemical warfare and nuclear missiles. However, its accessibility and rapidly evolving nature make it harder to regulate and, by extension, harder to morally police. Unlike earlier weapons technologies, it represents the shift to a new era of machine autonomy and a departure from human sentiment. It is clear that new policies must be put in place not only to facilitate the use of AI, but to police it.