From the invention of gunpowder to the machine gun and tank, the military has always been an early adopter of the latest technological innovations. Computing has had a major impact on warfare ever since Alan Turing’s Bombe machine was used to crack the German Enigma cyphers in World War II. The driverless taxis from Waymo that ply their trade today on the streets of San Francisco have their roots in the 2004 Grand Challenge project, sponsored by the Défense Advanced Research Projects Agency (DARPA). Today, artificial intelligence (AI) is having a widespread impact on various aspects of the defence industry, from strategy through to operations and in military technology itself.
The US Army has put AI firmly at the heart of its planning. One key aspect is in the military decision-making process, using AI to plan missions, drawing on intelligence reports, terrain information, weather patterns and sensor feeds to help identify targets and forecast enemy movements. Technology allows mission simulations to be run and unit orders to be issued that comply with military doctrine and standards. It should be noted that previous attempts to introduce AI into the process have been made over several decades, and they were abandoned, mostly because the technology did not perform well under battlefield conditions. It is thought that the latest in AI technology may fare better. The idea is to develop a command and control system that will use AI to link sensor data together from different branches of the armed forces into a unified network.
AI has had a better track record in military logistics, with machine learning being used in the Dynamic Analysis and Replanning (DART) tool as far back as 1991 to schedule transportation of supplies. Similarly, AI has been used in radar analysis and missile threat detection systems. Predictive maintenance, where AI models analyse sensor data to predict failing parts, has been used in various industries, including defence, to reduce the cost of replacing parts in equipment. In 2024, the US military alone had almost $2 billion allotted for AI project funding, and 800 AI projects.
One of the areas where AI is being actively employed is in drones, which I have written about here. Another is in cybersecurity, which is clearly an important part of military capability. AI brings both opportunities and threats here, as it can be used for threat detection and to provide autonomous responses, but also brings new vulnerabilities, such as prompt injection in large language models.
There are a number of issues and challenges. The black box nature of artificial neural networks means that the decision-making process of AI recommendations is opaque, which undermines trust. Large language models are heavily dependent on their training data and tend to be erratic when presented with unusual or edge case inputs. This phenomenon is why driverless cars are currently restricted to geo-fenced areas of carefully mapped cities, and are not roaming the countryside.
A major consideration is ethics. There is increasing use of remotely controlled weaponry, from unmanned aerial vehicles to ground drones, already deployed in the Ukraine war. Such platforms have the capability of operating autonomously as well as being operated remotely, an important consideration when electronic warfare is used to jam communications. However, if a drone is used not merely to transport equipment or for reconnaissance, but also is itself armed, then the drone could start shooting at targets of its own volition. There has been considerable debate about the ethics of this. There are already such technologies: the Israeli IAI Harop is a loitering munition that can autonomously search for targets emitting radar signals, choose a target and attack it. The system has a “human in the loop’ for final strike decisions, but the technology is entirely capable of operating autonomously. It can be argued that there is plenty of such technology already deployed: landmines do not ask a human for permission before exploding. Nonetheless, there are obvious legal and ethical concerns about these “lethal autonomous weapon systems”. There is even a concern that politicians may be more willing to enter conflicts if the “soldiers” are mostly robots and drones that do not have grieving families and military funerals. There are already campaigns underway to set restrictions or bans on such weapons.
The pace of technological change and the urgency of wartime situations for driving battlefield innovation mean that it is inevitable that AI will be embedded more and more into actual weapons systems, planning systems, and other military applications. Now is the time to have debates about the ethics of this and how such systems may be controlled, while they are still mostly in development rather than actively deployed.







