
Artificial Intelligence and Future Warfare

Koichiro Takagi
Japan Chair Fellow (Nonresident)
Paratroopers assigned to the 3rd Brigade Combat Team, 82nd Airborne Division conduct a systems check on their VROD/VMAX electronic warfare equipment before conducting a tactical exercise in El Paso, Texas, on November 20, 2018. (US Army Photo)

Artificial intelligence (AI) is one of the military technologies to which major powers have paid the most attention in recent years. The United States announced its Third Offset Strategy in 2014, which sought to maintain the US military advantage through advanced military technologies such as AI and unmanned weapons. The US National Security Strategy released on 12 October 2022 listed AI as one of the technologies in which the United States and its allies should promote investment and utilization.

Meanwhile, in 2019, China announced a new military strategy, Intelligentized Warfare, which utilizes AI. Officials of the Chinese People's Liberation Army (PLA) have stated that the PLA can overtake the US military by using AI. In a televised speech in September 2017, Russian President Putin said that the first country to develop true AI will rule the world. Major countries have thus shown great interest in the military use of AI in recent years, and a race to develop it is underway.

The Russo-Ukrainian war is the first conflict in history in which both sides use AI.1 Russia has used AI to conduct cyber-attacks and to create deep-fake videos showing President Zelensky surrendering. Ukraine, meanwhile, is using facial recognition technology to identify Russian agents and soldiers, as well as AI to analyze intelligence and plan strategies. However, AI is not a technology that should necessarily be used without limitation in the military. The most famous warning comes from physicist Stephen Hawking: AI may bring about the end of humankind. As symbolized by the science fiction film Terminator, a future in which weaponized AI loses control and revolts against humans has been invoked repeatedly. This article examines the advantages of the military use of AI, how it is likely to be used in future wars, and what dangers have been identified. It identifies four very different aspects of the military use of AI: faster information processing, faster decision-making, autonomous unmanned weapons, and use in cognitive warfare. Discussions of military applications of AI often focus on only some of these aspects; this article discusses them comprehensively.

Why the F-86 Outperformed the MiG-15 in the Korean War

AI is an enhancement of, or replacement for, the human brain. The weapons developed so far in human history have enhanced human muscles, eyes, and ears. Compared to primitive humans fighting with clubs, modern humans have far greater killing power, can see their enemies from thousands of miles away, and can communicate with allies thousands of miles away.

However, this is the first time in the long history of human warfare that the brain has been enhanced. The changes brought about by AI could therefore be unprecedented and distinctive.

Even before the practical application of AI, the speed at which the human brain processes information and makes decisions was the most important factor in determining who won or lost a war. It was John Boyd of the US Air Force who first theorized this. Based on his own experience of air combat in the Korean War, Boyd believed that the speed of a pilot's decision-making, rather than the performance of the fighter aircraft itself, could make the difference between winning and losing a battle.

In September 1951, about a year after the start of the Korean War, the Chinese Air Force began participating in the war as part of the People's Volunteer Army. The new Soviet-made MiG-15 fighters it flew were overwhelmingly superior to the US F-86 Sabre in aircraft performance, including ceiling, climb and turning performance, and the power of their guns.

However, it was the less capable US F-86 that won the air battle, with a shootdown ratio of 3.7 to 1.2 The reasons were the F-86's superior radar and its good cockpit visibility, which allowed the pilot to see in all directions. In contrast, visibility from the cockpit of the MiG-15 was extremely poor.

Boyd's experience serving in the Korean War as a young pilot led him to develop a new military theory in the 1960s and 1970s. His OODA loop was the first theory to argue for the importance of speed of decision-making in war.

According to his theory, combat is won or lost not by the performance of the weapon but by the speed of the OODA (Observation, Orientation, Decision, and Action) loop. In other words, the F-86, with its good pilot visibility, had superior decision-making speed: its pilot could quickly assess the situation and take action. The theory has since been applied in business and other fields.
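To make the cycle concrete, below is a minimal illustrative sketch of an OODA loop written as a software control loop. It is not drawn from Boyd's work, and every function and value in it is a hypothetical placeholder; the point is only that whichever side completes the loop faster acts first.

```python
# A minimal, illustrative sketch of Boyd's OODA loop as a control loop.
# All names and values are hypothetical placeholders, not a real system.
import random

def observe():
    """Collect raw data from the environment (stubbed with random data)."""
    return {"enemy_bearing": random.uniform(0, 360)}

def orient(observation, model):
    """Fuse the new observation into the existing picture of the situation."""
    model.update(observation)
    return model

def decide(model):
    """Choose an action based on the current picture."""
    return "turn_toward" if model["enemy_bearing"] < 180 else "turn_away"

def act(action):
    """Execute the chosen action (here, just report it)."""
    print(f"executing: {action}")

model = {}
for _ in range(3):  # the side that completes this cycle faster acts first
    act(decide(orient(observe(), model)))
```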

When Boyd developed this theory, AI had not yet been developed. However, Boyd showed that if humans could process information and make decisions quickly, they could win battles, even with inferior weapon performance.

However, Boyd's theory was derived from a relatively simple action: piloting a fighter aircraft. In subsequent wars, it became clear that there were limits to human information-processing capacity when the theory was applied to an entire military operation involving tens or hundreds of thousands of soldiers, weapons, and sensors operating organically.

Failures of Network-Centric Warfare

Many people have described the 1991 Gulf War as the first space war. The US military used reconnaissance satellites, communications satellites, and Global Positioning System (GPS) satellites to gather accurate information on the Iraqi army and shared the collected information over communications networks. The US military also used precision-guided weapons to carry out precise attacks against Iraqi forces.

New theories on decision-making speed emerged in the 1990s, when this method of warfare became possible. The leading theory was Network Centric Warfare (NCW), proposed by Arthur Cebrowski of the US Navy in 1998. He argued that networking military organizations using rapidly developing information and communications technology would enable rapid decision-making, and that overwhelming victory could be expected from superior decision-making speed.3

Cebrowski's theory was inspired by the business model of Wal-Mart, the fast-growing retailer of the 1990s. Wal-Mart's collection of payment information from cash registers, its real-time sharing of that information with producers and distributors, and the resulting optimized control of production and delivery were revolutionary at the time. Applying this to military organizations, Cebrowski proposed that networking them would enable optimal and rapid decision-making.

In 2001, President George W. Bush began to implement Cebrowski's theory immediately after taking office; NCW became the theoretical core of the transformation of the US military at the time. In the course of this reorganization, the United States entered the wars in Afghanistan and Iraq.

However, in these two wars, NCW was not always successful. One factor was information overload combined with inadequate information-processing capability.

During the wars, US military command centers in Kuwait and Qatar collected large amounts of information from satellites, manned and unmanned aircraft, a wide variety of radars and sensors, and many field units. But the information collected at headquarters in war is diverse and complex, containing large amounts of duplication, ambiguity, and inconsistency; it is not a simple data set like Wal-Mart's payment information. The information-processing technology of the time could not automatically process such complex data, and the US military command was unable to properly extract and exploit the large amount of information it gathered.

Cebrowski's NCW theory came to an end with the departure of President Bush, but the development of theories to speed up decision-making has continued since then. 

Improvement of Information Processing

The US Department of Defense (DoD) is currently developing a theory of Mosaic Warfare. At its core is the concept of Decision Centric Warfare (DCW),4 which aims at relative superiority in decision-making speed; its aim is similar to those of Boyd's and Cebrowski's theories. DCW differs significantly from those earlier concepts in that it makes use of AI and unmanned weapons. DoD is also developing Joint All-Domain Command and Control (JADC2), which uses AI to process data collected by large numbers of sensors to support commanders' decision-making.

Information-gathering systems used in modern warfare include satellites, manned and unmanned aircraft, and ground and undersea radars and sensors, and their number and variety are increasing. In addition, open-source data can be collected from the internet and other sources. Furthermore, the volume of data sent by each of these systems is exploding; for example, the resolution of satellite imagery has improved dramatically. AI has made it possible to process the vast amounts of data coming from these information-gathering systems. AI can also detect correlations among many different data sets, revealing changes that would otherwise go unnoticed by humans.
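As a rough, hypothetical illustration of this kind of cross-source analysis, the sketch below flags a time window in which two simulated sensor feeds spike together. The feed names, data, and threshold are invented for illustration; real multi-sensor fusion is far more sophisticated.

```python
# Toy sketch: flagging correlated activity across two hypothetical sensor
# feeds (e.g., radio-emission counts and vehicle counts). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
hours = 48
radio = rng.poisson(5, hours).astype(float)     # background emission counts
vehicles = rng.poisson(3, hours).astype(float)  # background vehicle counts
radio[30:36] += 20                              # simulated coordinated spike
vehicles[30:36] += 15

window = 6
for t in range(hours - window):
    r = np.corrcoef(radio[t:t + window], vehicles[t:t + window])[0, 1]
    if r > 0.9:  # strongly correlated window -> worth an analyst's attention
        print(f"hours {t}-{t + window}: correlation {r:.2f}, flag for review")
```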

The PLA is also focusing on the improvement of its information processing capabilities using AI. For example, they are building a network of unmanned weapons and undersea sensors in the waters around China and are using AI to process information from these networks.5 Furthermore, they are considering a new form of electronic warfare that uses AI to analyze received radio signals and optimize jamming.6 

As discussed above, the speed of decision-making is an extremely important factor in deciding who wins or loses a war, and various theories have been developed on this subject. However, the Iraq and Afghanistan wars revealed a lack of capacity to process large amounts of information. And now that AI can process large amounts of data, rapid decision-making is becoming a reality. 

The Dangers of Flash War

Given that speed of decision-making is the most important factor in deciding who wins or loses a war, if AI not only processes information but also makes decisions itself, it can be expected to reach decisions incomparably faster than humans can. Furthermore, AI is free of the biases, heuristics (intuition, preconceptions), fear and other emotions, and fatigue inherent in the human brain, and can be expected to make objective and accurate decisions even in the extreme conditions of war.

However, one of the problems with delegating decision-making in war to AI is the risk of flash war, where critical decisions, such as starting a war or launching nuclear missiles, are made at a moment's notice.7 

Throughout history, the leaders of major powers have invariably made restrained decisions when there was a risk of nuclear war. US President Harry Truman dismissed UN Commander Douglas MacArthur, who advocated the use of nuclear weapons in the Korean War. President Dwight Eisenhower did not send US troops to the Hungarian uprising in 1956. President John F. Kennedy did not stop Soviet troops from building the Berlin Wall. In such situations, if an AI performs its calculations mechanically, there is a risk that it will conclude that an early pre-emptive strike is the way to deter the other side and create a strategic advantage.

Since the 2010s, a number of studies in the United States have identified problems with AI making strategic decisions, such as launching nuclear missiles.8 Many of these studies argue that the use of AI increases the risk of nuclear war.

These studies invariably cite the anecdote of the Soviet officer who saved the world: in 1983, a Soviet automatic early-warning system detected nuclear missiles flying from the United States.9 According to the manual, the Soviet military was to launch its nuclear missiles in retaliation before the US missiles landed. However, the duty officer, Lieutenant Colonel Stanislav Petrov of the Soviet Air Defence Forces, suspected a false alarm and reported a system malfunction rather than an incoming attack. The warning system had in fact misread sunlight reflecting off clouds, and Petrov's judgment prevented the destruction of the world by nuclear weapons.

Given this, the prevailing view in Western countries is that the role of AI should be limited to supporting human decision-making, with humans making the final decision. The Decision Centric Warfare concept currently being developed by DoD likewise limits AI to a supporting role; for example, AI will create operational plans and propose them to the commander.

AI may be able to create better operational plans than humans. For example, AlphaGo surprised professional Go player Lee Sedol by playing a move that a human would never play. Similarly, AI may devise unconventional plans of maneuver that humans would never come up with. However, it should be humans who approve and implement them.

JADC2, which is being developed by the US military, also aims to reduce engagement times: AI identifies targets from the information collected by a large number of sensors and recommends to the commander the weapons to attack those targets.10 AI could also predict weapon failures in advance and recommend that maintenance units carry out maintenance at the optimum time. Furthermore, AI could predict the supplies needed by field units and recommend optimum quantities and transport plans to supply and transport units.11
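To give a concrete, deliberately simplified picture of such a recommendation, the sketch below pairs weapons with targets by greedy selection over estimated kill probabilities. This is not the actual JADC2 algorithm, whose details are not public; every name and number here is hypothetical.

```python
# Simplified sketch of AI-assisted weapon-target pairing via greedy
# assignment. All targets, weapons, and kill probabilities are hypothetical.
targets = ["radar_site", "ship", "aircraft"]
weapons = ["missile_a", "missile_b", "interceptor"]

# Hypothetical probability-of-kill estimate for each (weapon, target) pair.
p_kill = {
    ("missile_a", "radar_site"): 0.90, ("missile_a", "ship"): 0.60,
    ("missile_a", "aircraft"): 0.10, ("missile_b", "radar_site"): 0.50,
    ("missile_b", "ship"): 0.80, ("missile_b", "aircraft"): 0.20,
    ("interceptor", "radar_site"): 0.10, ("interceptor", "ship"): 0.20,
    ("interceptor", "aircraft"): 0.95,
}

available = set(weapons)
for target in targets:
    # Recommend the available weapon with the highest estimated p_kill;
    # the human commander would approve or reject each recommendation.
    best = max(available, key=lambda w: p_kill[(w, target)])
    print(f"recommend {best} -> {target} (p_kill {p_kill[(best, target)]:.2f})")
    available.remove(best)
```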

AI Dominates Human Decision-Making

However, even if the role of AI is limited to supporting human decision-making and humans make the final decision, there is still a risk of human judgment being dominated by AI. Many studies cite the downing of a US Navy F/A-18 by a US surface-to-air missile unit in the 2003 Iraq War. In this incident, the automated system of the US Patriot surface-to-air missile misidentified a friendly aircraft as an enemy aircraft. The human operators, who had only a few seconds to decide, fired the missile in accordance with the automated system's indication, killing the pilot of the friendly F/A-18.12

What this case shows is that in the stressful conditions of combat, with little time to decide, humans are forced to rely on the judgment of machines. Psychologists have also demonstrated that once humans come to trust machines, they continue to trust them even when evidence emerges that the machine's judgment is incorrect.13

As discussed above, delegating decision-making itself to AI is expected to reduce the time required for decision-making and provide objective judgments that are not influenced by biases, heuristics, and fatigue inherent in humans. However, a number of studies point to the dangers of flash wars of instantaneous escalation and the risk of human decision-making being dominated by AI.

Unmanned Weapons Outperform Humans in Decision-Making Speed

Unmanned weapons have a long history; Israel put them to practical use in the 1970s. During the 1991 Gulf War, the US military used unmanned reconnaissance aircraft in actual combat. The use of unmanned weapons then increased dramatically during the Iraq War, with the US military's inventory growing from 150 in 2004 to 12,000 in 2008.

Human operators remotely operated these early drones, which had limited AI autonomy. Nevertheless, unmanned weapons had numerous advantages and their use spread rapidly.

Unmanned weapons can operate continuously for long periods, at high altitudes or in deep water, and can make rapid turns at high speed that would be impossible with a human crew on board; consequently, they can achieve unparalleled kinetic performance. Furthermore, unmanned weapons can carry out missions in hazardous locations without risking the lives of pilots or crews. The absence of a crew means that living space, life support, and safety equipment are not required, allowing cheaper manufacturing and smaller airframes. Smaller airframes are less visible on radar and can operate undetected.

However, remote control by human operators requires a communication link between the unmanned weapon and its operator, and the weapon becomes unmaneuverable if that link is disrupted. In the Russo-Ukrainian war, Ukraine's Bayraktar TB-2 unmanned aircraft were highly effective in the early stages, but Russian electronic warfare units gradually jammed the remotely piloted TB-2.14

In contrast, AI-equipped unmanned weapons that operate autonomously can continue to operate in the face of jamming. In addition, the US military is currently facing an overwhelming shortage of manned aircraft pilots and unmanned aircraft operators, and autonomous weapons will help to resolve the personnel shortage.15

Above all, an AI-piloted unmanned weapon will operate significantly faster. As the match-up between the US F-86 and the Soviet MiG-15 in the Korean War showed, decision-making speed is the most important factor in warfare. When remotely piloted weapons are pitted against AI-autonomous ones, human operators cannot compete with the overwhelming decision-making speed of autonomous unmanned weapons.

Further development of autonomous technology could lead to swarms consisting of numerous unmanned weapons. A colony of ants, for example, is self-organized as if it were a single living organism, carrying food in processions and building complexly shaped nests even though no individual ant directs the others. Similarly, if a large number of autonomous unmanned weapons were to self-organize, the result would be a powerful weapon swarm that continues to carry out its mission as a whole, even if communications are disrupted or some individuals are lost to enemy fire.
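As a rough illustration of self-organization without a central controller, the toy sketch below applies a boids-style cohesion rule: each simulated agent steers toward the average position of its local neighbors, so the group clusters even though no agent directs the others. It is a toy model, not an actual swarm controller; the radius and gain values are arbitrary.

```python
# Toy sketch of decentralized swarm behavior: each agent senses only nearby
# neighbors and steers toward their average position (boids-style cohesion).
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(20, 2))   # 20 agents on a 2D plane
velocities = rng.uniform(-1, 1, size=(20, 2))

for step in range(50):
    for i in range(len(positions)):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = (dists > 0) & (dists < 20)  # local sensing radius only
        if neighbors.any():
            center = positions[neighbors].mean(axis=0)
            velocities[i] += 0.05 * (center - positions[i])  # cohesion pull
        velocities[i] *= 0.95                                # damping
    positions += velocities

spread = np.linalg.norm(positions - positions.mean(axis=0), axis=1).mean()
print(f"mean distance from swarm center after 50 steps: {spread:.1f}")
```

Because the rule is purely local, removing agents mid-run would not break the behavior of the remaining swarm, which is the robustness property described above.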

Advanced AI technologies are already making unmanned weapons more autonomous. In recent years, even remotely piloted unmanned aerial vehicles have increasingly used AI for autonomous take-off, landing, and routine flight, leaving human operators to concentrate on tactical decisions such as selecting and attacking targets. For example, the US military's X-47B unmanned stealth combat aerial vehicle auto-landed on an aircraft carrier in 2013 and refueled in the air under autonomous control in 2015. As such autonomy advances, unmanned weapons given an overarching mission, such as protecting an aircraft carrier from enemy attack, will be able to carry it out autonomously.

Regulations That Only Tie the Hands of the Good People

However, the prevailing view in Western countries is that unmanned weapons that attack autonomously, without human intervention, are unethical and should be regulated.16 The US DoD has established guidelines on the autonomy of weapon systems and set certain standards.17

The danger of autonomous unmanned weapons running amok, as in the future depicted in the science fiction film Terminator, has also been raised. The Swedish philosopher Nick Bostrom's thought experiment is often cited in this connection. Suppose a fully autonomous robot is given the goal of making paperclips. The robot finds the most effective means of achieving its goal and carries it out: it gradually destroys human civilization, using up all the resources on the planet to make paperclips, and humanity perishes. The robot eventually expands into space and overruns the cosmos.

International efforts to regulate Lethal Autonomous Weapons Systems (LAWS) are also underway. However, concrete efforts have made little headway, with China opposing regulation and Russia opposing the discussion itself; consensus has not been reached even on what should be regulated.

A dilemma exists in the regulation of AI-equipped autonomous weapons. As discussed above, speed of decision-making is of paramount importance in winning or losing a war. Regulation of autonomous weapons therefore sacrifices this most important capability, and countries that do not comply with the regulation may unfairly benefit. In other words, there is a danger that the regulations will only tie the hands of the good people.

Controlling Human Cognition with Deep Fakes

China and Russia are attempting to use AI from a completely different perspective than the faster information processing, AI decision-making, and autonomous unmanned weapons discussed above: its use in cognitive warfare, which targets the cognition of the human brain and influences the opponent's will in order to create a strategically advantageous environment or to bring the opponent to its knees without a fight.

Qi Jianguo, former deputy chief of staff of the PLA, has stated that if the PLA gains an advantage in the development of AI technology, the PLA will be able to control human cognition, which is the lifeline of national security.18 However, it is not clear how exactly the PLA intends to control human cognition.

A common possible method is the use of deep fakes, that is, videos, images, and audio falsified or generated using AI. One example is Russia's attempt, at the beginning of the Russo-Ukrainian war, to spread on social media a fake video of President Zelensky calling on his people to stop fighting and surrender.

Similarly, there are concerns that China may use AI-based language generation and other methods to create social media content, which could be used to manipulate public opinion in Taiwan or to discredit US military operations supporting Taiwan.19

Indeed, when US Speaker of the House Nancy Pelosi visited Taiwan in August 2022, China spread fake news that PLA Su-35 fighter jets had violated Taiwan's airspace. And in 2020, the Chinese government systematically spread the theory that the virus causing COVID-19 was a US biological weapon, using state media and social media.20

Attempts to spread disinformation and create false perceptions among people have existed for many years. A frequently cited past success is the disinformation campaign launched by the Soviet Union in 1983, which claimed that the human immunodeficiency virus (HIV) was a man-made virus created by the US military. This disinformation spread around the world and reshaped the perceptions of many people. One survey shows that even today, 48% of African Americans believe that HIV is a man-made virus.

However, it took the Soviet Union 10 years to spread the HIV disinformation around the world. By comparison, AI-based deep fakes can create large amounts of disinformation in a short time, and bots that operate automatically on the internet allow this disinformation to spread to a large number of people.

Another method is to confuse the opponent's decision-making through disinformation, thereby speeding up one's own decision-making in relative terms. Decision Centric Warfare, currently being developed by DoD, likewise uses electronic warfare weapons, unmanned aerial vehicles, and other means to confuse the enemy and impose a decision load on the enemy command, thereby gaining relative superiority in decision-making speed.

Will AI or Human Intelligence Determine Future Warfare?

Whereas conventional weapons have enhanced human muscles, eyes, and ears, AI, for the first time in human history, enhances the human brain and could realize the most important aspect of warfare: rapid decision-making.

Many strategists have argued that the characteristics of warfare change with technological progress, but that the nature of warfare remains constant. For example, Carl von Clausewitz's propositions that war is a contest of wills between two sides and that war is shrouded in fog have been regarded as part of the immutable nature of war.

However, some theorists have argued that this supposedly immutable nature of warfare would be changed by science and technology. In the 1990s, for example, when numerous satellites and sensors came into use in warfare, some theorists argued that the fog of war would be lifted: the traditional battlefield, shrouded in fog with no information on the enemy, would be cleared by the accurate information provided by satellites and other technologies.

However, the wars in Iraq and Afghanistan in the 2000s revealed these claims to be false. Many military personnel who served in those wars testified that the battlefield was still shrouded in fog. The information available on the battlefield was incomplete and sometimes mutually contradictory, and there was information overload, with more information coming in than could be processed.

In the first place, Clausewitz's concept of the fog of war refers not to a lack of information but to uncertainty and chance in war. In the extreme conditions of war, people are often unable to make accurate decisions because of fear or fatigue. Since ancient times, there have been many instances in which a single soldier fleeing a battle out of fear led to the total collapse of hundreds or more troops without a fight. Clausewitz's fog of war includes the uncertainties and coincidences that accompany human nature, such as fear.

Thus, those who claimed in the 1990s that the fog of war would lift were criticized for not understanding Clausewitz's theory. And it became clear once again that the nature of war as pointed out by Clausewitz is unchanging as long as the actors in war are human beings.

In the 2010s, with the practical application of AI, there were renewed indications that the nature of war could change. In May 2017, US Deputy Secretary of Defense Robert Work said that artificial intelligence may change the nature of war.21 AI is not associated with the fear and fatigue inherent in humans and can process large amounts of information accurately in short periods of time. AI information processing, decision-making, and autonomous unmanned weapons, therefore, appear to be unrelated to the fog of war.

However, AI itself could become the source of a new fog of war. The dangers of flash wars, of human decision-making dominated by AI, and of autonomous unmanned weapons running amok bring new uncertainties and coincidences to war. Thus, many argue that even with the development of AI, as long as it is humans who conduct warfare, elements arising from human nature will remain, and the nature of warfare will remain unchanged.22

Underlying this debate is the question of whether it is science and technology or human intelligence that will determine the future. In 1940, the Germans defeated the French army with the innovative concept of blitzkrieg using tanks; yet the defeated French were superior in both the number and performance of their tanks. In 1870, the Prussian army defeated the French using railways; yet the defeated French were superior in both the extent and performance of their railways.

Thus, throughout history, it has not been the superiority of science and technology itself, but the human intelligence that uses it, that has won or lost wars. Future warfare may be determined not by the science and technology of AI itself, but by the innovativeness of the concepts that utilize it, and by human intelligence and creativity.