US lags behind China and Russia in preparing for 'wars of the future'

On the battlefields of Ukraine, the future of war is quickly becoming its present. Thousands of drones fill the sky. These drones and their operators use artificial intelligence (AI) systems to avoid obstacles and identify potential targets.

AI models also help Ukraine predict where to strike. Thanks to these systems, Ukrainian soldiers destroy tanks and shoot down planes with devastating efficiency. Russian units are under constant surveillance and their lines of communication are vulnerable to disruption - and the same is true of Ukrainian ones. Both countries are racing to develop ever more advanced technologies to counter the incessant attacks and overcome the enemy's defences.

The war in Ukraine is hardly the only conflict in which new technologies are changing how wars are fought. In Myanmar and Sudan, both rebels and government forces are using unmanned vehicles and algorithms as they fight. In 2020, an autonomous Turkish-made drone used by forces backed by the Libyan government struck retreating fighters - possibly the first attack carried out by a drone operating on its own. In the same year, the Azerbaijani military used Turkish- and Israeli-made drones, along with loitering munitions (explosives designed to hover over a target), to seize the disputed enclave of Nagorno-Karabakh. And in Gaza, Israel has thousands of drones linked to AI algorithms that help Israeli troops navigate the enclave's cities.

In one sense, there is nothing surprising about this development: war has always spurred innovation. But today's changes are unusually rapid, and their effects will be far greater. Future wars will no longer be decided by who can muster the most people or field the best planes, ships and tanks. Instead, they will be dominated by increasingly autonomous weapon systems and powerful algorithms.

Unfortunately, this is a future for which the United States is not prepared. US troops are not yet ready to fight in an environment where they can rarely count on the element of surprise. American planes, ships and tanks are not equipped to defend against drone attacks. The military has yet to embrace AI. The Pentagon has too few initiatives aimed at closing these gaps - and its current efforts are moving too slowly. Meanwhile, the Russian military has deployed many AI-enabled unmanned aerial vehicles in Ukraine. And in April, China announced its biggest military restructuring in nearly a decade, placing new emphasis on building technology-driven capabilities.

If it is to remain a leading world power, the United States will need to change course quickly. It must reform the structure of its armed forces, their tactics and the way it develops military leaders; it needs new ways of buying equipment and new types of equipment to buy; and it must train soldiers to operate drones and work with AI.

American politicians, accustomed to running the most powerful defence apparatus in the world, may not like the idea of such systemic reform. But robots and artificial intelligence are here to stay. If the US fails to lead this revolution, malicious actors armed with the new technologies will be more likely to attack the country.

When they do, they may succeed. Even if Washington prevails, it will find itself increasingly surrounded by military systems built to support autocracies and deployed with scant regard for liberal values. That's why the United States needs to transform its armed forces so it can maintain a decisive military advantage — and ensure that robots and AI are used ethically.

The nature of war is probably unchanging. In almost every armed conflict, one side seeks to impose its political will on the other through organized violence. Battles are fought with imperfect information. Militaries must contend with ever-changing dynamics, including within their ranks, between themselves and their governments, and between themselves and ordinary people. The troops experience fear, bloodshed and death. These realities are unlikely to change even with the introduction of robots.

But the character of war - how armies fight, where and when they fight, and with what weapons and leadership techniques - can and does change. It shifts in response to politics, demographics and economics, yet few forces drive change more powerfully than technology. The invention of the saddle and the horseshoe, for example, helped create cavalry in the ninth century BC, expanding the battlefield beyond the flat ground that chariots required and into new kinds of terrain. The longbow, which could loose arrows over great distances, allowed defenders to pierce heavy armour and cut down advancing armies from afar. The invention of gunpowder in the ninth century AD led to explosives and firearms; in response, defenders built stronger fortifications and placed greater emphasis on the production of weapons. The effect of technology became more pronounced with the Industrial Revolution, which produced the machine gun, the steamship and the radio, and eventually motorized and armoured vehicles, aircraft and missiles.

How well militaries perform often depends on how well they adapt to and embrace technological innovation. During the American Revolution, for example, the Continental Army fired muskets at the British in massed volleys, then advanced with bayonets fixed. The tactic worked because Continental forces could cross the gap between the opposing lines before the British could reload. By the Civil War, however, muskets had given way to rifles, which took far less time to reload and were far more accurate. As a result, defending armies could cut down advancing infantry. Generals on both sides adjusted their tactics - for example by using sharpshooters and defensive fortifications such as trenches - and their decisions paved the way for the trench warfare of World War I.

Technological adaptation also proved essential in World War II. On the eve of that conflict, all the advanced powers had access to the then-new technologies of motor vehicles, armoured tanks, aircraft and radio. It was the German army, however, that proved most innovative in bringing these components together. Its new military doctrine, commonly known as blitzkrieg ("lightning war"), combined aerial bombardment that disrupted communications and supply lines with armoured and infantry assaults that broke through Allied lines and then drove far behind them. As a result, the Germans conquered almost all of Europe in 18 months. They were stopped at Stalingrad, but only by a Soviet army prepared to absorb enormous losses.

To respond, the Allies had to develop similar tactics and formations. They had to exemplify what Schmidt has called "innovation power": the ability to invent, adapt and adopt new technologies faster than one's competitors. Eventually, they mechanized their forces, developed better ways of communicating, deployed air power on a massive scale and, in the case of the Americans, built and used the world's first nuclear bombs. In this way they managed to defeat the Axis on several fronts at once.

The Allied effort was extraordinary - and yet the Allies still came close to defeat. Had Germany managed its industrial capacity more effectively, made better strategic choices or beaten the United States to the atomic bomb, Berlin's initial innovation advantage might have proved decisive. The outcome of World War II may now seem predetermined, but, as the Duke of Wellington said of Waterloo more than a century earlier, it was a near-run thing.

It is often difficult for military planners to predict which innovations will define future battles. Today, however, predictions are easier to make. Drones are ubiquitous, and robots are increasingly in use. The wars in Gaza and Ukraine have shown that AI is already changing the way states fight. The next major conflict is likely to see AI fully integrated into every aspect of military planning and execution. AI systems could, for example, simulate thousands of different tactical and operational approaches, dramatically shortening the gap between preparation and execution. The Chinese military has already created an AI commander with supreme authority in large-scale virtual war games. Although Beijing prohibits AI systems from making decisions in real-world operations, they can draw lessons from their many virtual simulations and feed them to human decision-makers. Ultimately, China may give AI models the power to make decisions, as may other countries. Soldiers could sip coffee in offices far from the battlefield, watching screens, while AI systems control all kinds of robotic war machines. Ukraine is already looking to hand off as many dangerous frontline tasks as possible to robots in order to conserve scarce manpower.

Until now, automation has centred on naval and air power in the form of sea and air drones, but it will soon extend to land warfare as well. In the future, the first phase of any war is likely to be fought by ground robots capable of everything from reconnaissance to direct attack. Russia has already deployed unmanned ground vehicles that can launch anti-tank missiles, grenades and drones. Ukraine uses robots to evacuate casualties and dispose of explosives. The next generation of machines will be guided by AI systems that use the robots' sensors to map the battlefield and predict points of attack. Even when soldiers do eventually intervene, they will be guided by first-person-view aerial drones that help identify the enemy (as is already happening in Ukraine), and they will rely on machines to clear minefields, absorb the first enemy volleys and expose hidden opponents. If Russia's war in Ukraine spreads to other parts of Europe, the first wave of ground robots and aerial drones could enable both NATO and Russia to cover a far wider frontline than humans alone could attack or defend.

The automation of warfare could prove essential to saving civilian lives. In the past, wars were often fought and won in open areas where few people lived. But as global urbanization draws more people into cities and non-state actors turn to urban guerrilla tactics, the decisive battlegrounds of the future are likely to be densely populated areas. Such fights are far deadlier and far more resource-intensive, so they will demand even more robotic weapons. Militaries will need to deploy small, manoeuvrable robots (such as robot dogs) in the streets and fill the skies with drones to take control of urban positions, all guided by algorithms that can process visual data and make split-second decisions. Israel has helped pioneer such technology, deploying what was arguably the first true drone swarm in Gaza in 2021. The individual drones evaded Hamas defences and communicated with one another through an AI-enabled weapons system to make collective decisions about where to go.

The use of unmanned weapons is essential for another reason: they are cheap. Drones are a far more affordable class of weapon than traditional military aircraft. The MQ-9 Reaper drone, for example, costs about a quarter of the price of an F-35 fighter jet - and the MQ-9 is one of the most expensive such weapons; a simple first-person-view drone can cost as little as $500. In Ukraine, a team of ten such drones can disable a Russian tank worth $10 million. (Over the past few months, more than two-thirds of the Russian tanks that Ukraine has knocked out have been destroyed by such drones.) That affordability could allow countries to send out swarms of drones - some for surveillance, some for attack - without worrying about depleting their stocks. Such swarms could overwhelm older air defence systems, which were not designed to shoot down hundreds of objects at once. And even when the defences prevail, the cost of defending against a swarm will greatly exceed the attacker's cost of mounting it. Iran's massive drone and missile strike against Israel in April cost at most $100 million, but the US and Israeli interception effort cost more than $2 billion.
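To make the cost asymmetry concrete, the short sketch below runs the numbers cited in this paragraph. It is only an illustration: the dollar figures and the ten-drone team come from the article itself, while the exchange_ratio helper is an assumption introduced for the example.

```python
# A rough sketch of the cost-exchange arithmetic described above.
# Dollar figures are the ones cited in this article; the helper
# function is an illustrative assumption, not a reported model.

FPV_DRONE_COST = 500                  # cheap first-person-view drone, USD
TANK_VALUE = 10_000_000               # approximate value of a Russian tank, USD
APRIL_ATTACK_COST = 100_000_000       # estimated cost of Iran's April strike, USD
APRIL_DEFENCE_COST = 2_000_000_000    # estimated US/Israeli interception cost, USD


def exchange_ratio(defender_cost: float, attacker_cost: float) -> float:
    """Return how many dollars the defender spends or loses per attacker dollar."""
    return defender_cost / attacker_cost


if __name__ == "__main__":
    drone_team_cost = 10 * FPV_DRONE_COST  # ten FPV drones = $5,000
    print(f"Ten FPV drones vs. one tank: {exchange_ratio(TANK_VALUE, drone_team_cost):,.0f} to 1")
    print(f"April strike vs. interception: {exchange_ratio(APRIL_DEFENCE_COST, APRIL_ATTACK_COST):,.0f} to 1")
```

On those figures, the defender's losses exceed the attacker's costs by roughly 2,000 to 1 in the tank example and about 20 to 1 in the April exchange.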

The availability of these weapons will, of course, make attacks much easier to mount - and will in turn empower cash-strapped non-state actors. In 2016, Islamic State (ISIS) fighters used low-cost drones to counter US-backed advances on the Syrian city of Raqqa and the Iraqi city of Mosul, dropping grenade-sized munitions from the sky and making it difficult for the Syrian Democratic Forces to hold their positions. Today, Iranian-backed militants are using drones to attack US air bases in Iraq, and the Houthis, the militant group that controls much of Yemen, have been sending drones to strike ships in the Red Sea - attacks that have tripled the cost of shipping from Asia to Europe. Other groups may soon join in: Hezbollah and al-Qaeda in the Middle East could launch more regional attacks, as could Boko Haram in Nigeria and al-Shabaab elsewhere in Africa.

Drones are also helping groups outside the Middle East and Africa. In Myanmar, a coalition of pro-democracy and ethnic militias is using retooled drones to fight the military junta's once-feared air force; the coalition now controls more than half of the country's territory. Ukraine, too, has used drones to great effect, especially in the first year of the war.

Drones could also help Taiwan in the event of a Chinese amphibious assault. Although Beijing is unlikely to launch a full-scale attack on the island in the next few years, Chinese President Xi Jinping has ordered his country's military to be capable of invading Taiwan by 2027. To stop such an invasion, Taiwan and its allies would have to hit a huge number of incoming assault ships within a very short period - and unmanned systems on land, at sea and in the air may be the only way to do so effectively.

As a result, Taiwan's allies will have to adapt the weapons used in Ukraine to a new kind of battlefield. Unlike the Ukrainians, who have fought primarily on land and in the air, the Taiwanese would rely on underwater drones and autonomous sea mines that can move quickly into battle, and their aerial drones would need to fly longer missions over larger stretches of ocean. Western governments are already working to develop such drones, and as soon as the new models are ready, Taiwan and its allies should mass-produce them.

No country is fully prepared for the wars of the future. None has begun producing the necessary hardware for robotic weapons at scale, and none has created the software needed to fully power automated weapons. But some countries are further along than others - and, unfortunately, America's adversaries are in many ways in the lead. Russia, drawing on its experience in Ukraine, has greatly increased its drone production and is now using unmanned vehicles to great effect on the battlefield. China dominates the global commercial drone market: the Chinese company DJI accounts for about 70% of the world's commercial drone production. And because of China's authoritarian structure, the Chinese military has proven particularly adept at pushing through changes and adopting new concepts. One of these, called "multi-domain precision warfare," involves the People's Liberation Army using advanced intelligence, reconnaissance and other new technologies to coordinate firepower.

When it comes to AI, the United States still has the best systems and spends the most on them, but China and Russia are catching up fast. Washington has the resources to stay ahead; even if it maintains that lead, however, it may struggle to overcome the bureaucratic and industrial obstacles to getting its inventions onto the battlefield. As a result, the US military risks fighting a war in which its first-class training and superior conventional weaponry are no longer enough to prevail. American troops, for example, are not fully prepared to operate on a battlefield where their every move can be observed and where drones hovering overhead can target them within moments. That inexperience would be particularly dangerous on open battlefields like those of Ukraine, elsewhere in Eastern Europe or in the vast expanses of the Arctic. The US military would also be especially vulnerable in urban combat, where enemies can more easily cut American lines of communication and where many US weapons are less useful.

Even at sea, the US will be vulnerable to its adversaries' advances. Chinese hypersonic missiles could sink US aircraft carriers before they leave Pearl Harbor. Beijing is already fielding AI-enabled surveillance and electronic warfare systems that could give it a defensive advantage over the US across the Indo-Pacific region. In the air, the capable but expensive F-35 could be overwhelmed by swarms of low-cost drones, as could the heavily armoured Abrams tanks and Bradley fighting vehicles on the ground. Given these unfavourable facts, American military strategists have rightly concluded that the era of "shock and awe" campaigns, in which Washington could crush its adversaries with overwhelming firepower, is over.

To avoid becoming obsolete, the US military must undertake serious reforms, starting with how it acquires software and weapons. The current procurement process is too bureaucratic, too risk-averse and too slow to keep pace with rapidly evolving threats. It relies, for example, on ten-year supply cycles that can lock the Pentagon into specific systems and contracts long after the underlying technology has moved on. Instead, the Pentagon should enter into shorter contracts whenever possible.

Similarly, the US should buy from a wider range of companies than it typically uses. In 2022, Lockheed Martin, RTX, General Dynamics, Boeing and Northrop Grumman received more than 30% of all Defense Department contract funds, while newer arms makers received next to nothing: last year, less than 1% of all Defense Department contracts went to venture-capital-backed companies, which tend to be more innovative than their larger peers. That balance needs to shift. The next generation of small, cheap drones is unlikely to come from traditional defence firms, which are incentivized to produce advanced but expensive equipment. It is more likely to emerge the way it has in Ukraine: through government initiatives that support dozens of small start-ups.

To adapt to the future, however, the US will need to do more than reform the way it buys weapons. It must also change the military's organizational structures and training systems. The Army must make its complex hierarchical chain of command more flexible and give greater autonomy to small, highly mobile units whose leaders are trained and empowered to make important combat decisions. Such units will be more agile - a critical advantage given the fast pace of AI-driven warfare - and less likely to be paralyzed if adversaries cut their lines of communication to headquarters. To be as effective as possible, these units must be connected to new platforms such as drones. (Autonomous systems can also help improve training.) The US Special Forces are a possible model for how these units might operate.

This new era of warfare will have some normative advantages. Advances in precision could reduce indiscriminate aerial bombardment and artillery barrages, and drones could save soldiers' lives in combat. But the civilian casualties in Gaza and Ukraine cast doubt on the idea that conflicts are becoming less deadly overall, especially as fighting moves into urban areas. And the rise of AI warfare opens a Pandora's box of ethical and legal questions. An autocratic state, for example, could easily take AI systems designed to gather battlefield intelligence and turn them against dissidents or political opponents. The Chinese company DJI has been linked to human rights abuses against China's Uyghurs, and the Russian-linked Wagner paramilitary group has helped the Malian military carry out drone strikes against civilians. Nor are these concerns limited to US adversaries: the Israeli military has used an AI program called Lavender, which operates with little human oversight, to identify suspected fighters and target airstrikes on their homes in the densely populated Gaza Strip.

At worst, AI warfare could even threaten humanity. Wargames run with AI models from OpenAI, Meta and Anthropic have found that the models tend to escalate suddenly to kinetic - even nuclear - war more readily than human players do. It does not take much imagination to see how badly things could go wrong if such systems were put in charge. In 1983, a Soviet missile-detection system misread sunlight reflecting off clouds as an incoming nuclear attack; fortunately, the Soviet officer responsible for processing the signal judged the warning to be a false alarm. In the age of AI, there may be no human on hand to question the system's conclusion.

Fortunately, China and the US seem to recognize that they need to cooperate on AI. After their summit in November 2023, US President Joe Biden and his Chinese counterpart, Xi Jinping, pledged to discuss AI risk and safety issues together, and the first round of talks took place in Geneva in May. This dialogue is essential. Even if cooperation between the two superpowers begins on a small scale - perhaps achieving nothing more than a common vocabulary for the use of AI in warfare - it could lay the foundation for something bigger. During the Cold War, an era of great-power rivalry far more intense than today's competition between the US and China, Washington and Moscow managed to build a robust regime of nuclear safeguards. And like Soviet officials then, Chinese officials today have incentives to cooperate with Washington on new arms control measures. The US and China have very different visions of the world, but neither wants terrorists to get hold of dangerous robots, and both may want to keep other countries from acquiring such technologies. Great powers that possess powerful military technology almost always share an interest in keeping it to themselves.

Even if China refuses to cooperate, the United States must ensure that its own military AI is subject to strict controls: AI systems must be able to distinguish between military and civilian targets, and they must remain under human command. The US must continually test and evaluate these systems to confirm that they work as intended in real-world conditions, and it should press other countries - allies and adversaries alike - to adopt similar safeguards. If they refuse, Washington and its partners should use economic restrictions to limit their access to military AI. The next generation of autonomous weapons must be built in accordance with liberal values and universal respect for human rights - and that requires aggressive US leadership.

War is hideous, brutal and often far too long. It is an illusion to think that technology will change the fundamentally human nature of conflict. But the character of war is changing rapidly and profoundly, and the US must change and adapt with it - faster than its adversaries do. Washington will not get everything right; it just has to make fewer mistakes than its enemies. | BGNES

------------------

Mark A. Milley and Eric Schmidt, Foreign Affairs