First "AI War": Israel Used World's First AI-Guided Swarm Of Combat Drones In Gaza Attacks
A May 12 Israeli strike on Gaza may have been guided by AI. Image Credit: Nick_Raille_07/Shutterstock.com
In the ongoing conflict between Israel and the Occupied Palestinian Territory, the Israel Defense Forces (IDF) has deployed artificial intelligence (AI) and supercomputers to identify strike targets in what it is calling the first AI war. In May this year, the IDF used a swarm of AI-guided drones and supercomputing to comb through data and identify new targets within the Gaza Strip. It's thought this is the first time a swarm of AI drones has been used in combat.
The use of AI in drone strikes has surged in warzones, with a recent UN report revealing that forces in Libya launched an autonomous weaponized drone attack on Haftar-affiliated forces last year, the first time an AI-guided drone identified and possibly attacked human targets without human input. Now, the technology appears to have found significant use in the Israel-Gaza conflict, which reportedly saw over 4,400 rockets fired into Israel and 1,500 strikes into Gaza over 11 days of intense fighting in May.
Using AI effectively in war requires a lot of information. The machine learning systems need to be fed data collected from satellites, aerial reconnaissance vehicles, and years of ground intelligence. With that, they can identify targets and predict when and where enemy attacks may occur.
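To make that idea concrete, here is a purely illustrative sketch of how a model could be fed fused intelligence features and asked to rank candidate sites for human review. Nothing in it reflects the IDF's actual systems; the feature names, the data, and the choice of scikit-learn's RandomForestClassifier are all assumptions for demonstration.

```python
# Toy illustration only: scoring candidate locations from fused intelligence
# features. All data and feature names are invented for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features per candidate site: satellite change-detection score,
# aerial-imagery activity score, signal-intercept volume, historical reports.
X_train = rng.random((500, 4))
# Invented labels: 1 = previously confirmed military site, 0 = not.
y_train = (X_train @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.1 * rng.random(500) > 0.55).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new candidate sites and rank them for human analysts to review.
candidates = rng.random((10, 4))
scores = model.predict_proba(candidates)[:, 1]
for rank, idx in enumerate(np.argsort(scores)[::-1], start=1):
    print(f"candidate {idx}: score {scores[idx]:.2f} (rank {rank})")
```

The point of the sketch is only that such a system outputs ranked scores for analysts to act on, not firing decisions.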
According to the IDF, AI has been utilized heavily over the last two years to pinpoint suspected Hamas locations and to strike strategic targets to remove missile launching sites. They claim it has vastly reduced the length of fighting by sorting through information at a far higher rate than human analysts could. Footage of Israel's high-tech rocket shield Iron Dome in action also went viral in May, as it intercepted rockets mid-air before they could reach their intended targets.
The origins of these algorithms lie in Israel’s Unit 8200, an Intelligence Corps unit of the IDF that specializes in code decryption and signal intelligence. Reportedly, Unit 8200 created multiple algorithms that used geographical, human, and signal intelligence to pinpoint strike targets, which were then passed to commanders to order strikes.
Without knowing more specific details about the drone swarm's capabilities, it's difficult to gauge how significant this development is. However, the increasing use of AI-guided drones is a concern for many, including the UN Security Council and Human Rights Watch, the coordinator of the Campaign to Stop Killer Robots, which is calling for a preemptive ban on fully autonomous weapons.
"The systems used in this case probably fall quite far short of the large dynamic, intelligent swarms that could someday have a highly disruptive effect on warfare," Arthur Holland of the United Nations Institute for Disarmament Research told New Scientist. "But if confirmed, they are certainly a notch up in the incremental growth of autonomy and machine-to-machine collaboration in warfare."
The U.S. says humans will always be in control of AI weapons. But the age of autonomous war is already here.
The Pentagon says a ban on AI weapons isn’t necessary. But missiles, guns and drones that think for themselves are already killing people in combat, and have been for years.
Picture a desert battlefield, scarred by years of warfare. A retreating army scrambles to escape as its enemy advances. Dozens of small drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, come buzzing down from the sky, using cameras to scan the terrain and onboard computers to decide on their own what looks like a target. Suddenly they begin divebombing trucks and individual soldiers, exploding on contact and causing even more panic and confusion.
This isn’t a science fiction imagining of what future wars might be like. It’s a real scene that played out last spring as soldiers loyal to the Libyan strongman Khalifa Hifter retreated from the Turkish-backed forces of the United Nations-recognized Libyan government. According to a U.N. group of weapons and legal experts appointed to document the conflict, drones that can operate without human control “hunted down” Hifter’s soldiers as they fled.
The U.S., Russia and China say a ban on AI weapons is unnecessary. But a growing number of activists and international allies are pushing for restrictions. (Jonathan Baran/The Washington Post)
Drones have been a key part of warfare for years, but they’ve generally been remotely controlled by humans. Now, by cobbling together readily available image-recognition and autopilot software, autonomous drones can be mass-produced on the cheap.
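As a rough illustration of that "cobbling together," the skeleton below glues a stand-in object detector to a stand-in autopilot command. Neither function corresponds to a real library or drone API, and the sketch deliberately stops short of any autonomous engagement.

```python
# Illustrative skeleton only: how commodity image recognition and autopilot
# components could be glued together. The detector and autopilot functions
# below are placeholders, not real APIs.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    bearing_deg: float

def detect_objects(frame) -> list[Detection]:
    # Placeholder for an off-the-shelf detection model; here it just
    # returns canned output.
    return [Detection(label="vehicle", confidence=0.91, bearing_deg=12.0)]

def autopilot_steer(bearing_deg: float) -> None:
    # Placeholder for a flight-controller command.
    print(f"steering toward bearing {bearing_deg:.1f} degrees")

def main() -> None:
    frames = [object()] * 3  # stand-in for a camera feed
    for frame in frames:
        for det in detect_objects(frame):
            if det.label == "vehicle" and det.confidence > 0.8:
                autopilot_steer(det.bearing_deg)
                # In this sketch, any further action requires a human operator.
                print("flagged for human review; no autonomous engagement")

if __name__ == "__main__":
    main()
```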
Efforts to enact a total ban on lethal autonomous weapons, long demanded by human rights activists, are now supported by 30 countries. But the world’s leading military powers insist that isn’t necessary. The U.S. military says concerns are overblown and humans can effectively control autonomous weapons, while Russia’s government says true AI weapons can’t be banned because they don’t exist yet.
But the facts on the ground show that technological advancements, coupled with complex conflicts like the Syrian and Libyan civil wars, have created a reality where weapons that make their own decisions are already killing people.
“The debate is very much still oriented towards the future,” said Ingvild Bode, an autonomous weapons researcher at the University of Southern Denmark. “We should take a much closer look at what is already going on.”
Libya wasn’t the only place drones that can kill autonomously were used last year. Turkey has used the same quadcopters to patrol its border with Syria. When Azerbaijan invaded Armenian-occupied territory in September, it sent in both Turkish- and Israeli-made “loitering munitions” — drones that can autonomously patrol an area and automatically divebomb enemy radar signals. These weapons look like smaller versions of the remote-controlled drones that have been used extensively by the U.S. military in Iraq, Afghanistan and other conflicts. Instead of launching missiles through remote control, though, loitering munitions have a built-in explosive and destroy themselves on impact with their target.
Since they have both remote-control and autonomous capability, it’s impossible to know from the outside whether humans made the final call to bomb individual targets. Either way, the drones devastated Armenia’s army, and the war ended two months later with Azerbaijan gaining huge swaths of territory.
These kinds of weapons are moving firmly into the mainstream. Today, there are dozens of projects by multiple governments to develop loitering munitions. Even as countries like the United States, China and Russia participate in discussions about a treaty limiting autonomous weapons, they’re racing ahead to develop them.
“The advanced militaries are pushing the envelope of these technologies,” said Peter Asaro, a professor at the New School in New York and a co-founder of the International Committee for Robot Arms Control, which advocates for stricter rules around lethal autonomous weapons. “They will proliferate rapidly.”
Over the past decade, cheaper access to computers that can crunch massive data sets in a short time has allowed researchers to make huge breakthroughs in designing computer programs that pull insights from large amounts of information. AI advances have led to machines that can write poetry, accurately translate languages and potentially help scientists develop new medicines.
But debates about the dangers of relying more on computers to make decisions are raging. AI algorithms are only as good as the data sets they were trained on, and studies have shown facial recognition AI programs are better at identifying White faces than Black and Brown ones. European lawmakers recently proposed strict new rules regulating the use of AI.
Companies including Google, Amazon, Apple and Tesla have poured billions of dollars into developing the technology, and critics say AI programs are sometimes being deployed without full knowledge of how they work and what the consequences of widespread use could be.
Some countries, such as Austria, have joined the call for a global ban on autonomous weapons, but U.S. tech and political leaders are pushing back.
In March, a panel of tech luminaries including former Google chief executive Eric Schmidt; Andy Jassy, then head of Amazon Web Services and now Amazon's chief executive; and Microsoft chief scientist Eric Horvitz released a study on the impact of AI on national security. The 756-page final report, commissioned by Congress, argued that Washington should oppose a ban on autonomous weapons because it would be difficult to enforce and could stop the United States from using weapons it already has in its arsenal.
“It may be impossible to define the category of systems to be restricted in such a way that provides adequate clarity while not overly constraining existing U.S. military capabilities,” the report said.
In some places, AI tech like facial recognition has already been deployed in weapons that can operate without human control. As early as 2010, the arms division of South Korean tech giant Samsung built autonomous sentry guns that use image recognition to spot humans and fire at them. Similar sentry guns have been deployed by Israel on its border with the Gaza Strip. Both governments say the weapons are controlled by humans, though the systems are capable of operating on their own.
But even before the development of facial recognition and super-fast computers, militaries have turned to automation to gain an edge. During the Cold War, both sides developed missile defense systems that could detect an enemy attack and fire automatically.
The use of these weapons has already had deadly effects.
In March 2003, just days after the invasion of Iraq by the United States and its allies began, British air force pilot Derek Watson was screaming over the desert in his Tornado fighter jet. Watson, a squadron commander, was returning to Kuwait in the dead of night after bombing targets in Baghdad. Another jet, crewed by Kevin Main and Dave Williams, followed behind.
Twenty thousand feet below, a U.S. Army Patriot missile battery’s computer picked up one of the two jets, and decided it was an enemy missile flying straight down toward it. The system flashed alerts in front of its human crew, telling them they were in danger. They fired.
Watson saw a flash and immediately wrenched his plane to the right, firing off flares meant to distract heat-seeking missiles. But the missile wasn’t targeting him. It shot up and slammed into Main and Williams’s plane, killing them before they had time to eject, a Department of Defense investigation later concluded.
“It’s not something I’ll ever forget,” Watson, who left the Royal Air Force in the mid-2000s and is now a leadership coach, recounted in an interview recently. “As a squadron commander, they were my guys.”
Patriot missile crews were warned about operating in autonomous mode, but it took another friendly-fire incident almost two weeks later, when the system shot down and killed U.S. Navy F-18 pilot Nathan Dennis White, for strict rules to be put in place that effectively stopped the missile batteries from operating for the remainder of the war.
Weapons like the Patriot usually involve a computer matching radar signatures against a database of planes and missiles, then deciding whether the object is a friend or foe. Human operators generally make a final call on whether to fire, but experts say the stresses of combat and the tendency to trust machines often blurs the line between human and computer control.
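In stripped-down form, and with entirely invented signatures and thresholds rather than anything drawn from a real air-defense system, that matching step might look like the following sketch, which still leaves the final call to an operator.

```python
# Illustrative only: matching a detected radar track against a database of
# known aircraft and missiles, then recommending friend/foe/unknown.
# Signatures and thresholds are invented; real IFF systems are far more complex.
KNOWN_SIGNATURES = {
    "friendly_fighter": {"speed_mps": 250, "altitude_m": 6000, "rcs_m2": 5.0},
    "enemy_ballistic_missile": {"speed_mps": 1500, "altitude_m": 12000, "rcs_m2": 0.5},
}

def signature_distance(track: dict, reference: dict) -> float:
    # Normalized difference across the invented signature features.
    return sum(abs(track[k] - reference[k]) / reference[k] for k in reference)

def classify(track: dict, threshold: float = 0.5) -> str:
    best_name, best_dist = min(
        ((name, signature_distance(track, ref)) for name, ref in KNOWN_SIGNATURES.items()),
        key=lambda item: item[1],
    )
    return best_name if best_dist < threshold else "unknown"

track = {"speed_mps": 260, "altitude_m": 5800, "rcs_m2": 4.5}
print(f"system recommendation: {classify(track)}")
print("operator decision required before any engagement")
```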
“We often trust computer systems; if a computer says I advise you to do this, we often trust that advice,” said Daan Kayser, an autonomous weapons expert at Dutch peace-building organization PAX. “How much is the human still involved in that decision-making?”
The question is key for the U.S. military, which is charging ahead on autonomous weapons research but maintains that it won’t ever outsource the decision to kill to a machine.
In 2012, the Defense Department issued guidelines for autonomous weapons, requiring them “to allow commanders and operators to exercise appropriate levels of human judgment.”
Though a global, binding treaty restricting autonomous weapons looks unlikely, the fact that governments and weapons companies are stressing that humans will remain in control shows that awareness around the risks is growing, said Mary Wareham, a Human Rights Watch director who for years led the Campaign to Stop Killer Robots, an international effort to limit autonomous weapons.
And just like land mines, chemical weapons and nuclear bombs, not every country needs to sign a treaty for the world to recognize using such weapons goes too far, Wareham said. Though the United States has refused to sign on to a 2010 ban against cluster munitions, controversy around the weapons led U.S. companies to voluntarily stop making them.
Still, the pandemic has slowed those efforts. A meeting in Geneva scheduled for the end of June to get discussions going again was recently postponed.
The U.S. and British militaries both have programs to build “swarms” of small drones that operate as a group using advanced AI. The swarms could be launched from ships and planes and used to overwhelm a country’s defenses before regular troops invade. In 2017, the Pentagon asked for proposals for how it could launch multiple quadcopters in a missile, deposit them over a target and have the tiny drones autonomously find and destroy targets.
“How can you control 90 small drones if they’re making decisions themselves?” Kayser said. Now imagine a swarm of millions of drones.
The U.S. military has also experimented with putting deep-learning AI into flight simulators, and the algorithms have shown they can match the skills of veteran human pilots in grueling dogfights. The United States says AI pilots will only be used as “wingmen” to real humans when they’re ready to be deployed.
Similar to other areas where artificial intelligence technology is advancing, it can be hard to pinpoint exactly where the line between human and machine control lies.
“Just like in cars, there is this spectrum of functionality where you can have more autonomous features that can be added incrementally that can start to, in some cases, really blur the lines,” said Paul Scharre, a former Special Operations soldier and vice president and director of studies at the Center for a New American Security. He also helped draft the Pentagon’s guidelines on autonomous weapons.
Autonomy slowly builds as weapons systems get upgraded over time, Scharre said. A missile that used to home in on a single enemy might get a software upgrade allowing it to track multiple targets at once and choose the one it’s most likely to hit.
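As an illustration of that kind of upgrade, rather than of any fielded weapon, the sketch below ranks several tracked targets by a crude, invented estimate of hit probability and selects the highest-scoring one.

```python
# Illustrative only: choosing among multiple tracked targets by a crude,
# invented estimate of intercept probability. Not based on any real weapon.
import math

targets = [
    {"id": "T1", "range_km": 12.0, "closing_speed_mps": 300.0},
    {"id": "T2", "range_km": 5.0,  "closing_speed_mps": 150.0},
    {"id": "T3", "range_km": 20.0, "closing_speed_mps": 600.0},
]

def hit_probability(target: dict) -> float:
    # Toy model: closer and slower targets are assumed easier to hit.
    range_factor = math.exp(-target["range_km"] / 10.0)
    speed_factor = 1.0 / (1.0 + target["closing_speed_mps"] / 500.0)
    return range_factor * speed_factor

best = max(targets, key=hit_probability)
for t in targets:
    print(f"{t['id']}: estimated hit probability {hit_probability(t):.2f}")
print(f"selected target: {best['id']}")
```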
Technology is making weapons smarter, but it’s also making it easier for humans to control them remotely, Scharre said. That gives humans the ability to stop missiles even after they’re launched if they realize after the fact they might hit a civilian target.
Still, the demand for speed in war will inevitably push militaries to offload more decisions to machines, especially in combat situations, Kayser said. It’s not hard to imagine opposing algorithms responding to each other faster than humans can monitor what’s happening.
“You saw it in the flash crashes in the stock market,” Kayser said. “If we end up with this warfare going at speeds that we as humans can’t control anymore, for me that’s a really scary idea. It’s something that’s maybe not even that unrealistic if these developments go forward and aren’t stopped.”
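The flash-crash analogy can be made concrete with a toy simulation: two automated response policies that each escalate whenever the other's last move crosses a threshold will spiral upward within a few machine-speed steps. Everything below is invented for illustration and describes no real doctrine.

```python
# Toy simulation: two automated response policies escalating against each other.
# Purely illustrative of machine-speed feedback loops.
def respond(own_level: int, opponent_level: int, trigger: int = 1) -> int:
    # Policy: if the opponent's last action meets the trigger, escalate by one.
    return own_level + 1 if opponent_level >= trigger else own_level

a_level, b_level = 0, 1  # side B starts with a minor provocation
for step in range(6):
    a_level = respond(a_level, b_level)
    b_level = respond(b_level, a_level)
    print(f"step {step}: A={a_level}, B={b_level}")
# Within a few iterations both sides are far above the initial provocation,
# faster than a human review cycle could intervene.
```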
The future of war and deterrence in an age of autonomous weapons
Artificial intelligence and autonomous systems will significantly alter the future battlefield and challenge strategists to come up with new models of deterrence.
Innovation in the field of emerging technologies – broadly encompassing developments such as artificial intelligence (AI), robotics, drones, quantum computing, 3D printing, biotech – is evolving at breakneck speed with the potential to have far-reaching consequences on everything from governance and commerce to geopolitics.
When it comes to warfare, many of these critical technologies possess the power to completely upend the terms of human conflict and alter future battlefields.
“AI and robotics will smash the status quo that exists in the world today,” geopolitical futurist Abishur Prakash told TRT World, adding that new technologies will “reduce the gap between advanced military powers and the rest of the world”.
With traditional concepts of state power becoming increasingly intertwined with national expertise and investment in AI, a global arms race is already underway, with the US and China at the forefront.
As wider adoption accelerates, conventional notions around deterrence are set to come into question too. What happens to deterrence and escalation when decisions can be made at machine speeds and are carried out by forces that do not risk human lives?
“We will need to rethink the central tenets of deterrence. AI and autonomous systems challenge the way that nuclear and non-nuclear operations are conducted, as well as the way these systems can be held vulnerable to attack,” says Mikhail Sebastian, a London-based political risk analyst specialising in cybersecurity and digital diplomacy.
“At the same time, they offer a new suite of options for deterring nuclear attacks.”
Prakash warns we’ve now reached a point of no return.
“We are exiting the era where the most damaging behaviour could be deterred. Now, as technology gives nations and organisations new capabilities, governments are faced with threats they cannot stop or limit,” he says.
“They can only be managed.”
Autonomous battlefields
If there is one military technology proven to be a gamechanger thus far, it’s drones.
After gunpowder and nuclear weapons, many have referred to automated killer robots as the “third revolution in warfare”.
Late last year amid the pandemic, the Second Nagorno-Karabakh War between Azerbaijan and Armenia amounted to a showcase for autonomous weapons – and provides a glimpse of the battlefield of the future.
Azerbaijan deployed a range of drones, purchased from Israel and Turkey, to rout the otherwise conventionally superior Armenian army in a short space of time. Azeri forces used to devastating effect Israeli-made ‘Harop’ loitering munitions, designed to hover high above the battlefield while waiting to be assigned a target to crash their explosive payload into, earning them the moniker “Kamikaze drones”.
Video posted by Insider Paper (@TheInsiderPaper) on October 1, 2020, shows an Azerbaijani kamikaze drone hitting a bus in Armenia (pic.twitter.com/9xNA2KvBN4).
Azerbaijan spent years investing in loitering munitions and accumulated a stock of over 200, while Armenia had only one domestically made model with a limited range. Described as the first war won by autonomous weapons, the conflict prompted an uptick in interest from national armies in acquiring unmanned aerial systems shortly after.
In the US, a new report from the National Security Commission on AI discusses how autonomous technologies are enabling a new paradigm in warfighting and urges massive amounts of investment in the field.
Countries are intensely competing to build or purchase cutting-edge drone systems: China and Russia intend to pursue the development of autonomous weapons and are investing heavily in R&D. The UK’s new defence strategy puts AI front and centre, as does Israel.
And a much more transformative drone technology could be just on the horizon.
Advances in Li-ion batteries have given rise to cheaply made miniature quadcopters. Multiple air forces are now beginning to test networked swarms of drones that can overwhelm radar systems.
Sebastian points out that while on its own a single unmanned and autonomous unit is no match for a fighter jet, when algorithmically linked together a fleet of thousands can conceivably overwhelm larger platforms.
“Once refined, low-cost autonomous drones coordinating their actions at machine speed provide a unique coercive tool that undermines high-cost legacy weapon systems, while potentially augmenting the feasibility of an offensive attack,” he told TRT World.
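One way to picture drones "coordinating their actions at machine speed" is a simple assignment routine that spreads many cheap drones across a defended platform's aim points faster than a crew could react. The greedy assignment below is a generic textbook approach with invented positions, not a description of any real swarm software.

```python
# Illustrative only: greedily assigning many cheap drones to aim points so a
# large platform's defenses are saturated. Generic example, not a real system.
import math
import random

random.seed(0)

# Invented positions: 12 drones and 3 aim points on a defended platform.
drones = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
aim_points = {"radar": (2, 2), "launcher": (5, 8), "bridge": (9, 4)}

def distance(a: tuple, b: tuple) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Greedy assignment: each drone takes the currently least-covered aim point,
# breaking ties by distance, so no single defense can handle them all.
coverage = {name: 0 for name in aim_points}
for drone in drones:
    target = min(aim_points, key=lambda name: (coverage[name], distance(drone, aim_points[name])))
    coverage[target] += 1

print("drones per aim point:", coverage)
```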
During a live demonstration to celebrate India's 73rd Army Day in New Delhi on January 15, 2021, the Indian military showed off a swarm of 75 drones destroying a variety of simulated targets in explosive kamikaze attacks. (Prakash Singh / AFP/Getty Images)
Possibly the scariest development is autonomous quadcopters equipped with computer vision technology that can recognise and kill a specific target, or so-called assassination drones.
“As opposed to other military drone applications, assassin drones don’t have to be confined to the battlefield. They can lurk as an omnipresent threat outside of wartime,” says Sebastian.
Until now, deterrence has primarily involved humans attempting to affect the decision calculus and perceptions of other humans. But what happens when decision-making processes are no longer fully under human control?
‘How does one deter an event that has not happened yet?’
What sets the new technology arms race apart from those of the past is AI’s dual-use nature.
During the Cold War, the development of nuclear weapons was driven by governments and the defence industry. Beyond power generation, there wasn’t much commercial use for nuclear technology.
But that model doesn’t apply anymore.
“The creeping ubiquity of AI means developments in technologies cannot be contained, and they are bound to bleed across the civilian and military realms,” Sebastian notes.
In an article published last year, James Johnson, an assistant professor in the School of Law and Government at Dublin City University, argued that the dual-use and diffuse nature of AI compared to nuclear technology will make arms control efforts problematic.
“When nuclear and non-nuclear capabilities and war-faring are blurred, strategic competition and arms racing are more likely to emerge, complicating arms control efforts,” he wrote.
“In short, legacy arms control frameworks, norms, and even the notion of strategic stability itself will increasingly struggle to assimilate and respond to these fluid and interconnected trends.”
Johnson underscores that what is now referred to as the nascent “fifth wave” of modern deterrence (the “fourth wave” followed the Cold War and continues to the present, coinciding with multipolarity, asymmetric threats and non-state actors) is defined by a conceptual break: the inclusion of non-human agents in deterrence.
It then follows that asymmetric AI capabilities will inform deterrence strategies. To fight autonomous weapons, you need those same weapons – driving actors to adopt these technologies to shore up their defence against autonomous attacks.
The mix of human and artificial agents could affect escalation between actors in the process. In a RAND report, researchers emphasise how widespread AI and autonomous systems could make inadvertent escalation more likely because of “how quickly decisions may be made and actions taken if more is being done at machine, rather than human, speeds.”
Two conflicting sides might equally find it necessary to use autonomous capabilities early to gain a coercive and military advantage to prevent an opponent from gaining the upper hand, raising the possibility of first-strike instability.
These dynamics could have fateful consequences for how wars begin.
“Because of the speed of autonomous systems having to be countered by other autonomous systems, we could find ourselves in a situation where these systems react to each other in a way that’s not predictable,” Sebastian says.
“Before you know it, a rapid escalation leads to a military conflict that wasn’t desirable in the first place.”
Prakash, who is the author of The Age of Killer Robots, believes governments are going to have to rethink deterrence in an era when AI is making military decisions.
“Deterrence has so far revolved around stopping a nation or actor from doing something today. But as nations use technology to predict future events on the world stage – or what I call ‘Algorithmic Foreign Policy’ – a new challenge emerges,” he says.
“How does one deter an event that has not happened yet?”
Prakash adds that because of how integrated and fragile global systems are today, the world is shifting from the threat of being annihilated (nuclear weapons) to the threat of having critical infrastructure targeted.
“Today, a cyber attack that cripples energy, water and supply chains will create as much, if not more, damage,” he argues.
Can a new consensus be achieved?
Given the unpredictability of a new era of armed conflict and AI’s inevitable ubiquity in military applications, what actions could be pursued by policymakers to control the risk of unwanted escalation?
The UN Convention on Certain Conventional Weapons, launched in the 1980s to regulate the use of non-nuclear weapons, has been one avenue. But an effort by the body to ban lethal autonomous weapons systems fell apart in 2019, when resistance from the US, Russia, South Korea, Australia, and Israel thwarted any consensus that could have led to a binding decision.
“The old approach of arms control and treaties don’t apply anymore to these systems. We’re talking about software not hardware,” says Sebastian. “Before it was about allocating a certain number of systems. You can’t do that with AI-enabled systems.”
Much like how it was done for nuclear weapons, fresh international treaties must be forged for new weapons technologies.
“We might end up with rules and norms that are more focused on specific use-cases than systems or technologies. For example, there might be an agreement to use certain capabilities only in a specific context or only against machines.”
But powerful states are often sceptical of multilateral forums regulating technologies and narrowing their ability to gain strategic advantage. For now, the prospect for any transnational solution is nowhere on the horizon.
“Agreeing to or implementing any framework will not be easy, especially when there’s a lack of trust between great powers,” adds Sebastian.
Furthermore, the attempt to achieve consensus around AI is likely to highlight moral asymmetries and introduce several dilemmas that could determine the future of deterrence.
In their paper New Technologies and Deterrence: Artificial Intelligence and Adversarial Behaviour, Alex Wilner and Casey Babb claim that while some states might be against providing AI with the right to kill individuals without human intervention, others might not be so hamstrung by those issues.
According to Wilner and Babb, ethical concerns might end up playing a pivotal role in influencing the development of AI and the nature of alliance politics.
“Allies with asymmetric AI capabilities, uneven AI governance structures, or different AI rules of engagement, may find it difficult to work together towards a common coercive goal,” they wrote.
“Allies who differ on AI ethics might be unwilling to share useful training data or to make use of shared intelligence derived from AI. Without broader consensus, then, AI may weaken political cohesion within alliances, making them less effective”.
Source: TRT World