Modern battlefields have become a breeding ground for experimental AI weaponry

As conflicts rage across Ukraine and the Middle East, the modern battlefield has become a testing ground for AI-powered warfare.

From autonomous drones to predictive targeting algorithms, AI systems are reshaping the nature of armed conflict.

The US, Ukraine, Russia, China, Israel, and others are locked in an AI arms race, each vying for technology supremacy in an increasingly volatile geopolitical landscape.

As these new weapons and tactics emerge, so do their consequences.

We now face critical questions about warfare’s future, human control, and the ethics of outsourcing life-and-death decisions to machines.

AI might have already triggered military escalation

Launched in 2017, Project Maven is the Pentagon’s primary effort to integrate AI into military operations. It aims to enable real-time identification and tracking of targets from drone footage without human intervention.

While Project Maven is often discussed in terms of analyzing drone camera footage, its capabilities likely extend much further.

According to the non-profit watchdog Tech Inquiry’s research, the AI system also processes data from satellites, radar, social media, and even captured enemy assets. This broad range of inputs is known as “all-source intelligence.”
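In practice, “all-source” fusion means corroborating detections from very different feeds before anything is treated as a confirmed object of interest. The toy Python sketch below illustrates that idea only; the source names, reliability weights, and threshold are invented for illustration and say nothing about how Project Maven is actually built.

```python
from dataclasses import dataclass

# Hypothetical reliability weights per feed; a real system would calibrate these.
SOURCE_WEIGHTS = {"satellite": 0.9, "radar": 0.8, "drone_video": 0.7, "social_media": 0.4}

@dataclass
class Detection:
    source: str          # which feed produced the detection
    location: tuple      # (lat, lon) reported for the object
    confidence: float    # the feed's own 0-1 confidence score

def fuse(detections: list[Detection], threshold: float = 1.0) -> list[tuple]:
    """Group detections falling in the same ~1 km grid cell and sum their
    reliability-weighted confidences; only corroborated cells are returned."""
    scores: dict[tuple, float] = {}
    for d in detections:
        cell = (round(d.location[0], 2), round(d.location[1], 2))
        scores[cell] = scores.get(cell, 0.0) + d.confidence * SOURCE_WEIGHTS.get(d.source, 0.1)
    return [(cell, score) for cell, score in scores.items() if score >= threshold]

# A location seen independently by satellite and radar clears the bar;
# one reported only on social media does not.
feeds = [
    Detection("satellite", (46.50, 30.73), 0.8),
    Detection("radar", (46.50, 30.73), 0.7),
    Detection("social_media", (46.10, 30.20), 0.9),
]
print(fuse(feeds))
```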

In March 2023, a US MQ-9 Reaper drone collided with a Russian fighter jet over the Black Sea, causing the drone to crash.

Shortly before that incident, the National Geospatial-Intelligence Agency (NGA) confirmed using Project Maven’s technology in Ukraine.

Lieutenant General Christopher T. Donahue, commander of the XVIII Airborne Corps, later stated quite plainly of the Ukraine-Russia conflict, “At the end of the day, this became our laboratory.”

Project Maven in Ukraine involved advanced AI systems integrated into the Lynx Synthetic Aperture Radar (SAR) of MQ-9 Reapers. As such, AI may have played a role in the events that led to the drone collision.

On the morning of March 14, 2023, a Russian Su-27 fighter jet intercepted and damaged a US MQ-9 Reaper drone, resulting in the drone crashing into the Black Sea. It marked the first direct confrontation between Russian and US Air Forces since the Cold War, a significant escalation in military tensions between the two nations. Source: US Air Force.

In the aftermath, the US summoned the Russian ambassador to Washington to express its objections, while the US European Command called the incident “unsafe and unprofessional.”

Russia denied any collision occurred. In response, the US repositioned some unmanned aircraft to monitor the region, which Russia protested.

This situation presented the menacing possibility of AI systems influencing military decisions, even contributing to unforeseen escalations in military conflicts.

As Tech Inquiry puts it, “It is worth determining whether Project Maven inadvertently contributed to one of the most significant military escalations of our time.”

Ethical minefields

Project Maven’s performance has been largely inconsistent to date.

According to Bloomberg data cited by the Kyiv Independent, “When using various types of imaging data, soldiers can correctly identify a tank 84% of the time, while Project Maven AI is closer to 60%, with the figure plummeting to 30% in snowy conditions.”

While the ethical implications of using AI to make life-or-death decisions in warfare are deeply troubling, the risk of malfunction introduces an even more chilling aspect to this technological arms race.

It’s not just a question of whether we should use AI to target human beings, but whether we can trust these systems to function as intended in the fog of war.

What happens when civilians nearby are labeled as targets and destroyed autonomously? And what if the drone itself malfunctions, straying into environments it wasn’t trained to operate in?

AI malfunction in this context isn’t merely a technical glitch – it’s a potential catalyst for tragedy on an unimaginable scale. Unlike human errors, which might be limited in scope, an AI system’s mistake could lead to widespread, indiscriminate carnage in a matter of seconds.
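One standard mitigation for exactly this failure mode is a hard interlock that sits outside the learned model: if the aircraft strays beyond its approved operating area or the perception system’s confidence collapses, authority reverts to a human operator or a pre-programmed abort. The sketch below is a generic illustration of that idea with invented bounds and actions, not a description of any fielded system.

```python
# Generic safety-interlock sketch: the autonomy stack keeps control only while
# the vehicle stays inside a pre-approved geofence and the perception model
# reports usable confidence. All bounds and actions here are illustrative.
GEOFENCE = {"lat": (46.0, 47.0), "lon": (30.0, 31.5)}  # hypothetical approved box
MIN_CONFIDENCE = 0.6                                    # below this, defer to a human

def supervise(lat: float, lon: float, model_confidence: float) -> str:
    inside = (GEOFENCE["lat"][0] <= lat <= GEOFENCE["lat"][1]
              and GEOFENCE["lon"][0] <= lon <= GEOFENCE["lon"][1])
    if not inside:
        return "ABORT_RETURN_HOME"        # left the envelope it was cleared for
    if model_confidence < MIN_CONFIDENCE:
        return "HAND_OFF_TO_OPERATOR"     # model is out of its depth; a human decides
    return "CONTINUE_AUTONOMOUS"

print(supervise(46.5, 30.7, 0.9))   # CONTINUE_AUTONOMOUS
print(supervise(48.2, 30.7, 0.9))   # ABORT_RETURN_HOME
print(supervise(46.5, 30.7, 0.3))   # HAND_OFF_TO_OPERATOR
```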

Commitments to slow these developments and keep such weapons in check have already been made, as when 30 countries signed on to US guardrails for military AI.

The US Department of Defense (DoD) also released five “ethical principles for artificial intelligence” for military use, including that “DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

However, recent developments indicate a disconnect between these principles and practice.

First, AI-infused technology is likely already responsible for serious incidents outside its intended remit. Second, the DoD’s generative AI task force involves outsourcing to private companies like Palantir, Microsoft, and OpenAI.

Collaboration with commercial entities that are not subject to the same oversight as government agencies casts doubt on the DoD’s ability to control AI development.

Meanwhile, the International Committee of the Red Cross (ICRC) has initiated discussions on the legality of these systems, particularly concerning the Geneva Convention’s “distinction” principle, which mandates distinguishing between combatants and civilians. 

AI algorithms are only as good as their training data and programmed rules, so they may struggle with this differentiation, especially in dynamic and unpredictable battlefield conditions.
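The tank-recognition figures cited earlier (84% for humans, roughly 60% for the AI, 30% in snow) show exactly this kind of condition-dependent degradation. One common guardrail is to require human review whenever the current conditions are ones the model demonstrably handles poorly; the snippet below is a minimal, hypothetical version of such a policy, with the 95% bar and the policy logic assumed for illustration rather than drawn from any deployed system.

```python
# Illustrative policy: refuse autonomous identification wherever measured
# accuracy for the current conditions falls below a required bar.
MEASURED_ACCURACY = {"clear": 0.60, "snow": 0.30}   # AI figures cited above
HUMAN_BASELINE = 0.84                               # human analyst figure cited above
REQUIRED_FOR_AUTONOMY = 0.95                        # hypothetical bar

def engagement_policy(condition: str) -> str:
    accuracy = MEASURED_ACCURACY.get(condition, 0.0)  # unmeasured conditions get no credit
    if accuracy >= REQUIRED_FOR_AUTONOMY:
        return "autonomous identification permitted"
    return f"human review required (model {accuracy:.0%} vs human {HUMAN_BASELINE:.0%})"

for condition in ("clear", "snow", "sandstorm"):
    print(condition, "->", engagement_policy(condition))
```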

As indicated by the Black Sea drone incident, these fears are real. Yet military leaders worldwide remain bullish about AI-infused war machines. 

Not long ago, an AI-powered F-16 fighter jet out-maneuvered human pilots in a test demo.

US Secretary of the Air Force Frank Kendall, who experienced it firsthand, summed up the inertia surrounding AI military tech: “It’s a security risk not to have it. At this point, we have to have it.”

On the face of it, that’s a grim admission.

Despite millennia of warfare and its devastating consequences, the mere thought of being one step behind ‘the enemy’ – this primal anxiety, perhaps deeply rooted in our psyche – continues to override reason.

Homegrown AI weaponry

In Ukraine, young companies like Vyriy, Saker, and Roboneers are actively developing technologies that blur the tenuous line between human and machine decision-making on the battlefield.

Saker developed an autonomous targeting system to identify and attack targets up to 25 miles away, while Roboneers created a remote-controlled machine gun turret that can be operated using a game controller and a tablet.

Reporting on this new era of AI-powered warfare, the New York Times recently followed Oleksii Babenko, the 25-year-old CEO of drone maker Vyriy, as he showcased his company’s latest creation.

In a real-life demo, Babenko rode a motorbike at full pelt as the drone tracked him, free from human control. The reporters watched the scene unfold on a laptop screen.

The advanced quadcopter eventually caught him, and in the reporters’ words, “If the drone had been armed with explosives, and if his colleagues hadn’t disengaged the autonomous tracking, Mr. Babenko would have been a goner.”

Like the war in Ukraine, the Israel-Palestine conflict is proving a hotbed for military AI research.

Experimental AI-embedded or semi-autonomous weapons include remote-controlled quadcopters armed with machine guns and missiles, and the “Jaguar,” a semi-autonomous robot used for border patrol.

The Israeli military has also created AI-powered turrets that establish what they term “automated kill-zones” along the Gaza border.

Jaguar’s autonomous nature is given away by its turret and mounted camera.

Perhaps most concerning to human rights observers are Israel’s automated target generation systems. “The Gospel” is designed to identify infrastructure targets, while “Lavender” focuses on generating lists of individual human targets.

Another system, ominously named “Where’s Daddy?”, is reportedly used to track suspected militants when they are with their families.

The left-wing Israeli news outlet +972, reporting from Tel Aviv, concluded that these systems almost certainly led to high civilian casualties.

The path forward

As military AI technology advances, assigning responsibility for mistakes and failures becomes an intractable task – a spiraling moral and ethical void we’ve already entered. 

How can we prevent a future where killing is more automated than human, and accountability is lost in an algorithmic fog?

Current events and rhetoric fail to inspire caution. 
