
    Automated Armageddon: Can AI avoid becoming the next military blunder?

By Christopher McFadden



‘Front Lines,’ Christopher McFadden’s bi-weekly column, examines warfare past, present, and future. McFadden analyzes cutting-edge military tech and global defense policies, highlighting the forces shaping our world’s security landscape.

History is jam-packed with examples of arrogance, hubris, or excessive faith in technology leading to military disasters. Yet despite the lessons of history, military planners will likely make many more such blunders in the future. So, could all the hullabaloo around artificial intelligence (AI) be their next catastrophic miscalculation?

Placing excessive faith in a promising technology would not be the first such mistake, and it certainly won’t be the last. For instance, after World War I, France embarked on the Maginot Line, an ambitious defense project deemed impervious to German attack.

However, its designers were so overconfident that they could never imagine the line being breached, let alone bypassed entirely by an attack through, say, Belgium.

True to form, that is exactly what the forces of the Third Reich did in May 1940. Another famous case is Britain’s overconfidence in its sonar during the Battle of the Atlantic.

These are but a few of many examples of overconfidence in a military’s abilities or technology to secure victory on the battlefield. In the Vietnam War, the US military’s overreliance on superior firepower and technology failed against the guerrilla tactics of the Viet Cong, leading to a protracted and costly conflict.

    Similarly, in 1942, Japan’s overconfidence in its naval superiority resulted in a critical defeat at the Battle of Midway, when US forces effectively utilized intelligence and innovative tactics.

    AI is not all that

    AI, or automation, has been part of military technology for many decades. The United States first employed AI in 1991 with DARPA’s Dynamic Analysis and Replanning Tool (DART).

Designed to help with logistical planning, it proved promising and saved millions of dollars in costs almost instantly. It was so successful that, by 1995, it had reportedly paid for 30 years of DARPA research. DART also proved pivotal in solving difficult logistical challenges in conflicts like the Gulf War.

Things were looking rosy for AI integration in the military machine. However, it had limitations and relied heavily on human oversight to “sense check” its conclusions. While no longer in use, it is the granddaddy of the US’ current automated logistics tools, like the Resource Description and Access System (RDA).


    Artist’s impression of a manned warfighter flying alongside drone wingmen. Source: Airbus

It also, in part, accelerated research into and integration of AI across many other facets of the US military. Today, AI integration is pretty much everywhere, from fully autonomous weapons systems like drones to automated artillery systems and Intelligence, Surveillance, and Reconnaissance (ISR).

    But, this reliance on AI (automation) would eventually lead to disaster (as you likely predicted).

    Computer says “fire”

Just two days into the 2003 US-led invasion of Iraq, an automated US Patriot missile battery prompted US personnel to fire on what was believed to be an Iraqi anti-radiation missile. However, this was a major mistake.

The target in question was a British Royal Air Force Tornado fighter-bomber. The aircraft was destroyed and both its crew killed outright, the first British losses of the conflict and a tragic case of friendly fire.

The tragedy had several contributing causes, including the Tornado’s “identification friend or foe” (IFF) transponder not functioning. Still, the overreliance on automation was ultimately deemed the main cause. This accident, in part, is one of the major reasons that the US Armed Forces, in particular, are so keen to keep a human in the loop for automated weapons systems.

It is a sage decision whose wisdom has been relearned the hard way in recent years. Only last year, a fully autonomous drone apparently decided to bend the rules of engagement for its own benefit.

    Winning at any cost

The drone was given a set of “win” conditions for its mission and decided, independently, that it could meet them by firing upon and “killing” its human operator. The game it was part of awarded points for destroying targets, but each strike needed a human’s go-ahead.

So focused was the drone on racking up points that it deemed killing the operator a justified means of accomplishing its mission. Thankfully, this was just a simulation, but it is likely one of many scenarios in which the hubris of human designers or planners could lead to unexpected (and disastrous) outcomes.

We hope the designers behind such systems are taking a long, hard look at their plans. However, the US is not the only nation “playing with fire” regarding AI, especially where AI is given any decision-making power over more destructive weapons systems.

    China: The mother of all impending AI screw-ups?

Moving our attention to the other side of the globe, we see China investing heavily in AI. In most cases, they are developing systems analogous to U.S. ones (imitation is the sincerest form of flattery), but they are going one stage further.

In June of this year, we reported that China has built and is training what might be the world’s first AI commander. While they claim it is “caged” and can only make decisions in a lab, this is likely a portent of China’s ultimate ambition to build live versions.

    AI-generated image of an AI general planning a battle. Source: Dall-E

Thankfully, at present, the ultimate decision to act kinetically will likely remain the sole prerogative of the Communist Party leadership. Still, Chinese hubris could lead to some unforeseen disaster should the AI commander’s conclusions be taken more seriously than they perhaps should be.

This is especially true given the news in January of this year that China is also training AI to “predict” human adversaries’ actions on the battlefield. China is also very vocal about its apparent success rate in knocking out American assets, like carrier groups, during computer simulations.

Counting their chickens before they’ve hatched, we think. So long as China is just playing with AI for giggles, it is fine, but should they get too overconfident in its abilities, we could be looking down the barrel of an impending AI disaster.

    Only a king shall kill a king

While AI offers promising advancements in military technology, history teaches us that overreliance on any technology can lead to disaster. Military planners must balance technological innovation with caution and learn, or relearn, that all that glitters is not gold.

There is nothing “shinier” today than AI, with every man and his dog playing with it. This is fine for fun things like art or efficiency tools, but we might be playing with fire (figuratively and literally) when it comes to life-or-death decisions. So long as humans maintain oversight and ethical considerations remain front and center, we may be able to “dodge the AI bullet.”

This is a sentiment that, for now at least, senior military officials like U.S. Deputy Defense Secretary Kathleen Hicks share. “By putting our values first and playing to our strengths, the greatest of which is our people, we’ve taken a responsible approach to AI that will ensure America continues to come out ahead,” she said.

    “Only a king shall kill a king,” or rather, “Only the living may kill the living.”

    But that’s just our two cents. Do you agree? Are you worried about the military’s overreliance or overconfidence in AI? Let us know your thoughts.
