Secure AI in the Military: How Mistakes Turn Deadly

August 18, 2022


The first country to fully harness artificial intelligence (AI) for military applications will be the one that leads the world in AI warfare – according to Jack Shanahan, at least. Shanahan, a retired Air Force lieutenant general and the first director of the United States’ Joint Artificial Intelligence Center, is one of many who believe AI is the future of global military engagements.

Some analysts believe AI will go beyond evolutionary changes to warfare, predicting more of a revolutionary effect. AI has changed and will continue changing how countries approach and engage in battle and defense, but the advancements come at significant risk.

 


    Artificial Intelligence in Warfare

    AI has transformed tasks as mundane as making grocery lists and as impactful as robotics-assisted medical procedures. Applications are just as versatile in the military.

    Military organizations use AI to rapidly collect and analyze real-time data from satellites, sensors, aerial vehicles, and other surveillance to detect patterns, predict outcomes, and make better decisions. The Department of Defense (DoD) also uses AI for predictive maintenance of its equipment.

    The other side of the AI spectrum sees autonomous weaponry, self-flying combat planes, and computer-driven decision-making on the battlefield.

    The hope is that AI’s benefits – more accurate civilian detection and reduced need for deploying soldiers into dangerous environments – will outweigh the risk of faulty algorithms, incomplete data sets, and ethical considerations.

    The Data Problem

    AI runs on careful calculations over algorithmic input. To work in variable environments, it needs enormous amounts of data from as many sources as possible, such as machine sensors and cameras.

    When those data sets fall short, or something goes wrong in the AI’s calculations, the result can be a devastating chain reaction: communication failures, misidentification of enemies and civilians, faulty autonomous systems, and more.

    Incomplete, incorrect, and inconsistent data render AI-powered weapons, vehicles, and systems useless and often dangerous. Much of the problem comes from the gap between laboratory-built data sets created in controlled environments and the unpredictable, ever-changing conditions in real-world combat environments.

    Lab-collected data for AI training and development is organized, complete, fact-checked, and consistent. The real world is not so simple. Even the most advanced and complex AI algorithms can never fully prepare for the messier scenarios they meet in reality, largely because of “data drift” – the mismatch that appears when the characteristics of an environment shift away from the data a model was trained on, turning once-accurate inputs into misleading ones.

    Examples include:

    • Wartime changes to the physical environment, like accidental or unexpected explosions.
    • Sensors and other critical inputs blocked by smoke, dust, and contaminants.
    • Concealment or camouflage that obscures what a sensor sees, producing inaccurate data.
    • Sensor damage or destruction from impact or environmental wear.
    • Previously unknown tactics the AI could not have been trained to anticipate, derailing its entire algorithmic process.
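
    To make the idea of data drift concrete, below is a minimal, hypothetical monitoring sketch in Python: it compares the distribution of a single sensor feature observed in the field against the distribution that feature had in the training data, and raises a flag when the two no longer match. The feature, thresholds, and numbers are illustrative assumptions, not drawn from any real military system.

```python
# Minimal sketch of data-drift monitoring, assuming we keep a stored sample of a
# sensor feature from training ("lab" conditions) and a window of live readings.
# The feature name and thresholds are illustrative, not from any real system.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Simulated "lab" distribution of a sensor feature (e.g., infrared intensity).
training_readings = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Simulated field readings after conditions shift (smoke, dust, new terrain).
field_readings = rng.normal(loc=0.8, scale=1.6, size=500)

# Two-sample Kolmogorov-Smirnov test: a small p-value means the live data no
# longer looks like the data the model was trained on, i.e., likely drift.
statistic, p_value = ks_2samp(training_readings, field_readings)

DRIFT_ALPHA = 0.01  # illustrative significance threshold
if p_value < DRIFT_ALPHA:
    print(f"Possible data drift detected (KS={statistic:.3f}, p={p_value:.2e}); "
          "model outputs should be treated with extra caution.")
else:
    print("Live data still resembles training data.")
```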

    Such variable data is dangerous even in non-autonomous applications, including human-assisted machines. AI also has what scientists refer to as a “black box” – a name for the opaque deep learning neural networks whose inner workings we don’t fully understand. We can observe what decisions AI makes, but in many applications we have very little understanding of why it made them.

    Without a full understanding of such a complex system, how can soldiers in the heat of battle realistically rely on machine judgment for life-and-death decisions? They’re often left with two options: trust AI recommendations without fully understanding them, or take critical combat seconds to question and reconsider the information they’re presented with.

    Many decisions seem straightforward in theory, but at the end of the day, it is human soldiers who are responsible for upholding international law, limiting civilian harm, and making decisions that control human life.

    What Could Go Wrong?

    Military organizations are only at the tip of what will be a near-endless iceberg of AI possibilities, but numerous examples already give us valuable insights into what could go wrong.

    Unreliable AI

    Incomplete and inaccurate data sets inevitably lead to unreliable AI. One of the best-known examples was presented at the International Conference on Machine Learning (ICML) in 2018.

    The researchers 3D printed a turtle, then demonstrated the various ways image-recognition algorithms misinterpreted it. Depending on the viewing angle, results ranged from accurate detection to the system classifying the turtle as a rifle.

    There is no formula that can teach an algorithm, once and for all, what a human is, so there is no fully reliable way to prevent these systems from misidentifying civilians or allies during combat.

    Fooling the Algorithm

    Algorithms can also be tricked or confused by external interference. During a live demonstration in 2020, one military sensor designed to distinguish between military and civilian vehicles incorrectly marked a walking human and a nearby tree as the same type of target.

    This is especially risky with autonomous and semi-autonomous vehicles, a common project for many military groups worldwide. Aircraft, drones, naval vessels, and standard ground vehicles are susceptible to a range of faulty input.

    One research group at the University of Michigan demonstrated how LiDAR-based perception systems – the integral sensor system in many of these designs – can be tricked into hallucinating nonexistent obstacles. All it takes is an understanding of how light pulses interact with the sensor and careful timing to disrupt its distance calculations and inject false data.
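
    The toy simulation below (not the University of Michigan attack itself) illustrates why this works: if an attacker can inject even a handful of well-timed return pulses, a naive detector that trusts raw ranges will report a phantom obstacle. Every number and function name here is invented for the example.

```python
# Toy illustration of LiDAR spoofing: a few injected near-range returns cause a
# naive obstacle detector to report something that is not there.
import numpy as np

rng = np.random.default_rng(1)

# Genuine returns: background clutter 80-120 m away plus a real object near 60 m.
clutter = rng.uniform(80.0, 120.0, size=200)
real_object = rng.normal(loc=60.0, scale=0.1, size=20)
clean_scan = np.concatenate([clutter, real_object])

# Spoofed returns: a few injected pulses timed to look like something 5 m ahead.
spoofed_scan = np.concatenate([clean_scan, np.full(5, 5.0)])

def nearest_obstacle(ranges_m: np.ndarray, min_points: int = 3) -> float:
    """Naive detector: report the closest range backed by a few agreeing returns."""
    closest = np.sort(ranges_m)[:min_points]
    if np.ptp(closest) < 0.5:   # the closest returns agree, so trust them
        return float(closest.mean())
    return float("inf")         # nothing solid detected

print(f"Clean scan:   nearest obstacle ~ {nearest_obstacle(clean_scan):.1f} m")
print(f"Spoofed scan: nearest obstacle ~ {nearest_obstacle(spoofed_scan):.1f} m")
```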

    Deepfakes

    If AI is intelligent enough to mimic human understanding, it is also capable of creating false information convincing enough to fool trained experts. “Deepfake” refers to AI-generated images, videos, or audio files that are nearly indistinguishable from the real thing.

    Countries can use deepfakes to spread false propaganda, blackmail officials, fool top military decision-makers, and sow a general sense of distrust between a government and its constituents.

    As deepfakes become realistic enough to trick top-of-the-line forensic tools, the U.S. DoD’s Defense Advanced Research Projects Agency (DARPA) launched Media Forensics (MediFor), a program aimed at automated tools for assessing whether images and video have been manipulated – helping U.S. military organizations keep pace with the algorithms that create such fakes.

    Adversarial Attacks

    An adversarial attack is any object, sound, image, or video altered just slightly enough to fool an AI algorithm – and, through it, manipulate the human decision-maker on the other end. These attacks are nearly impossible to predict, and even with advance warning, black-box AI often leaves us guessing how the algorithm will respond.

    Enemy groups can target AI-powered systems by blocking critical sensors, interrupting communication between operators, or changing data and object sources via “input attacks” to intentionally disrupt or destroy the other side’s operation.

    AI algorithms and computer-powered machines are also susceptible to spoofing attacks and hacking, in which they are fed false information to deliberately derail their calculations or provoke an adverse response the system could never anticipate.
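
    To show the mechanics of such an “input attack”, here is a deliberately tiny sketch in the spirit of the fast gradient sign method (FGSM), applied to an invented linear classifier rather than any real military model. The weights, inputs, and step size are assumptions for illustration; the general idea is that small, gradient-guided changes to an input can flip a model’s decision.

```python
# FGSM-style input attack on a toy linear "threat classifier": nudge each input
# feature slightly in the direction that most increases the model's error.
# The classifier, its weights, and the input are invented for illustration.
import numpy as np

# A toy pre-trained linear classifier: score > 0 means "threat", <= 0 "no threat".
weights = np.array([1.5, -2.0, 0.8, 0.5])
bias = -0.2

def predict(x: np.ndarray) -> str:
    return "threat" if x @ weights + bias > 0 else "no threat"

# An input the model currently classifies correctly as "no threat".
x_clean = np.array([0.1, 0.9, 0.3, 0.2])
print("clean input:       ", predict(x_clean))

# For a linear model, the gradient of the score with respect to the input is just
# `weights`, so stepping each feature by epsilon * sign(gradient) pushes the
# score toward the "threat" side while keeping each change small and bounded.
epsilon = 0.35
x_adv = x_clean + epsilon * np.sign(weights)
print("perturbed input:   ", predict(x_adv))
print("per-feature change:", np.round(x_adv - x_clean, 2))
```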

    Examples of AI in the Military

    Global leaders are aware of the dangers posed by military use of AI, yet they continue its development under pressure from rival nations. They have not completely abandoned safety concerns, however. In the United States, the government is taking measures toward the safe use of AI, including a series of ethical principles to follow; in the UK, the Defence Centre for AI Research (DCAR) was launched with a focus, in part, on AI ethics and safety.

    But even as safety is being addressed, new military AI systems keep appearing. As a defensive tool, Phalanx – a computer-controlled gun system on U.S. Navy ships that detects, evaluates, and engages incoming missiles – demonstrates AI’s military effectiveness. However, many examples of AI in the military lean more toward the offensive side of battle.

    Air Combat

    In March 2003, days after the U.S. began its invasion of Iraq, American troops followed a computerized weapon system’s recommendation and fired at what it identified as an incoming missile, in what they believed to be a defensive engagement.

    The system’s automated classification was tragically wrong, and the U.S. inadvertently shot down a British Royal Air Force Tornado fighter jet (ZG710), killing both crew members. The friendly-fire incident resulted from a combination of factors on both sides, but it has left many analysts questioning the very idea of introducing AI autonomy into aerial combat.

    Military groups continue working to evolve and improve targeting, but it is impossible to ignore the potential consequences of combining machine speed and lethal force with unpredictable systems prone to miscalculation.

    Autonomous Weapons

    Autonomous weapons are at the heart of many ethical conversations surrounding AI in the military. Countries and leaders must consider unprecedented questions – such as the appropriate degree of human intervention in machine-driven attacks – to create policies that ultimately protect human life rather than endanger it.

    Lethal autonomous weapon systems (LAWS) use sensors and AI algorithms to identify, engage and destroy targets without manual human control. The practical effects could change the course of combat in areas where communication isn’t possible. For example, military drones could fly to inaccessible locations to target enemy soldiers without any human intervention needed.

    The risks here are evident, which is the primary reason LAWS aren’t yet widespread. That said, there are currently no legal prohibitions against developing LAWS in the United States. A United Nations report made waves when it described what may have been the first known combat debut of a LAWS – a drone that autonomously engaged fighters during Libya’s conflict in 2020. Though the report didn’t say the weapon killed anyone, it marks the beginning of what could prove a perilous chapter in military AI.

    South Korea has developed a sentry robot – the Super aEgis II – equipped with a machine gun that can automatically detect and fire at human targets as far as a mile and a half away. The armed robots are currently being tested along the Demilitarized Zone on the border with North Korea, and reports indicate individual units have been sold to other governments.

    In Israel, the Defense Ministry recently unveiled a medium robotic combat vehicle (MRCV), a fully automated tank. Using AI, the vehicle can detect and respond to incoming threats, control drones, and transport unmanned aerial vehicles. While this technology can certainly save lives – if the vehicle is destroyed, only money is lost, not human life – concerns have been raised about its security: should the automated tank be hacked by an enemy, it could do even worse damage to friendly forces.

    Battlefield Armor

    Russia’s Armata T-14 tank, the country’s primary focus for battlefield armor, is intended to eventually replace its current tank arsenal. The new units are designed with a lower profile for better concealment and automatic guns that find and track targets, leaving human operators to approve or decline the proposed shot.

    The Armata achieves its lower profile by eliminating the need for inhabited turrets, similar to the United States military’s heavy Stryker carrier. Instead of being stationed beside the tank’s weapon, human operators are safely concealed inside the vehicle.

    While this is an incredible advancement for safeguarding soldiers, it’s another example of trusting AI to accurately detect and label potential targets and puts a lot of pressure – perhaps an unrealistic amount – on human operators responsible for approving or denying the machine’s selected target during battle.

    The Future of AI in the Military

    At this point, it isn’t possible to ignore or refuse to use AI in military applications. If only one country commits to experimenting with computer-based automation, the rest of the world must follow suit if it hopes to protect its soldiers and civilians.

    However, safety is paramount, and militaries must pay far more attention to the safe use of AI. Should safeguards fail, AI could target allies and civilians, sow distrust, attack nonexistent obstacles, and worse. Until we can better ensure AI safety, we cannot rely on it entirely.

    If military organizations continue developing and advancing AI systems, governments must enforce a system of ethical principles – such as the Defense Innovation Board’s five principles of AI ethics – and maintain a balance between machine and human decision-making by favoring semi-autonomy or partial autonomy over full AI autonomy. In addition, organizations should invest in AI Red Teaming.
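
    As a closing illustration of what “semi-autonomy” can mean in software terms, here is a hypothetical sketch of a human-approval gate: the machine may only propose a target, while an explicit, logged human decision is required before anything is acted on. All names, thresholds, and data structures are invented for the example and are not drawn from any fielded system.

```python
# Hypothetical human-in-the-loop approval gate for a semi-autonomous system:
# the model proposes, a human decides, and every decision is logged.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TargetProposal:
    track_id: str
    classification: str   # e.g. "hostile vehicle"
    confidence: float     # model confidence in [0, 1]

def request_human_decision(proposal: TargetProposal) -> bool:
    """Block until a human operator explicitly approves or declines."""
    prompt = (f"Track {proposal.track_id}: classified as '{proposal.classification}' "
              f"(confidence {proposal.confidence:.0%}). Approve engagement? [y/N] ")
    return input(prompt).strip().lower() == "y"

def handle_proposal(proposal: TargetProposal, min_confidence: float = 0.9) -> None:
    timestamp = datetime.now(timezone.utc).isoformat()
    # Low-confidence proposals never reach the operator as engagement options.
    if proposal.confidence < min_confidence:
        print(f"[{timestamp}] Track {proposal.track_id}: confidence too low, "
              "flagged for review only.")
        return
    approved = request_human_decision(proposal)
    verdict = "APPROVED by operator" if approved else "DECLINED by operator"
    print(f"[{timestamp}] Track {proposal.track_id}: {verdict}.")

if __name__ == "__main__":
    handle_proposal(TargetProposal(track_id="T-042",
                                   classification="hostile vehicle",
                                   confidence=0.94))
```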

     

    This article was written by Zac Amos

    Zac writes about AI and cybersecurity for the online tech magazine ReHack, where he is the Features Editor. For more of his work, follow him on Twitter or LinkedIn.

     
