NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Articles admin todayMarch 31, 2025 42

Background
share close

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025)

In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” (AI 100-2 E2023)

moving from the 2023 edition to the significantly refined 2025 (AI 100-2 E2023 ) release. This article summarizes critical differences, providing strategic insights for CISOs and detailed technical perspectives for security researchers and AI Red Team practitioners.

NIST AI 100-2 E2025 VS E2023 High-Level Changes for CISOs

As AI systems become a core part of enterprise technology stacks, CISOs must remain vigilant to emerging risks. The recent NIST update brings forward substantial improvements, reflecting the rapid advancements and increased threats faced by organizations:

1. Comprehensive Coverage of Attacks

The 2025 NIST report greatly enhances its adversarial ML attack taxonomy, providing expanded definitions and clear categorization. It specifically details advanced generative AI (GenAI) threats, including misuse and prompt injection attacks, clearly delineating between various types of attacks affecting integrity, availability, and privacy—thus enabling clearer risk assessment and mitigation planning.

2. Emphasis on Practical and Operational Impacts

Where the 2023 report primarily discussed theoretical models, the latest edition dives deeper into practical scenarios, explicitly illustrating real-world instances of adversarial attacks. It adds dedicated sections highlighting real-world deployments, typical failures, and successful strategies for managing AI security risks, a crucial improvement as organizations operationalize advanced AI tools.

3. Inclusion of Emerging Threat Vectors and Enterprise Integration

Reflecting current adoption patterns, the 2025 document notably includes explicit guidance on securing AI supply chains, dealing with risks posed by autonomous AI agents, and securing enterprise-grade GenAI integrations through detailed reference architectures. This focus ensures security executives are well-equipped to manage these evolving threats.

NIST AI 100-2 E2025 VS E2023 Detailed Differences for AI Security Researchers and Practitioners

Beyond strategic insights, security experts and red team specialists will appreciate the granular technical evolution in NIST’s adversarial ML taxonomy:

Expanded Attack Categories and Granularity

The taxonomy in the 2023 edition primarily covered three broad attack types (evasion, poisoning, privacy attacks). In contrast, the 2025 taxonomy significantly expands to include clearly defined subcategories such as:

  • Clean-label Poisoning: Attacks that subtly corrupt data without altering labels, thus harder to detect.
  • Indirect Prompt Injection: Sophisticated attacks that exploit external or indirect channels to manipulate GenAI behaviors.
  • Misaligned Outputs (in GenAI): Attacks inducing AI models to produce misleading or harmful outputs despite appearing operationally sound.
  • Energy-latency Attacks: Emerging concerns around resource exhaustion attacks, directly affecting infrastructure-level stability.

Enhanced Real-World Context

The 2025 report intentionally incorporates detailed real-world examples and case studies. Practical case studies include poisoning attacks against deployed financial ML models, privacy breaches from enterprise-grade GenAI chatbots, and operational disruptions through indirect prompt injections. These scenarios significantly improve practical understanding and enable actionable red team testing scenarios.

Stronger Emphasis on Generative AI Security

Acknowledging GenAI’s rapid adoption, NIST’s 2025 edition comprehensively integrates GenAI into its taxonomy, detailing attacks specific to large language models (LLMs), retrieval-augmented generation (RAG) systems, and agent-based AI deployments. Security researchers can now access detailed insights into securing GenAI against increasingly sophisticated adversaries.

Introduction of AI Misuse and Agent Security

A new prominent inclusion is the explicit categorization of Misuse Violations, aimed at capturing security risks arising from attackers exploiting model capabilities to bypass safeguards. Additionally, explicit attention is paid to vulnerabilities within AI Agents, automated AI-driven systems capable of autonomous interactions—an emerging attack vector not covered in the 2023 edition.

Broader Collaboration and Expert Inputs

The 2025 document draws from international collaboration between NIST, the U.S. AI Safety Institute, and the U.K. AI Security Institute, significantly broadening the spectrum of experiences and insights. This international expertise provides an authoritative perspective on global trends and best practices in AI security.

NIST AI 100-2 E2025 VS E2023: Practical Recommendations for Red Teams and CISOs

To practically leverage these insights, we recommend the following immediate actions:

  1. Update Risk Assessments:
    Red teams should immediately integrate the expanded taxonomy to ensure comprehensive testing coverage, particularly in newly defined GenAI-specific attacks.
  2. Real-world Scenario Testing:
    Organizations should emphasize testing realistic scenarios reflecting actual business deployments of AI, guided by detailed use-cases included in the updated NIST guidance.
  3. Secure Enterprise Integration:
    The provided reference architectures and adoption pipelines are invaluable resources for building resilient enterprise AI environments. Using them will help anticipate vulnerabilities before they emerge operationally.

NIST AI 100-2 E2025 VS E2023: Summary

The updated 2025 edition of NIST’s adversarial machine learning guidance is a major leap forward, notably emphasizing real-world scenarios, enterprise deployment risks, and advanced GenAI security concerns. With significantly refined classifications and newly addressed practical threats—such as indirect prompt injection and AI agent vulnerabilities—the document now aligns closely with current operational needs. This evolution provides organizations critical knowledge for staying ahead of adversaries in today’s fast-paced AI landscape.

As threats evolve, your AI red team strategies must evolve alongside. Leveraging this new taxonomy will better equip your team, significantly strengthening your organizational resilience against increasingly sophisticated adversaries.

Written by: admin

Rate it
Previous post