Towards Secure AI Week 12 – New NIST AI Security Efforts

Secure AI Weekly + Trusted AI Blog admin todayApril 2, 2025 8

Background
share close

NIST AI 100-2 E2025. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST, March, 2025

The National Institute of Standards and Technology (NIST) has released a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2 E2025). The report categorizes AML threats based on the type of machine learning used, the stage at which attacks occur, and the objectives of adversaries, helping organizations identify vulnerabilities and strengthen AI defenses.

One of the critical challenges in securing AI systems is ensuring data integrity, model robustness, and continuous monitoring. Attackers can manipulate training data, introduce deceptive inputs, or exploit weaknesses in model design, causing AI systems to malfunction or make biased decisions. To counteract these threats, NIST outlines several mitigation strategies, including data sanitization, adversarial training, and rigorous evaluation metrics. By exposing models to adversarial examples during training and employing comprehensive testing protocols, AI systems can be made more resilient against manipulation. Additionally, real-time monitoring of AI operations can help detect and respond to threats before they cause significant damage.

A key contribution of the NIST report is the establishment of a standardized vocabulary for adversarial machine learning, ensuring clear communication among researchers, developers, and policymakers. By defining common terms and attack categories, the report fosters collaboration in securing AI systems against evolving threats. As AI continues to advance, organizations must prioritize security by adopting NIST’s recommendations, implementing robust defense mechanisms, and staying vigilant against adversarial attacks. Strengthening AI security is essential not only for maintaining trust in technology but also for safeguarding critical applications in fields like healthcare, finance, and national security.

North Korea launches new unit with a focus on AI hacking, per report

TechCrunch, March 20, 2025

North Korea is reportedly enhancing its cyber warfare capabilities by establishing a new unit within its intelligence agency, the Reconnaissance General Bureau (RGB). This unit, known as Research Center 227, is tasked with developing offensive hacking technologies and programs aimed at infiltrating Western cybersecurity systems. Its objectives include researching computer networks, stealing digital assets, and advancing AI-based techniques for information theft. ​

The formation of Research Center 227 underscores North Korea’s strategic focus on cyber operations as a means to circumvent international sanctions and fund its military ambitions. The RGB has a history of orchestrating significant cyberattacks, notably through groups like Lazarus, which has been implicated in large-scale cryptocurrency thefts. For instance, Lazarus reportedly stole over £1.2 billion in Ethereum from the cryptocurrency exchange Bybit, contributing to North Korea becoming the third-largest holder of bitcoin globally. ​

The establishment of this AI-focused hacking unit raises concerns about the security and safety of AI systems worldwide. As AI technologies become increasingly integrated into critical infrastructure, the potential for sophisticated cyberattacks grows. North Korea’s investment in AI-driven hacking capabilities highlights the urgent need for robust cybersecurity measures to protect sensitive information and maintain the integrity of AI systems against emerging threats.

Project OWASP Promotes GenAI Security Project to Flagship Status

Yahoo Finance, March 27, 2025

The Open Worldwide Application Security Project (OWASP) has elevated its Generative AI Security Project to flagship status, underscoring the critical importance of addressing security concerns in generative artificial intelligence (GenAI) systems. This global, open-source initiative is dedicated to identifying, mitigating, and documenting security and safety issues associated with GenAI technologies. ​

The OWASP GenAI Security Project serves as a collaborative platform for experts and organizations to share knowledge, tools, and best practices aimed at enhancing the security of GenAI applications. By promoting awareness and providing resources, the project seeks to equip developers and security professionals with the necessary means to safeguard GenAI systems against emerging threats. ​

As GenAI technologies become increasingly integrated into various sectors, ensuring their security and safety is paramount. The OWASP GenAI Security Project’s elevation to flagship status reflects the growing recognition of these challenges and the need for concerted efforts to address them. Through collaboration and shared expertise, the project aims to foster a more secure environment for the development and deployment of generative AI systems.

Review: The Developer’s Playbook for Large Language Model Security

HelpNet Security, March 25, 2025

Steve Wilson, Chief Product Officer at Exabeam and a seasoned expert in AI, cybersecurity, and cloud computing, has authored The Developer’s Playbook for Large Language Model Security. This comprehensive guide offers security professionals practical strategies to safeguard AI-driven applications. ​

Wilson’s playbook delves into critical security concerns associated with LLMs, such as prompt injection attacks, adversarial data poisoning, and supply chain vulnerabilities. It provides actionable mitigation techniques, including runtime safeguards and input validation, to fortify LLM-based systems. A notable feature of the book is its alignment with the OWASP Top 10 for LLM Applications project, which Wilson leads. This connection ensures that the identified security risks and mitigation strategies are vetted by a global network of experts, enhancing their applicability in real-world scenarios. ​

Designed to serve both as an introductory resource and a reference guide, The Developer’s Playbook for Large Language Model Security is essential reading for security professionals tasked with protecting AI applications. Its technical depth, practical strategies, and real-world case studies equip readers with the knowledge to navigate the evolving landscape of AI security effectively.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post