Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.
NSA, April 15, 2024
The National Security Agency (NSA) has unveiled a comprehensive set of recommendations aimed at bolstering the security of AI systems. Released under the title “Strategies for Implementing AI Systems Securely: Effective Measures for Establishing Secure and Resilient AI Systems,” this initiative is designed to provide support to entities responsible for deploying and managing AI systems. Recognizing the immense potential of AI alongside its susceptibility to misuse, NSA Cybersecurity Director Dave Luber emphasizes the need for proactive cybersecurity measures.
The guidance, developed by the newly established Artificial Intelligence Security Center (AISC), in collaboration with key agencies like the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI), aims to equip stakeholders with the necessary expertise and insights to mitigate AI-related risks effectively. While primarily tailored for national security contexts, the guidance holds relevance for any organization integrating AI capabilities into their operational framework, particularly those operating in high-risk environments. It expands upon previous directives such as the Guidelines for Secure AI System Development and Engaging with Artificial Intelligence, reinforcing the commitment to enhancing the confidentiality, integrity, and availability of AI systems.
CMU Supports NIST Guidelines on Red Teaming for Generative AI
Carnegie Mellon University, April 11, 2024
Carnegie Mellon University’s Block Center for Technology and Society and K&L Gates Initiative in Ethics and Computational Technologies have jointly released a white paper supporting national efforts to ensure the safety and reliability of AI systems. Following a workshop on red teaming—a method to uncover weaknesses in AI systems—experts from academia and industry gathered to address concerns raised by President Biden’s executive order on AI standards. Led by CMU’s collaboration with NIST, the initiative aims to define and implement red teaming strategies specific to generative AI models, crucial for shaping responsible AI integration in governmental operations.
Hoda Heidari, a Professor at CMU, emphasized the pivotal role of red teaming in addressing safety and trustworthiness concerns regarding AI models. Despite its importance, questions linger regarding its precise definition and regulatory implications. The workshop delved into various aspects of AI red teaming, including research insights and legal considerations. The white paper and event recordings are valuable resources contributing to the discourse on AI security and safety.
Announcing MLCommons AI Safety v0.5 Proof of Concept
MLCommons
The MLCommons™ AI Safety working group introduces a pivotal milestone with the release of its v0.5 proof-of-concept (POC) benchmark, aimed at evaluating the safety of large language models (LLMs). Spearheaded by industry leaders and researchers, this initiative marks a crucial step towards standardizing AI safety assessment methodologies. Led by co-chair Percy Liang from Stanford’s Center for Research on Foundation Models (CRFM), the benchmark invites community experimentation and feedback, paving the way for future enhancements and a comprehensive v1.0 release later this year. Through collaborative efforts, MLCommons aims to establish a universal framework for AI safety evaluation, promoting the reliability and trustworthiness of AI systems across diverse applications and industries.
The v0.5 POC introduces a streamlined benchmarking framework, encompassing hazard tests, reporting platforms, and an evaluation engine inspired by Stanford CRFM’s HELM framework. While initially focusing on text-based language models, future iterations aim to expand the benchmark’s scope to include diverse modalities and use cases. MLCommons underscores the importance of community engagement in refining AI safety standards, fostering a collaborative ecosystem to address emerging challenges and ensure the responsible deployment of AI technologies.
Understanding Red Teaming for Generative AI
CIO Confluence, April 18, 2024
Red teaming has emerged as a vital strategy to fortify AI systems against potential threats, such as the dissemination of sensitive information or the generation of harmful content due to inherent biases. Originating from military exercises, red teaming has evolved to become instrumental in assessing vulnerabilities in computer networks and software, now extending its application to stress-test generative AI models. By deliberately provoking AI systems to produce unintended outputs, red teaming reveals hidden biases and vulnerabilities, fostering a culture of continual improvement and resilience against evolving threats.
In today’s cybersecurity landscape, organizations recognize the importance of red teaming in enhancing their AI security measures. By embracing this strategic approach, businesses can proactively detect vulnerabilities, bolster their cybersecurity posture, and cultivate a culture of vigilance and adaptability in response to emerging threats. Red teaming serves as a crucial tool in advancing AI safety and security, ensuring the reliability and trustworthiness of AI systems in an increasingly complex digital environment.
Rumman Chowdhury on AI red-teaming; a Sisense supply chain attack
CyberScoop, April 18, 2024
In a groundbreaking event at last August’s DEF CON computer security conference, over 2,000 participants engaged in attempts to breach some of the most sophisticated AI models globally. This marked the largest-ever public demonstration of red-teaming on large language models, highlighting the growing significance of this approach in responsibly deploying AI systems. Data scientist Rumman Chowdhury, a key organizer of the Generative AI Red Teaming Challenge at DEF CON, joins host Elias Groll on an episode of Safe Mode to delve into the insights garnered from this event. Additionally, CyberScoop reporter AJ Vicens contributes to the discussion by shedding light on a potentially severe supply chain attack involving Sisense, a prominent business analytics firm.
As policymakers and industry leaders increasingly recognize the critical importance of safeguarding AI systems against potential threats, events like the Generative AI Red Teaming Challenge serve as pivotal platforms for fostering dialogue and sharing best practices. Chowdhury’s insights offer valuable lessons on the importance of proactive measures like red-teaming in identifying vulnerabilities and ensuring the security and integrity of AI systems. Furthermore, the discussion surrounding the supply chain attack on Sisense underscores the ever-present need for robust cybersecurity measures to mitigate risks and protect against potential breaches in the AI ecosystem.
Subscribe for updates
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.