Towards Secure AI Week 18 – NIST’s New Guides Address AI Security Risks

Secure AI Weekly + Trusted AI Blog · May 8, 2024


NIST publishes new guides on AI risk for developers and CISOs

CSO Online, May 1, 2024

The US National Institute of Standards and Technology (NIST) has released four guides that explore the risks outlined in its influential 2023 AI Risk Management Framework (AI RMF). Aimed at AI developers and cybersecurity professionals, the guides focus on concerns such as generative AI risks, poisoned or malicious training data, and synthetic content, and offer recommendations for mitigating these risks effectively.

While not regulatory mandates, these guides set out good-practice principles that organizations should follow to safeguard their AI systems. As CEO Kai Roer notes in the article, however, integrating this guidance into existing cybersecurity frameworks remains a challenge for industry professionals. With regulatory oversight on the horizon and AI-driven automation increasingly attractive to malicious actors, proactive measures are crucial to strengthen cybersecurity defenses. By prioritizing collaboration and vigilance and following NIST's guidance, stakeholders can work collectively towards a safer AI landscape.

NIST Unveils Draft Guidance Reports Following Biden’s AI Executive Order

Tech Policy Press, May 3, 2024

In response to President Biden’s Executive Order on AI security, NIST has introduced four draft guidance reports aimed at enhancing the safety and security of artificial intelligence. The reports, which cover topics such as managing the risks of generative AI and developing global AI standards, mark a significant step in addressing the complexities of AI technologies. NIST Director Laurie E. Locascio underscores the importance of these guidelines in supporting innovation while mitigating AI's unique risks.

The ongoing public comment period provides stakeholders with an opportunity to contribute feedback, fostering collaborative AI governance. The reports offer technical approaches to reduce risks, such as synthetic content risks, and emphasize the importance of grounding AI standards in rigorous technical foundations. NIST’s efforts aim to provide organizations with the tools and knowledge necessary to navigate the evolving landscape of AI security effectively.

US Homeland Security names AI safety, security advisory board

Reuters, April 27, 2024

To tackle the evolving landscape of AI-related threats to critical infrastructure, the U.S. Department of Homeland Security (DHS) has formed a blue-ribbon advisory board comprising industry leaders from prominent tech firms including OpenAI, Microsoft, Google's parent company Alphabet, and Nvidia. Led by Homeland Security Secretary Alejandro Mayorkas, the board's primary focus is devising practical solutions to safeguard vital services against AI-related disruptions. With the transportation, energy, utilities, defense, and finance sectors in mind, the board aims to prevent and prepare for AI-enabled threats that could jeopardize national security and public safety.

The 22-member board brings together a diverse array of expertise: CEOs from leading technology, transportation, and energy companies, along with government officials. Its planned quarterly meetings underscore a proactive approach to the challenges posed by AI-assisted cyberattacks and emerging technologies. As DHS sounds the alarm on the risks of AI-enabled tools, the board's formation signals a concerted effort to stay ahead of evolving threats and ensure the safe deployment of AI across critical sectors.
