Towards Secure AI Week 31 – New AI Security Standards and Laws

Secure AI Weekly + Trusted AI Blog — August 7, 2024


Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

NIST, July 26, 2024

The National Institute of Standards and Technology (NIST) has released the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” a companion to the AI Risk Management Framework (AI RMF 1.0). The framework is designed to help organizations navigate the unique risks associated with generative AI technologies, in alignment with the goals of ensuring safe, secure, and trustworthy AI systems.

The Generative AI Profile emphasizes the importance of managing risks across the entire AI lifecycle — from development and deployment to usage and monitoring. It outlines strategies for addressing key areas such as ethics, bias, privacy, trust, and cybersecurity. These measures are vital to mitigating potential negative impacts and ensuring that AI systems operate within legal and regulatory frameworks while maintaining public trust.

NIST’s framework is intended for voluntary adoption, providing organizations with a flexible yet structured approach to integrating risk management into their AI practices. The goal is to foster responsible AI development that prioritizes human-centric values and societal well-being.

World’s first major AI law enters into force — here’s what it means for U.S. tech giants

CNBC, August 1, 2024

The EU AI Act, first proposed in 2020 and approved in May by EU member states and the European Commission, aims to mitigate the potential harms associated with AI technologies. It establishes a comprehensive regulatory framework across the EU, targeting not only tech giants like Microsoft, Google, Amazon, Apple, and Meta but also any other business, regardless of industry, that utilizes AI. The law adopts a risk-based approach, imposing stricter requirements on AI applications considered high-risk, such as autonomous vehicles and medical devices, including thorough risk assessments, quality control measures to prevent bias, and mandatory documentation for compliance checks.

The implications of the AI Act extend beyond the EU’s borders, affecting any company operating within the EU or impacting its citizens, regardless of the firm’s location. This legislation places significant scrutiny on the operations of major tech firms, particularly regarding their handling of EU citizen data and compliance with EU regulations. For instance, Meta has preemptively restricted its AI model availability in Europe due to regulatory uncertainties, underscoring the far-reaching effects of the EU’s stringent data protection laws. As a result, U.S. tech firms face increased regulatory oversight and potential adjustments to their AI strategies to align with the new European standards, ensuring their AI systems are safe, secure, and compliant with the EU’s rigorous legal requirements.

Mapping the misuse of generative AI

Google DeepMind, August 2, 2024

Generative AI, while capable of producing creative and innovative outputs, can also be used maliciously. For instance, it can generate realistic images, videos, or text that might be used to spread misinformation or create deepfakes. Such capabilities pose significant risks, including the potential to mislead or deceive the public, infringe on privacy, and even disrupt societal stability. To address these concerns, the article emphasizes the importance of developing robust safety measures and ethical frameworks. This includes watermarking AI-generated content, as seen with SynthID, a tool that embeds imperceptible digital watermarks into images. These watermarks help identify AI-generated content and maintain transparency, thereby preventing misuse and ensuring trust in digital media.
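SynthID’s internals are not public, so as an illustration only, the general idea of an imperceptible watermark can be sketched with a toy least-significant-bit scheme (the pixel values and 8-bit tag below are hypothetical, and this is not SynthID’s actual method):

```python
# Toy illustration of imperceptible watermarking (NOT SynthID's method):
# hide a short bit string in the least-significant bits of pixel values.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit provenance tag

def embed(pixels, bits):
    """Return a copy of `pixels` with `bits` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def detect(pixels, n_bits):
    """Read the first `n_bits` LSBs back out of a (possibly marked) image."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 201, 199, 198, 202, 203, 197, 196]  # fake grayscale pixels
marked = embed(image, WATERMARK)

assert detect(marked, 8) == WATERMARK
# Each pixel changes by at most 1, so the mark is visually imperceptible:
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

Real schemes like SynthID are far more sophisticated — designed to survive cropping, compression, and re-encoding — but the goal is the same: a detector can recover the provenance signal while a human viewer sees no difference.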

Moreover, the article highlights the need for comprehensive evaluation frameworks to identify novel risks associated with AI systems. This involves assessing AI models for dangerous capabilities, such as manipulation or cybersecurity threats, and ensuring alignment with ethical standards. By proactively identifying and mitigating these risks, developers can prevent harmful applications of AI and ensure responsible use of the technology. This approach not only protects individuals and organizations but also fosters public trust in AI innovations.

NIST AISI Releases First Set of Draft Guidance on Dual-Use Models

MeriTalk, July 31, 2024

The National Institute of Standards and Technology (NIST) recently introduced its AI Safety Institute (AISI) and published a draft guidance titled “Managing Misuse Risk for Dual-Use Foundation Models.” This document, available for public comment until September 9, aims to assist AI developers in mitigating risks associated with the potential misuse of AI technologies. The guidelines, crafted in response to President Biden’s October 2023 executive order, detail seven key strategies to prevent AI systems from being used for harmful purposes. These strategies include anticipating misuse risks, managing model theft, and ensuring transparency about potential dangers. The guidance addresses both speculative and immediate threats, such as the use of AI for creating biological weapons or deepfake pornography, emphasizing the need for robust safety measures and ethical considerations.

The AISI’s Director, Elizabeth Kelly, announced upcoming initiatives, including pre-deployment testing of advanced AI models to ensure their safety. This initiative is bolstered by commitments from major AI companies like Google, Microsoft, and Apple. Additionally, AISI plans to establish a global network to promote AI safety, with a significant event scheduled in November in San Francisco. This gathering aims to bring together stakeholders from various sectors, including academia, industry, and civil society, to discuss benchmarks and risk mitigation strategies. These efforts reflect the U.S.’s dedication to maintaining a leadership role in AI safety and innovation, as further evidenced by the Senate’s consideration of the Future of AI Innovation Act, which seeks to formalize the establishment of the AISI.
