Towards Secure AI Week 44 – From Open-Source AI Risks to National Policies

Secure AI Weekly + Trusted AI Blog | admin | November 6, 2024


Researchers Uncover Vulnerabilities in Open-Source AI and ML Models

The Hacker News, October 29, 2024

Recent disclosures have highlighted more than thirty security vulnerabilities in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could allow remote code execution and unauthorized data access. Key flaws were identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, all reported through Protect AI’s Huntr bug bounty program. Among the most critical are two vulnerabilities in Lunary: one that could enable an authenticated user to manipulate other users’ data (CVE-2024-7474) and another that allows unauthorized access to sensitive information (CVE-2024-7475). A path traversal flaw in ChuanhuChatGPT (CVE-2024-5982) and several vulnerabilities in LocalAI further underscore the urgent need for robust security measures in AI systems.
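For readers unfamiliar with the class of bug behind CVE-2024-5982, a path traversal flaw typically arises when a user-supplied filename is joined to a server-side directory without being canonicalized and checked. The Python sketch below is a generic, hypothetical illustration of the defensive pattern; it is not taken from ChuanhuChatGPT’s codebase, and the directory name and function are assumptions made for the example.

```python
import os

# Hypothetical upload directory, used only for illustration.
UPLOAD_ROOT = os.path.realpath("/srv/app/uploads")

def resolve_upload_path(user_supplied_name: str) -> str:
    """Safely resolve a user-supplied filename inside UPLOAD_ROOT.

    The joined path is canonicalized *before* the containment check,
    so "../" sequences and symlinks cannot escape the allowed directory,
    which is the classic path traversal pattern.
    """
    candidate = os.path.realpath(os.path.join(UPLOAD_ROOT, user_supplied_name))
    if os.path.commonpath([UPLOAD_ROOT, candidate]) != UPLOAD_ROOT:
        raise ValueError(f"path traversal attempt rejected: {user_supplied_name!r}")
    return candidate

# resolve_upload_path("report.pdf")        -> "/srv/app/uploads/report.pdf"
# resolve_upload_path("../../etc/passwd")  -> ValueError
```

The key design choice is to canonicalize first and only then compare against the allowed root; checking the raw string (for example, rejecting ".." substrings) is much easier to bypass.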

As AI technologies evolve, prioritizing security and safety is essential to protect against potential exploits and ensure responsible usage.

Inside Google Cloud’s secure AI framework

Computer Weekly, October 28, 2024

In response to growing concerns regarding artificial intelligence (AI) security, Google Cloud has developed a secure AI framework that leverages its established security practices to help businesses manage the evolving risks associated with AI deployments. Phil Venables, Google Cloud’s chief information security officer (CISO), recently emphasized the framework’s focus on three key areas: software lifecycle risk, data governance, and operational risk. By controlling the entire AI stack, from hardware to data, Google can implement robust security measures from the ground up. Venables noted, however, that strong foundational infrastructure alone is not enough; customers must also be empowered to manage AI securely within their own environments.

The secure AI framework addresses software lifecycle risks by providing integrated tools within Vertex AI to streamline the AI development process, allowing for better management of model weights and parameters. Data governance is enhanced through features that track data lineage and ensure integrity while maintaining a clear separation between customer data and Google’s foundational models. To mitigate operational risks, tools like Model Armor help filter inputs and outputs, protecting against threats like prompt injection. As organizations transition from prototypes to production, many find the framework valuable for establishing effective risk management processes. Additionally, Google Cloud promotes industry collaboration by open-sourcing the framework and developing transparency tools like “data cards” and “model cards,” reflecting its commitment to enhancing AI security across various sectors.
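To make the input and output filtering idea concrete, the sketch below shows a minimal, hypothetical pre- and post-filter in Python. It does not reflect Model Armor’s actual API; the pattern list, function names, and redaction scheme are illustrative assumptions only.

```python
import re

# Toy deny-list of common injection phrasings. Real guardrail products use
# trained classifiers rather than regular expressions; this is a sketch only.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def screen_prompt(user_prompt: str) -> str:
    """Reject prompts matching known injection phrasings before they reach the model."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_prompt):
            raise ValueError("prompt rejected by input filter")
    return user_prompt

def screen_response(model_output: str, secrets: list[str]) -> str:
    """Redact configured secrets that leak into a model response."""
    for secret in secrets:
        model_output = model_output.replace(secret, "[REDACTED]")
    return model_output
```

Whatever the underlying detection technique, the shape of the control is the same: screen what goes into the model, and screen what comes back out.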

The Biden Administration’s National Security Memorandum on AI Explained

CSIS, October 25, 2024

On October 24, 2024, the Biden administration unveiled a pivotal National Security Memorandum (NSM) focused on advancing U.S. leadership in artificial intelligence (AI), ensuring its safety, and securing its use in national security operations. The document, which stems from the October 2023 AI Executive Order, lays out a comprehensive strategy for the rapidly evolving AI landscape, particularly emerging frontier AI models such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. These models, which represent a leap beyond earlier deep learning systems, are now central to both civilian and military applications. The NSM outlines the government’s approach to managing the risks associated with these advanced AI systems, emphasizing their growing importance in national security. Unlike previous policies focused on earlier, task-specific deep learning models, the NSM highlights the unique capabilities of frontier models, which are general-purpose and able to perform across a wide range of tasks, making them crucial for national security.

The memorandum sets forth key objectives to secure AI’s role in U.S. defense and intelligence efforts. These include maintaining U.S. leadership in AI development by attracting top talent, building critical infrastructure, and enhancing counterintelligence efforts to protect AI technologies from adversaries. The NSM also stresses the acceleration of AI adoption across national security agencies, ensuring that the U.S. government can harness the potential of these technologies effectively. It directs federal agencies to reform hiring and contracting practices to incorporate AI expertise and streamline the integration of AI into defense and intelligence operations. Furthermore, the document underscores the need for collaboration with allies, clarifying the strategic importance of maintaining global leadership in frontier AI while fostering secure and trustworthy AI systems. By addressing the security and safety of AI, the NSM reflects the administration’s recognition of AI as a transformative technology that, if properly managed, can bolster national security while mitigating potential risks.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
