Towards Secure AI Week 45 – AI Safety Through Testing, Legislation, and Talent Building

Secure AI Weekly + Trusted AI Blog, November 12, 2024


Microsoft’s Yonatan Zunger on Red Teaming Generative AI

The Cyber Wire, November 6, 2024

In a recent Microsoft Threat Intelligence Podcast episode, host Sherrod DeGrippo speaks with Yonatan Zunger, Corporate Vice President of AI Safety and Security at Microsoft, to explore the critical importance of securing AI systems. The conversation centers on Microsoft’s AI Red Team, dedicated to finding and mitigating vulnerabilities within AI technologies. Zunger explains how this specialized team employs advanced threat simulations to identify potential security risks, assess the resilience of AI systems, and implement safeguards. Their approach combines technical expertise and innovative methods, emphasizing the importance of rigorous testing to protect AI products in an evolving threat landscape.

This discussion offers deep insights into AI security challenges, focusing on understanding the unique characteristics—or “psychology”—of AI, which is crucial for developing effective defenses. Zunger also highlights how training and technical protections can help prevent risks and how financial incentives can drive performance improvements in AI systems. Key questions addressed in the episode include how Retrieval-Augmented Generation (RAG) functions, what risks arise from data access and permissions, and whether accuracy-linked rewards can enhance the reliability of AI responses.
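The data-access risk mentioned above is concrete in RAG pipelines: if retrieval ignores document permissions, a model can surface restricted content to any user who asks the right question. Below is a minimal sketch of permission-aware retrieval, assuming a toy keyword scorer in place of vector search; all names (`Document`, `retrieve`, `build_prompt`) are illustrative and not drawn from any Microsoft system discussed in the episode.

```python
# Sketch: filter retrieval candidates by the user's access rights BEFORE
# ranking, so restricted chunks never enter the prompt of an unauthorized
# user. Hypothetical example, not a production design.
from dataclasses import dataclass, field


@dataclass
class Document:
    text: str
    allowed_groups: set = field(default_factory=set)  # ACL on this chunk


def retrieve(query: str, docs: list, user_groups: set, k: int = 2) -> list:
    """Return the top-k documents the user is permitted to read.

    Scoring is naive keyword overlap; a real system would use vector
    similarity. The safety property is the permission filter, which
    runs before ranking rather than after generation.
    """
    readable = [d for d in docs if d.allowed_groups & user_groups]
    q_terms = set(query.lower().split())
    return sorted(
        readable,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )[:k]


def build_prompt(query: str, context: list) -> str:
    ctx = "\n".join(f"- {d.text}" for d in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"


docs = [
    Document("Quarterly revenue grew 12 percent.", {"finance"}),
    Document("The cafeteria menu changes on Mondays.", {"everyone"}),
]
# A user outside the 'finance' group never sees the revenue document,
# even though it matches the query better.
hits = retrieve("what was revenue growth", docs, user_groups={"everyone"})
```

Filtering before retrieval (rather than asking the model to withhold restricted facts) is the design choice that matters: content the model never receives cannot leak through its answer.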

UK will legislate against AI risks in next year, pledges Kyle

Financial Times, November 2024

The UK is set to introduce AI-focused legislation in 2025, aiming to move from voluntary AI safety agreements to a legally binding framework. Announced by Peter Kyle, Secretary of State for Science, Innovation, and Technology, the upcoming bill will establish clear regulations for AI developers and transition the AI Safety Institute into an independent entity. The goal is to address public concerns by ensuring robust risk management as advanced AI models are developed and deployed.

Additionally, the legislation will emphasize secure infrastructure, supporting both sovereign AI models and broader computing needs. Given that full funding cannot come from the government alone, private sector collaboration will be vital to establish the estimated £100 billion in infrastructure. Complementing these plans, a new AI assurance platform will offer businesses practical tools to assess AI-related risks, conduct impact assessments, and minimize bias in AI applications.

World needs to focus more on AI security and safety, says US science envoy

Channel NewsAsia, November 8, 2024

The world’s attention to AI has centered on training data models and crafting regulations, leaving a significant gap in security testing of AI systems and in enforceable laws, according to Dr. Rumman Chowdhury, the U.S.’s first AI science envoy. Effective AI deployment requires rigorous testing and enforceable accountability to prevent risks. She highlighted issues such as bias in AI data, which often reflects Western perspectives, and the importance of region-specific solutions for applications like agriculture.

Chowdhury also noted that talent shortages hinder AI’s growth, especially in fields like finance, where rapid AI advancement poses challenges. To address this, she advocates for collaboration across public, private, and educational sectors to build a robust AI talent pool. Finally, she emphasized creating educational programs that blend ethics with technical training to support AI’s safe and equitable development.
