Towards Secure AI Week 20 – OpenAI dissolves AI Safety team

Secure AI Weekly + Trusted AI Blog admin todayMay 20, 2024 103

Background
share close

OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it

CNBC, May 17, 2024

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC. The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.

OpenAI’s Superalignment team, announced in 2023, has been working to achieve “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

AI cyber security survey – main report

Gov UK

The UK government’s recent survey on AI cybersecurity highlights that many businesses are cautious about adopting AI due to significant security concerns. Among the reasons for this hesitation, 6% of businesses specifically mentioned worries about data protection and confidentiality, underscoring the importance of robust cybersecurity measures. Despite the fact that 68% of businesses are using at least one AI technology, barriers such as a lack of understanding of AI’s applications and its perceived irrelevance to certain sectors persist, limiting broader adoption.

These findings point to the need for improved education and cybersecurity frameworks to address these concerns. By enhancing security measures and raising awareness about the practical applications of AI, businesses can be better equipped to safely and effectively integrate AI technologies into their operations​

SNOWFLAKE AI SECURITY FRAMEWORK

SnowFlake

Snowflake has developed a comprehensive AI Security Framework aimed at mitigating potential risks associated with AI systems. This framework emphasizes the importance of regular security audits, adversarial testing, and transparent model development. By identifying and addressing vulnerabilities, organizations can protect their AI systems from threats that could otherwise lead to severe consequences​​.

Snowflake’s approach includes a multilayered security architecture designed to protect data in transit and at rest, ensuring robust defense against both current and evolving threats. Their platform integrates monitoring, alerts, and governance controls to provide end-to-end data security. This holistic security model not only safeguards AI applications but also helps maintain data integrity and compliance, thereby fostering trust and reliability in AI deployments​.

Alexa, Siri, Google Assistant vulnerable to malicious commands, study reveals

VentureBeat, May 17, 2024

A recent study has revealed that popular voice assistants like Amazon Alexa, Google Assistant, and Apple Siri are susceptible to inaudible malicious commands. Researchers discovered that these AI systems could be manipulated by commands embedded in sounds that are imperceptible to humans, but clearly understood by the devices. These attacks, known as “adversarial audio attacks,” can trick the assistants into performing unintended actions, potentially leading to serious security breaches.

For instance, attackers could use these inaudible commands to control smart devices, make unauthorized calls, or even alter navigation routes in vehicles. The vulnerability arises from the broad range of frequencies that these assistants can process, which includes those beyond human hearing. Although manufacturers implement safety measures, such as voice recognition and limited functionality, the study emphasizes the urgent need for improved security protocols to prevent potential exploitation by malicious actors​.

LLMs’ Data-Control Path Insecurity

Schneier On Security, May 13, 2024

Bruce Schneier’s recent analysis highlights significant vulnerabilities in the data control paths of these models, which could be exploited if not properly secured. The vast amounts of data processed by LLMs present unique challenges, as mishandling user inputs and outputs can lead to the inadvertent exposure of sensitive information. This situation underscores the necessity for robust security protocols and continuous monitoring to protect against potential breaches and misuse.

To address these risks, Schneier advocates for the implementation of rigorous access controls, encryption, and regular audits. He emphasizes that developing secure frameworks for AI operations is essential. Moreover, collaboration between AI developers, cybersecurity experts, and policymakers is vital to keep pace with evolving threats and ensure the safe deployment of AI technologies. These measures are crucial in mitigating the security risks associated with the rapid advancement and integration of AI systems.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post