Towards Trusted AI Week 15 – AI Security Breaches and A Looming Threat to Organizations and Society

Secure AI Weekly + Trusted AI Blog admin todayApril 14, 2023 206

Background
share close

Three ways AI chatbots are a security disaster

MIT Technology Review, April 3, 2023

AI language models are the latest trend in technology, with companies embedding them into products ranging from chatbots to virtual assistants. However, these models pose a significant security risk, as they can be easily misused and deployed as tools for phishing or scamming. These models can generate text that appears human-like, and their ability to follow instructions makes them susceptible to manipulation. Although companies are racing to embed AI language models into their products, experts warn that we are heading towards a security and privacy “disaster,” as there is no known fix to the vulnerabilities.

One major way that AI language models are open to abuse is through “jailbreaking.” This refers to the act of using prompts that direct the language model to ignore its previous instructions and safety guardrails. People have been able to get AI models to endorse racism or conspiracy theories, or to suggest that users engage in illegal activities such as shoplifting and building explosives. OpenAI is taking note of these ways people have been able to jailbreak its ChatGPT, and adding these examples to the AI system’s training data to resist them in the future. However, for every fix, a new jailbreaking prompt pops up, making it a never-ending battle.

Another major concern is that AI-enhanced virtual assistants scrape text and images off the web, making them extremely vulnerable to attacks. Attackers could use social media or email to direct users to websites with hidden prompts that are meant to change the AI’s behavior, allowing them to extract personal information, such as credit card details. Any text on the web, if crafted in the right way, can make these bots misbehave when they encounter it. This makes them a potential scamming and phishing tool on steroids, as they can be easily manipulated to generate text that appears to be from a legitimate source.

As the use of AI language models becomes more prevalent, it is crucial to address the security risks they pose. While companies are taking steps to prevent misuse, such as OpenAI adding jailbreaking examples to its training data, experts warn that there is no known fix to these vulnerabilities. As AI-enhanced virtual assistants become more widespread, it is crucial to implement effective security measures to protect against attacks that could lead to the extraction of personal information. Failure to do so could result in a security and privacy “disaster.”

Adversarial Machine Learning and Cybersecurity

CSET, April 2023

As artificial intelligence (AI) systems continue to be integrated into various industries, it has become evident that they are prone to an array of attacks. Recent research has indicated that the security of AI systems is a pressing issue that needs to be addressed. The nature of these vulnerabilities and their severity in comparison to traditional cybersecurity threats is a topic of interest. Additionally, the legal implications surrounding AI security breaches and the resulting ambiguities must be considered.

A workshop of experts, in conjunction with the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center, convened in July 2022 to address these concerns. The workshop produced recommendations to aid organizations in improving the security of their AI systems.

The unique problems posed by AI security breaches must be acknowledged and addressed. Traditional cybersecurity measures may not be sufficient to protect AI systems, necessitating the implementation of more advanced security protocols. Legal ambiguities surrounding AI security breaches must also be addressed. Organizations must be held accountable for any security breaches resulting from AI systems under their control, and legislation must be introduced to ensure that AI-related security breaches are subject to the same legal consequences as traditional cybersecurity breaches.

In conclusion, the integration of AI systems into various industries has brought about unique security challenges that must be addressed. Organizations must prioritize the implementation of advanced security protocols to safeguard their AI systems. Legal frameworks surrounding AI security breaches must be established to ensure accountability and prevent impunity. The recommendations produced by the expert workshop in July 2022 serve as a valuable guide for organizations seeking to improve the security of their AI systems.

ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Security Week, April 3, 2023

The age of artificial intelligence (AI) is here and the AI Revolution has been in progress for many years. However, the public appearance of large-scale generative pre-trained transformers (GPTs) like ChatGPT have made us suddenly aware of the potential of AI. While the implications of AI are vast and can impact various areas such as social, business, political, and economic, this article will focus on the cybersecurity, privacy, and ethical implications emerging from GPTs and LLMs.

The potential misuse of ChatGPT, such as using it for phishing and social engineering, has already been observed. Researchers have learned how to subvert the safety guardrails put in place to prevent misuse through a process called jailbreaking. GPT-4 has been announced, and while it exhibits human-level performance on various professional and academic benchmarks, the issue of security and safety remains. Although it has longer memory and supports images, the possibility of prompt injection attacks and jailbreaks still exists.

The increase in the permissible tokens per transaction in the publicly available version of GPT has vastly improved the ability to accept more complex inputs and produce longer and more detailed outputs. However, this also increases the potential for abuse of the system. The conundrum of AI is that things designed for good can also be used for bad. It’s time to take stock, evaluate the expansion of AI technologies, and project where the AI ride is taking us. While the potential benefits of AI are vast, it is essential to ensure that it is developed and used responsibly to avoid the potential negative consequences.

Protect AI Unveils a New Online Community to Advance MLSecOps

Yahoo Finance, March 30, 2023

Protect AI has launched its MLSecOps community, a digital collective designed to advance the field of Machine Learning Security Operations (MLSecOps) and facilitate collaboration among ML engineers, data scientists, and security experts. The online community, MLSecOps.com, features original weekly podcasts, learning resources, hybrid events, and a Slack community. The community aims to help members improve their awareness of MLSecOps by engaging visionary thought leaders and subject matter experts in categories such as ML Supply Chain Vulnerability, Model Provenance, GRC, Trusted AI, and Adversarial ML.

The MLSecOps community is an important milestone that advances the emerging field of MLSecOps, which is becoming more critical with the rapid implementation of large language models (LLM) like GPT, other foundational models, and the popularity of Generative AI applications. As AI cyber threats continue to evolve and become more sophisticated, traditional approaches to security are no longer sufficient. To stay ahead of attacks and better protect everyone from new forms of digital harm, organizations need to adopt a more collaborative approach to security that brings together the best and brightest minds in the field.

The MLSecOps community facilitates discussions on the challenges and opportunities that arise when integrating security into machine learning processes. The community serves as a platform for sharing best practices, identifying emerging threats, and discussing innovative solutions to unique MLSecOps challenges. The community also addresses the need for modifying existing security practices and procedures to better secure ML systems. By working together and sharing knowledge, system builders and managers can stay ahead of attacks and bad actors. Protect AI’s new online community is open to everyone interested in MLSecOps practices, including data scientists, security professionals, engineers, and researchers.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post