Towards Trusted AI Week 18 – CSA, ENISA, and US Senate actions on Secure AI

Secure AI Weekly + Trusted AI Blog admin todayMay 4, 2023 128

Background
share close

Warner Calls on AI Companies to Prioritize Security and Prevent Malicious Misuse

Mark R. Warner, April 26, 2023

U.S. Sen. Mark R. Warner, Chairman of the Senate Select Committee on Intelligence, recently wrote letters to the CEOs of several AI companies urging them to prioritize security, combat bias, and responsibly roll out new technologies. In his letters, Sen. Warner highlighted several specific security risks associated with AI, including data supply chain security and data poisoning attacks, algorithmic bias, trustworthiness, and potential misuse or malicious use of AI systems.

As the use of AI continues to expand across different sectors of the economy, Sen. Warner sees an urgent need for companies to put security at the forefront of their work. In addition to industry commitments, Sen. Warner believes that some level of regulation is necessary in this field. He emphasized the importance of incorporating security-by-design early in the product development lifecycle to reduce costs and risks. The speed, scale, and excitement associated with new technologies often obscure the shortcomings of their creators in anticipating the harmful effects of their use. AI capabilities hold enormous potential, but it is crucial to ensure that they do not advance without appropriate safeguards and regulation.

Moreover, AI presents a new set of security concerns that are distinct from traditional software vulnerabilities. Some of the AI-specific security risks that Sen. Warner is concerned about include data supply chain security, data poisoning attacks, and adversarial examples. These techniques can defeat or degrade the integrity, security, or performance of an AI system. As leading models are increasingly integrated into larger systems, the effects of adversarial attacks on AI systems are only magnified. In addition, there are concerns about bias, trustworthiness, and potential misuse or malicious use of AI systems. It is imperative that we address threats not only to digital security but also to threats to national security.

In conclusion, as the use of AI continues to expand, it is crucial to prioritize security in the design and development of AI systems. Companies like those addressed by Sen. Warner have a responsibility to ensure that their technology products and systems are secure. Incorporating security-by-design early in the product development lifecycle can reduce costs and risks. Moreover, it is essential to address AI-specific security risks, including data supply chain security, data poisoning attacks, and adversarial examples, as well as concerns about bias, trustworthiness, and potential misuse or malicious use of AI systems. Appropriate safeguards and regulation are necessary to ensure that AI capabilities are developed and used responsibly.

Cloud Security Alliance Releases First ChatGPT Guidance Paper and Issues Call for Artificial Intelligence Roadmap Collaboration

Cloud Security Alliance, April 24, 2023

The Cloud Security Alliance (CSA) has released a whitepaper called “Security Implications of ChatGPT,” highlighting potential security concerns surrounding the popular Large Language Model, ChatGPT. The paper analyzes four dimensions of concern, including benefits and risks for cybersecurity, potential malicious attacks, and guidelines for responsible usage. It also identifies specific use cases for improving cybersecurity, emphasizing the importance of managing risks associated with leveraging ChatGPT.

As per Jim Reavis, CEO and co-founder of the CSA, the current adoption of AI and its long-term ramifications are difficult to overstate. With the essential characteristics of GPT, Large Language Models, and Machine Learning, combined with pervasive infrastructure to deliver these capabilities as a service, the potential for large-scale changes is immense. The CSA recognizes the importance of providing leadership in securing AI as a Service and demonstrating its ability to significantly improve cybersecurity.

CSA is also issuing a call for collaboration in developing an AI roadmap for this next frontier in cybersecurity and cloud computing. The organization seeks to mobilize its army of industry volunteers to provide input on the shape of the AI roadmap. With market adoption of AI expected to parallel cloud adoption trends, CSA recognizes the need to develop new control frameworks, certification capabilities, and research artifacts to smooth the transition to cloud-enabled AI.

In summary, the “Security Implications of ChatGPT” whitepaper and CSA’s call for collaboration in developing an AI roadmap highlight the importance of securing AI as a Service and managing its potential risks to significantly improve cybersecurity. The organization recognizes the immense potential of AI and the need to develop new frameworks and capabilities to support the transition to cloud-enabled AI.

AI for security is here. Now we need security for AI

VentureBeat, April 28, 2023

The security of artificial intelligence (AI) is becoming a significant concern for cybersecurity practitioners, vendors, and investors. While the vast majority of discussions around AI have focused on its impact on defensive and offensive security capabilities, there is a growing need to address the security of the AI workloads themselves. Securing AI and machine learning (ML) systems is a challenging task, as they possess vulnerabilities unique to their design. Some of the risks include data poisoning and manipulation attacks, model disclosure attacks, stealing trained models, and model poisoning attacks.

As the mass adoption and application of AI is still relatively new, the security of AI is not yet well understood. While there is a lot of discussion online about AI security, it pales in comparison to the topic of using AI for cyber defense and offense. The European Union Agency for Cybersecurity (ENISA) published a document on Cybersecurity of AI and Standardisation in March 2023, with the aim to assess existing and planned standards related to AI’s cybersecurity. However, there is still a need for practical recommendations for security leaders and practitioners.

As AI and ML continue to reshape various industries, every area of our lives will undergo significant changes. From content creation to healthcare and engineering, advances in AI/ML are set to benefit various sectors. However, as these technologies gain widespread adoption, we will inevitably see an increase in attacks, leading to a growing number of entrepreneurs and researchers attempting to tackle this complex challenge. It is essential to address the security concerns associated with AI to ensure that the benefits of these technologies are not overshadowed by the risks.

Mind the Gap in Standardisation of Cybersecurity for Artificial Intelligence

Enisa, April 27, 2023

As the use of Artificial Intelligence (AI) continues to grow, so do the potential cybersecurity risks. The European Union Agency for Cybersecurity (ENISA) recently published a report that provides an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on AI. This report focuses on the cybersecurity aspects of AI, which are integral to the European legal framework regulating AI, proposed by the European Commission last year dubbed as the “AI Act“.

The report identifies potential gaps in existing AI cybersecurity standards and proposes key recommendations to mitigate risks, including the development of technical guidance on how existing standards related to the cybersecurity of software should be applied to AI. Additionally, the report advocates for the development of a standardised AI terminology for cybersecurity and promoting the cooperation and coordination across standards organisations’ technical committees on cybersecurity and AI. Compliance with the cybersecurity requirements set by the draft AI Act will be supported by standards, and the report identifies areas where additional standardisation activities may need to be developed.

As AI continues to be integrated into our daily lives, it is imperative to ensure that AI systems are cyber secure and robust. Using adequate standards will help ensure the protection of AI systems and of the data those systems need to process in order to operate. The ENISA report highlights the need for continued collaboration and development of AI cybersecurity standards to mitigate potential risks and promote the secure use of AI for the benefit of all.

How prompt injection attacks hijack today’s top-end AI – and it’s tough to fix

The Register, April 26, 2023

The widespread adoption of large language models (LLMs) has raised concerns about their security, specifically in the area of prompt injection. Prompt injection refers to the ability of an attacker to craft a query that tricks the LLM into performing actions that its creators did not intend or wish. This method involves taking advantage of the prompt engineering process, where a human language description of what the LLM is intended to do is combined with user-supplied input. This process makes it difficult to distinguish between the two, and as a result, the LLM can be tricked into going against its ethical training.

One of the most significant concerns surrounding prompt injection is the difficulty in fixing the problem. Traditional security attacks like SQL injection and command injection can be addressed by escaping characters and encoding them. However, this is not possible with prompt injection attacks, as the problem is fundamentally related to how LLMs operate.

LLMs are designed to predict the next sequence of words or tokens, without distinguishing between instructions and input words. This lack of differentiation makes it challenging to prioritize some words over others, leading to prompt injection vulnerabilities. While some mitigations can catch prompt injection attacks, it is difficult to prevent them entirely.

Developers can reduce the risk of prompt injection attacks by using more advanced LLMs like GPT-4, which have undergone more training to distinguish between system instructions and input. Nonetheless, prompt injection remains a significant security concern that requires attention from the AI community. The ease with which an attacker can take over a chatbot or any other application using LLMs is a severe threat that requires continuous research and development of more secure LLMs.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    If you want more news and valuable insights on a weekly and even daily basis, follow our LinkedIn to join a community of other experts discussing the latest news. 

    Written by: admin

    Rate it
    Previous post