Towards Trusted AI Week 25 – Nvidia and WEF Updates and Strategies for Securing AI

Secure AI Weekly + Trusted AI Blog admin todayJune 21, 2023 130

Background
share close

AI Governance Alliance

World Economic Forum

In a groundbreaking move, the World Economic Forum has taken a significant step towards safeguarding the security and safety of artificial intelligence (AI) systems. The launch of the AI Governance Alliance brings together key stakeholders from various sectors, including industry leaders, governments, academic institutions, and civil society organizations. This pioneering multi-stakeholder initiative is dedicated to promoting responsible global design and transparent deployment of AI systems that are inclusive and accountable.

The AI Governance Alliance aims to shape the future of AI governance by fostering innovation and ensuring that the immense potential of AI is harnessed for the betterment of society. It recognizes the importance of upholding ethical considerations and inclusivity at every stage of AI development and implementation. By joining forces, these diverse stakeholders are committed to championing responsible practices that prioritize the security, safety, and ethical implications of AI technologies.

With the launch of the AI Governance Alliance, the World Economic Forum demonstrates its dedication to addressing the evolving challenges and risks associated with AI. By creating a collaborative platform for dialogue and cooperation, this initiative will help establish global standards and guidelines that promote the responsible and beneficial use of AI, ultimately enhancing the security, safety, and trustworthiness of AI systems worldwide.

Google releases plan to protect you from AI threats

Mashable, June 8, 2023

With the growing adoption of generative AI by organizations, Google is emphasizing the critical importance of security. In pursuit of this goal, the tech giant recently unveiled the Secure AI Framework (SAIF), which serves as a guiding security roadmap. While the framework is still in its early stages, it aims to address security concerns in the realm of AI applications.

It is important to note that the SAIF primarily focuses on immediate security risks rather than delving into the existential AI perils that Elon Musk often discusses. The framework consists of six core elements that organizations should consider. The initial two elements involve expanding an organization’s existing security framework to incorporate AI threats. The third element emphasizes the integration of AI in defending against AI threats, drawing parallels to a potential AI arms race. The fourth element highlights the security benefits of uniformity in AI-related control frameworks. Lastly, elements five and six underscore the need for continuous inspection, evaluation, and robust testing of AI applications to ensure their resilience and minimize risks.

While Google emphasizes the importance of fundamental cybersecurity practices around AI, unique security challenges are already emerging in generative AI applications like ChatGPT. One such concern is the concept of “prompt injections,” a peculiar form of AI exploitation where malicious commands lie dormant within text blocks, altering the AI’s response when detected. This manipulation resembles concealing a sinister mind-control spell within text, creating an unsettling scenario. Prompt injections represent just one of the new threats that Google aims to mitigate through its framework, along with model theft, data poisoning, and the extraction of sensitive training data.

Although the SAIF framework initially caters to Google’s internal practices, its broader impact remains uncertain. The release of a framework has the potential to become an industry standard, akin to the National Institute of Standards and Technology’s (NIST) cybersecurity framework for protecting critical infrastructure. However, Google’s framework may not carry the same authority as that of the US government, raising questions about its reception among AI rivals like OpenAI. Nevertheless, Google’s proactive approach to AI security demonstrates its commitment to leading the way in the AI space and regaining credibility lost during earlier phases of the AI race.

NVIDIA AI Red Team: An Introduction

NVidia, June 14, 2023

Artificial Intelligence (AI) and machine learning have the potential to revolutionize our world, bringing about remarkable advancements. However, as with any transformative technology, there are inherent risks that must be addressed. With capabilities once limited to science fiction now becoming increasingly accessible, it is crucial to prioritize the responsible use and development of AI systems. This requires comprehensive categorization, assessment, and mitigation of risks, both from a pure AI standpoint and through the lens of information security.

To safeguard the security and safety of AI, organizations are employing red teams to proactively explore and identify immediate risks associated with these systems. NVIDIA’s AI red team philosophy offers valuable insights into this approach, emphasizing the collaboration between offensive security professionals and data scientists. By leveraging their combined expertise, these cross-functional teams can assess ML systems from an information security perspective, enabling the identification and mitigation of potential risks.

Implementing a robust assessment framework is essential to guide these efforts effectively. Such a framework should encompass the organization’s specific concerns, define assessment activities, tactics, and procedures, and clearly delineate the scope of ML systems under evaluation. Furthermore, it should provide stakeholders with a comprehensive overview of the security landscape of ML systems, setting realistic expectations for assessment outcomes and addressing potential risks. This framework serves as the basis for a functional ML security program, where red teaming plays a vital role alongside other security measures.

In conclusion, the security and safety of AI systems should be a top priority as the technology continues to evolve. By adopting a holistic approach that combines AI expertise with information security principles, organizations can identify, assess, and mitigate risks associated with ML systems effectively. Establishing comprehensive frameworks and methodologies will enable proactive measures to ensure the responsible use and development of AI, paving the way for a secure and trustworthy AI-powered future.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post