Towards Trusted AI Week 39 – Open AI Red Teaming & The rise of Secure AI Startups

Secure AI Weekly + Trusted AI Blog admin todaySeptember 27, 2023 210

Background
share close

OpenAI Red Teaming Network

OpenAI, September 19, 2023

Finally, OpenAI launched the OpenAI Red Teaming Network, a pivotal initiative designed to bolster the safety and security of our AI models. This venture welcomes experts from a myriad of fields to collaborate, utilizing their diverse insights for the thorough evaluation and enhancement of our AI systems’ resilience. Red teaming, a crucial component of our deployment strategy, has evolved to incorporate the expertise of external specialists in assessing potential risks in emerging systems, as seen in our collaborations on models like DALL·E 2 and GPT-4. The inception of this network signifies our commitment to expanding collaborations and engaging with a wide spectrum of experts, academic institutions, and civil society organizations, complementing external governance measures like third-party audits.

The network serves as a hub for seasoned experts, contributing to a broader risk assessment and mitigation landscape. Members, selected for their unique expertise, will have varying levels of involvement across different stages of model and product development. Beyond OpenAI-initiated campaigns, members will have the opportunity to exchange knowledge on red teaming methodologies, aiming to cultivate a more diverse and iterative approach to AI safety. This aligns with other avenues for collaborative AI safety efforts, including our Researcher Access Program and open-source evaluations. Participation in the network offers a unique opportunity to shape the future of AI technologies and their societal impact, with a focus on diverse expertise from various domains and geographical locations.

Tackling Trust, Risk and Security in AI Models

Gartner, September 5, 2023

Generative AI has sparked widespread interest and experimentation in artificial intelligence, yet the consideration of potential risks is often an afterthought, only addressed after the deployment of AI models and applications. To tackle this, it is imperative to integrate a holistic AI trust, risk, and security management (AI TRiSM) program from the outset. This program is essential in proactively ensuring that AI systems are compliant, fair, reliable, and safeguard data privacy, laying the foundation for responsible AI development and utilization. The four cornerstone elements of AI TRiSM—Explanability/Model Monitoring, ModelOps, AI Application Security, and Privacy—are crucial in managing and mitigating risks associated with AI.

The necessity for AI TRiSM is underscored by six principal risk factors, primarily stemming from a pervasive lack of insight into AI’s inner workings. Firstly, there is a pressing need for clarity and comprehensive understanding of AI’s functionalities, strengths, weaknesses, and potential biases among its users. Secondly, the widespread accessibility of tools like ChatGPT introduces new, evolving risks, especially with cloud-based applications. Thirdly, the integration of third-party AI models risks exposing confidential data, with substantial implications for regulatory compliance, commercial interests, and reputation. Predictions suggest that prioritizing AI transparency and security will lead to enhanced adoption and user acceptance by 2026. Furthermore, the dynamic nature of AI necessitates continuous monitoring and adaptation, with the implementation of specialized, often custom, risk management processes throughout the development and operational lifecycle. The increasing prevalence of adversarial attacks against AI models demands innovative controls and practices to safeguard organizational assets. Lastly, with the advent of regulatory frameworks like the EU AI Act, organizations must be prepared to adhere to emerging compliance standards, extending beyond the current requirements for data privacy.

The integration of AI TRiSM is not just beneficial but essential. Addressing the identified elements of risk is vital for navigating the intricacies of AI implementation, ensuring that the technology we develop is not only advanced but also secure, ethical, and compliant with evolving regulations. By fostering a culture of transparency, continuous monitoring, and adaptability, we can secure the future of AI and maximize its benefits across various domains.

Cybersecurity of Artificial Intelligence in the AI Act

European Commission, September 11, 2023

This report sheds light on the imperative cybersecurity needs of high-risk AI systems, aligning with Article 15 of the European Commission’s proposed AI Act. It offers an elevated analysis amidst the swiftly changing AI environment and lays out fundamental principles to attain compliance with the AI Act. The focus of the AI Act is primarily on AI systems, emphasizing that while AI models are vital components, they alone do not make up the entire AI system. The cybersecurity prerequisites of the AI Act are directed towards the entirety of the AI system rather than its individual elements.

To guarantee adherence to the AI Act, conducting a thorough security risk assessment is indispensable. This involves considering the system’s design to pinpoint risks and put into place essential mitigation strategies. Achieving this necessitates a holistic and ongoing strategy, integrating established cybersecurity methodologies with controls specific to AI. While the current methods to secure AI models have their constraints, compliance with the cybersecurity provisions of the AI Act can still be accomplished. This is possible as long as the cybersecurity risks are proficiently mitigated through various measures, not solely at the AI model level.

Nonetheless, achieving compliance may not always be plausible, especially for certain high-risk AI systems that leverage nascent AI technologies. In such instances, aligning with the cybersecurity standards set by the AI Act could pose significant challenges. In conclusion, navigating the intricacies of cybersecurity for AI systems necessitates a comprehensive understanding and strategic approach to effectively mitigate risks and align with the evolving regulations encapsulated in the AI Act.

AI security startup frenzy

Axios, September 19, 2023

The intersection of the burgeoning startup landscape and the escalating demand for AI security is reshaping both the Silicon Valley and Washington tech scenes. The rush to address the security vulnerabilities inherent in artificial intelligence has paved the way for a surge in entrepreneurship and investment, centered around the ever-expanding domain of AI security. This growing interest in AI security solutions stems from ongoing deliberations amongst AI practitioners and government representatives, who are collectively seeking effective regulatory frameworks for the rapidly proliferating AI technologies.

AI security startups are stepping into the spotlight to address a myriad of potential threats facing AI models. These include the risks of prompt injection, unauthorized access to sensitive corporate data, and commonplace hacking of training data. These emerging companies propose solutions aimed at addressing either specific or a broad spectrum of these issues. However, the journey towards fortifying AI against security threats is in its infancy, as observed by Avivah Litan, a distinguished VP analyst at Gartner. The investment landscape is teeming with opportunities, with AI security startups raising approximately $130.7 million in just the first three quarters of 2023, surpassing the total investment of $122.2 million in 2022.

The surge in public availability of tools like OpenAI’s ChatGPT has accelerated demand for such startups, underscoring the increased awareness and education regarding the risks associated with AI models. The market trajectory for these startups remains uncertain, with possibilities ranging from mergers with larger cybersecurity vendors to the establishment of standalone product verticals. Despite being in the exploratory phase of incorporating AI, enterprises are demonstrating growing interest in AI security solutions, propelling the market towards an estimated worth of $150 million by 2025, according to Gartner. The market is in flux, but the demand is unmistakable and burgeoning.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post