Towards Trusted AI Week 37 – Why AI TRiSM is Essential

Secure AI Weekly + Trusted AI Blog admin todaySeptember 14, 2023 120

Background
share close

Tackling Trust, Risk and Security in AI Models

Gartner, September 5, 2023

The surge of interest in generative AI technologies has led to a plethora of pilot projects, but what often falls by the wayside is a robust risk assessment. Organizations frequently don’t consider safety and security implications until their AI systems are already live, which is akin to closing the barn door after the horse has bolted. It is vital, therefore, to integrate an AI Trust, Risk, and Security Management (TRiSM) program at the outset. A well-structured TRiSM framework encompasses four crucial pillars: Explainability and Continuous Model Monitoring, Model Operations (ModelOps), AI Application Security, and Data Privacy. This proactive approach ensures that AI systems not only comply with existing regulations but also maintain data integrity, fairness, and reliability throughout their lifecycle.

There are six primary factors that underscore the critical need for implementing a TRiSM program in AI systems. Firstly, many stakeholders—be it managers, users, or consumers—often lack a deep understanding of AI, which creates an urgent need for clear and tailored explanations about how the AI models function, their strengths, weaknesses, and potential biases. Secondly, the democratization of AI tools like ChatGPT, while revolutionary, introduces new types of security risks that traditional controls can’t mitigate. Thirdly, the use of third-party AI tools can risk exposing confidential data, making it imperative to assess the security measures of external providers. These risks are further exacerbated by the need for ongoing monitoring and the evolving nature of regulatory landscapes, making a strong case for custom-developed TRiSM solutions that can be continuously updated.

Implementing an effective TRiSM program is not just about avoiding short-term pitfalls; it also lays the foundation for long-term success and resilience. Projections indicate that by 2026, organizations with a proactive approach to AI transparency, trust, and security will see a 50% improvement in user adoption, the realization of business goals, and stakeholder acceptance. Therefore, the argument for incorporating a comprehensive TRiSM framework from the get-go isn’t just compelling—it’s essential for any organization serious about leveraging AI technologies responsibly and successfully.

We have already failed to secure AI by doing what we did before – repeating the mistakes of the past

Venture in Security, September 6, 2023

Ensuring the safety and security of artificial intelligence (AI) and machine learning (ML) technologies has become a pressing issue, especially given the rapid adoption and integration of these tools into our digital and physical landscapes. This urgency is underlined by the growing number of companies specializing in AI and ML security. While the existence of these firms is promising, it also signals a collective failure in the tech industry to prioritize security from the get-go. This oversight mirrors past lapses in sectors like construction and early Internet development, where safety considerations were often retroactively addressed, leading to imperfect solutions and systemic vulnerabilities. In many cases, retrofitting is not only cost-prohibitive but also less effective in ensuring robustness.

A holistic approach to security is essential; it cannot simply be an afterthought or a box to tick. Many companies claim to prioritize security but often sacrifice it for speed-to-market and scalability. This short-term focus overlooks the enormous risks and long-term costs of insecure infrastructure. The parallel to this is found in the construction industry, where new building codes have been implemented for earthquake-resistant structures, but the retrofitting of old buildings remains a difficult and often neglected task. Like in construction, “security by design” in AI and ML is challenging but crucial. It requires more than piecemeal efforts; it needs to be integrated into the fabric of the technology itself.

The concept of “security by design” may slow down the time-to-market and the pace of innovation, but it’s a necessary trade-off. We’ve seen similar delays accepted in other industries like construction, where safety regulations have ultimately saved lives despite initially slowing down progress. As we stand on the threshold of a new era dominated by AI and ML technologies, it’s vital that we apply the lessons we’ve learned from previous technological advancements. Ignoring the integral role of security in the early stages of development will only set us up for a future where the foundational elements of these transformative technologies are inherently flawed and potentially dangerous.

What will it take to secure AI models? Breaking them.

The Hill, September 8, 2023

The demand for secure artificial intelligence (AI) is at an all-time high, and it’s easy to understand why. With legislative hearings focusing on AI oversight, large-scale hacker attacks on language models, and the emergence of malicious AI chatbots on the dark web—all within just a few months—it’s clear that AI has caught everyone’s attention. However, this intense scrutiny doesn’t necessarily signal an impending crisis; rather, it positions us to better secure AI systems, armed with the insights gained from understanding their vulnerabilities.

Skepticism around new technologies isn’t novel; it’s a natural byproduct of the unknown factors they introduce. Recall the initial reluctance organizations had toward adopting the internet; trust isn’t built overnight. This hesitancy is especially relevant now, as countries race to become AI superpowers. Such urgency has led to unprecedented collaborative efforts, involving everyone from researchers to policymakers, aimed at making AI security a priority rather than an add-on. This collective approach, more than ever, emphasizes that the task of securing AI carries significant implications. After all, AI touches various aspects of our daily lives—be it automotive technology, climate science, or food supply chain management—and any vulnerabilities could invite stringent regulatory measures.

The road to secure AI involves not just looking at the algorithms and data but scrutinizing the entire supporting infrastructure as well. Just as a building is more than its foundation and walls, AI security extends beyond the code and data it relies on. In cybersecurity, we often deconstruct systems to understand their weaknesses and come up with countermeasures. This tactic applies to AI as well; by “breaking” these models, we can assess and quantify different levels of risk. Beyond that, our broader IT ecosystems must also be robustly safeguarded, a challenge for which existing expertise and governance structures can be leveraged. Ultimately, the goal is a well-rounded understanding of AI, from its operational aspects to its integration into larger systems, to develop comprehensive security measures. The fixation on AI’s potential risks should not paralyze us but rather propel us toward creating a technology that is as secure as it is revolutionary.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post