Towards Trusted AI Week 38 – The Cybersecurity Dilemmas of AI

Secure AI Weekly + Trusted AI Blog admin todaySeptember 21, 2023 44

Background
share close

Comply or Die: The Rise of the AI Governance Stack

Battery Ventures, September 13, 2023

While regulatory efforts are catching up, with the European Union leading the way and localized efforts in the United States filling the federal void, there is still much work to be done.

As the race to implement AI continues, companies must simultaneously prioritize establishing strong governance and safety measures. This is not just a requirement but a strategic imperative for long-term success. Technology stacks that manage AI governance and compliance are emerging to serve this need, bifurcating into two branches: one targeting technical roles responsible for creating the AI models, and another aimed at business and compliance roles focused on aligning these models with ethical and regulatory standards. These tools look at everything from data anonymization and secure model training to explainability and deployment safeguards.

Looking ahead, we anticipate the sector focused on AI governance and compliance tools to expand significantly. The stakes are high: companies face not just regulatory penalties but also the erosion of consumer trust and potential brand damage. As AI technologies become even more ingrained in business and society, ensuring they are developed and deployed responsibly will be paramount. Companies will increasingly seek out solutions that make their AI systems not just powerful and efficient, but also transparent, ethical, and secure.

AI Models Under Attack: Protecting Your Business From AI Cyberthreats

Forbes, September 13, 2023

Malicious actors are developing increasingly sophisticated methods to exploit vulnerabilities in AI systems, and the urgency for robust cybersecurity measures has never been greater.

A recent framework, known as ATLAS (Adversarial Threat Landscape for AI Systems), outlines the multitude of ways in which AI systems can be compromised. Among the most alarming are data poisoning attacks that manipulate AI training data, evasion attacks that exploit weaknesses in AI algorithms, and the outright theft of AI models. Additionally, the AI supply chain itself poses a risk, as compromised elements within it can lead to a security breach. Businesses are starting to feel the heat, with roughly 20% reporting attacks on their AI models over the past year. To counteract these vulnerabilities, organizations should employ a suite of protective measures, including but not limited to, adversarial training, secure data storage solutions, and multi-tiered threat detection systems.

To build a resilient defense against potential attacks, securing AI systems requires a multifaceted approach. Beyond just technical measures like encryption and secure coding, there is a critical need for educating employees about the unique risks associated with AI. Continuous monitoring and auditing of AI systems are equally essential for identifying and mitigating instances of abuse or manipulation. In essence, the security of AI should be an ongoing, proactive endeavor to ensure both the functionality and integrity of these groundbreaking technologies. As AI continues to transform industries and become more integrated into our daily lives, its security should be a priority, not an afterthought.

ChatGPT Jailbreaking Forums Proliferate in Dark Web Communities

DarkReading, September 12, 2023

As artificial intelligence (AI) technology evolves, so do the cybercriminals looking to exploit it. The hacking community is increasingly focusing on manipulating AI chatbots like ChatGPT to serve their illicit objectives. This new wave of activity often involves finding ways to bypass the ethical guidelines programmed into these AIs, also known as “jailbreaking.” Underground online forums are abuzz with activity as hackers try out different queries or “prompts” to make AI systems, particularly large language models (LLMs), deviate from their intended functions.

Prompt manipulation is a prevalent tactic in these forums. Hackers are in a constant cycle of altering and rephrasing their queries to see how far they can push the chatbots to break their safety rules—be it for generating malware or other malevolent tasks. These online communities act as hubs for sharing techniques for undermining AI functions. However, the challenge isn’t just to manipulate existing systems like ChatGPT; it’s also about programming new, custom LLMs specifically for malicious use. WormGPT is a stark example of this worrying trend. Marketed as an unrestricted version of GPT models, it is aimed at nefarious activities, such as business email compromise (BEC), phishing, and malware attacks.

According to cybersecurity experts, while these malicious applications are still in their infancy, they signal an inevitable evolution in cyber threats. Traditional defensive training may not be sufficient to counter these highly-targeted, sophisticated attacks. The consensus among cybersecurity professionals is that combating AI-assisted threats necessitates the deployment of AI-powered security measures. If defensive strategies don’t keep pace with these rapidly evolving threats, organizations may find themselves vulnerable to a new frontier of cybercrime, armed with AI-driven capabilities.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post