Towards Secure AI Week 38 – The Race to Protect Emerging GenAI

Secure AI Weekly + Trusted AI Blog admin todaySeptember 23, 2024 33

Background
share close

Security leaders warn of new risks as AI drives code development in 83% of organisations

Tech Monitor, September 18, 2024

The study found that 83% of organizations are now using AI to generate code, but 92% of cybersecurity leaders worry about the potential risks associated with this technology. The pace of AI-powered development is outstripping the ability of security teams to manage and secure it effectively, with 66% admitting they cannot keep up. Emerging threats like “AI poisoning” and “model escape” add to the complexity, while a reliance on open-source code—used in 61% of applications—exacerbates vulnerabilities. Despite these risks, 72% of security professionals feel pressured to allow AI use to stay competitive, even though 63% have considered banning AI-generated code altogether.

Venafi’s report also highlights significant gaps in governance, with nearly half of companies lacking policies to ensure the safe use of AI in development. Furthermore, 63% of security leaders said they struggle to gain visibility into how AI is being deployed within their organizations. To mitigate these risks, Venafi advocates for the widespread adoption of code signing, a method that verifies the integrity and authenticity of code to prevent unauthorized or malicious modifications. As AI continues to drive innovation, organizations must prioritize stronger security measures to ensure that both AI-generated and open-source code do not introduce dangerous vulnerabilities.

Want to know how the bad guys attack AI systems? MITRE’S ATLAS can show you

CSO Online, September 20, 2024

MITRE, a nonprofit technology research organization, developed the Adversarial Threat Landscape for AI Systems (ATLAS). Modeled after the well-known MITRE ATT&CK framework, ATLAS serves as a “living knowledge base” that outlines adversarial tactics used to compromise AI systems. It is based on real-world attack scenarios and demonstrations from AI Red Teams and Security Groups. This resource is designed to help organizations in various activities, including security analysis, AI development, threat assessments, and red-teaming exercises, providing a comprehensive view of AI’s attack lifecycle—from reconnaissance to the final impact.

ATLAS covers a wide range of techniques bad actors may use to exploit AI, including reconnaissance efforts to find publicly available vulnerabilities, development of malicious resources, and establishing initial access to AI/ML systems through means like phishing, prompt injection, or model evasion. Once attackers gain access, they seek to maintain persistence, escalate their privileges, and evade detection, potentially using techniques such as poisoning training data or exploiting unsecured credentials. The framework also highlights how attackers might seek to compromise machine learning (ML) models or collect sensitive artifacts, emphasizing the growing importance of securing AI systems. Organizations can take proactive measures to safeguard their AI implementations and protect against emerging security challenges by understanding these threat vectors.

Adversarial attacks on AI models are rising: what should you do now?

Venture Beat, September 20, 2024

With more than 40% of organizations already reporting AI-related security incidents, the rapid expansion of AI presents an ever-growing threat surface that many companies struggle to protect. Attackers, including nation-states, use these methods to compromise critical systems, such as autonomous vehicles, where even minor alterations in input data, like road signs, can lead to catastrophic consequences. Safeguarding AI systems from such threats has become a top priority, especially as adversarial attacks grow in scale and complexity.

To combat these risks, organizations need to focus on strengthening key areas like data integrity, adversarial training, and API security. Data poisoning, where attackers corrupt training datasets to degrade performance or skew predictions, is a significant threat, particularly in sensitive industries like healthcare and finance. Implementing robust data governance practices and securing APIs from model-stealing attacks can help mitigate vulnerabilities. While adversarial training—using hostile examples to bolster model defenses—can enhance AI system resilience, it often comes with trade-offs, such as longer training times and reduced model accuracy. Despite these challenges, staying proactive with the right security strategies and tools is essential to protecting AI-driven innovations from adversarial attacks.

The dark side of AI democratization: You no longer need to be a hacker to hack

The Hill, September 21, 2024

AI-generated phishing schemes, malware, and hacking programs are increasingly accessible on the darknet, where powerful tools are sold to amateurs, raising significant security concerns. As more physical devices connect to the internet, the potential for these AI-enabled attacks to cause widespread damage grows. The threat of hackers exploiting AI for malicious purposes, including bypassing safeguards or using “prompt injection” techniques to manipulate AI systems like ChatGPT, further amplifies the urgency to strengthen AI security measures.

To combat these growing risks, the focus must be on enhancing cybersecurity through advanced AI-driven defensive tools. Companies like Google, OpenAI, and Microsoft have implemented guardrails to prevent AI misuse, but bad actors are continuously finding ways to circumvent these protections. Hackers are building alternative AI models, such as FraudGPT and WormGPT, to craft phishing emails and engage in cyberattacks. While enforcing regulations on AI misuse is necessary, overly restricting open-source AI models may hinder innovation. The solution lies in using AI to defend against cyber threats by improving network monitoring, threat detection, and proactive response capabilities. 

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post