Towards Secure AI Week 47 – New OWASP Top 10 for LLMs

Secure AI Weekly + Trusted AI Blog admin todayNovember 24, 2024 8

Background
share close

OWASP Reveals Updated 2025 Top 10 Risks for LLMs, Announces New LLM Project Sponsorship Program and Inaugural Sponsors

OWASP, November 17, 2024

The OWASP Foundation has unveiled a refreshed OWASP Top 10 for LLM Applications and Generative AI Project, emphasizing the need for robust security in the development, deployment, and management of large language models (LLMs) and generative AI. This update highlights critical risks, vulnerabilities, and mitigations to safeguard AI applications, including static prompts, agentic systems, and embedding-based methods like Retrieval-Augmented Generation (RAG). Key additions include addressing System Prompt Leakage, where assumedly secure prompts inadvertently expose sensitive data, and expanding Excessive Agency to cover the risks of autonomous AI systems with limited human oversight. Alongside these advancements, the OWASP sponsorship program invites organizations to contribute directly to research and education, ensuring the project’s continued role in securing AI technologies.

The project offers actionable resources to help organizations tackle challenges like Unbounded Consumption—which addresses resource management and scaling costs—and emerging vulnerabilities across the AI lifecycle. By fostering collaboration between developers, security experts, and industry leaders, the OWASP Top 10 aims to create a transparent and resilient AI ecosystem. Organizations can showcase their commitment to security while gaining valuable insights into evolving threats, ultimately strengthening trust in AI as it transforms industries worldwide.

Groundbreaking Framework for the Safe and Secure Deployment of AI in Critical Infrastructure Unveiled by Department of Homeland Security

Homeland Security, November 14, 2024

The Department of Homeland Security (DHS) has introduced the Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, a groundbreaking initiative to ensure the safe and secure adoption of AI in critical sectors like energy, transportation, and communications. This framework, developed collaboratively by public and private sector experts, identifies key responsibilities across the AI supply chain—from cloud providers and AI developers to critical infrastructure operators and civil society organizations. With input from the DHS-established Artificial Intelligence Safety and Security Board, the framework addresses vulnerabilities such as attacks on AI systems, misuse of AI technologies, and implementation flaws. It highlights the need for practices like strong access controls, privacy safeguards, and evaluations for bias and security risks, aiming to protect interconnected systems from failures or malicious exploitation.

If widely adopted, the framework could harmonize safety protocols, boost public trust, and improve resilience in critical services essential to daily life. It encourages developers to adopt secure design principles, operators to maintain rigorous cybersecurity practices, and governments to advance AI safety standards while fostering global cooperation. DHS Secretary Alejandro N. Mayorkas emphasized the transformative potential of AI, urging stakeholders to adopt these guidelines to safeguard essential services such as power, water, and internet access. By addressing current risks and guiding responsible innovation, the framework lays the foundation for AI to strengthen critical infrastructure while minimizing potential harms, ensuring a safer, more secure future for all.

Majority of firms using generative AI experience related security incidents – even as it empowers security teams

ITPro, November, 2024

According to the Capgemini Research Institute, 97% of organizations using generative AI have faced data breaches or related security concerns, with over half reporting losses exceeding $50 million. Key risks include data poisoning, sensitive information leaks, and vulnerabilities in custom AI solutions, compounded by employee misuse. Additionally, generative AI introduces dangers like deepfakes and biased or harmful content, with 43% of organizations experiencing financial losses as a result. To address these challenges, 62% of companies recognize the need for increased cybersecurity budgets and more robust risk management strategies.

Despite these risks, generative AI offers powerful tools for strengthening cybersecurity. It allows organizations to analyze large datasets, detect threats faster, and reduce remediation times. Over 60% of surveyed companies reported quicker threat detection after integrating AI into their Security Operations Centers (SOCs). Moreover, generative AI is seen as pivotal for proactive defense strategies, enabling analysts to focus on complex threats while improving overall resilience. However, to fully leverage these benefits, organizations must prioritize ethical AI implementation, establish strong data management frameworks, and invest in employee training to navigate the evolving security landscape effectively.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post