Agentic AI Security Resources – December 2025
As AI agents become increasingly autonomous—browsing the web, executing code, and making decisions with minimal human oversight—the security landscape is rapidly evolving. Agentic AI introduces new attack surfaces, from prompt injection vulnerabilities to over-privileged tool access. This digest compiles the most critical resources from December 2025 to help security professionals stay ahead of emerging threats and build robust defenses for autonomous AI systems.
Statistics
Total Resources: 19
Breakdown by Category:
- Agentic AI Security Framework / Guide: 3 (15.8%)
- Agentic AI Security Research: 3 (15.8%)
- Agentic AI Vulnerability: 3 (15.8%)
- Agentic AI Defense: 2 (10.5%)
- Agentic AI Threat Model: 2 (10.5%)
- Agentic AI Attack: 1 (5.3%)
- Agentic AI Incident: 1 (5.3%)
- Agentic AI RedTeaming: 1 (5.3%)
- Agentic AI Security Training: 1 (5.3%)
- Agentic AI Security 101: 1 (5.3%)
- A CISO’s Guide to Agentic AI: 1 (5.3%)
Content
Agentic AI Security Framework / Guide
CyberArk explores how identity management serves as the cornerstone for securing autonomous AI systems. The white paper outlines strategies for implementing identity-based controls in agentic AI deployments.
This research paper proposes a comprehensive framework addressing both safety and security concerns in production agentic AI deployments. It provides practical guidelines for organizations implementing autonomous AI agents.
A LinkedIn post sharing initial thoughts on building a security framework for agentic AI, including considerations around machine unlearning. Offers a practitioner’s perspective on emerging security challenges.
Agentic AI Security Research
Research into prompt injection attacks targeting AI browser agents, with proposed defensive mechanisms. The paper analyzes how malicious web content can manipulate AI agents during browsing tasks.
A comparative evaluation of security properties across different agentic AI communication protocols. Identifies vulnerabilities in how AI agents exchange information and coordinate actions.
Examines how synthetic data can be used to optimize attacks against AI agents. Provides insights into adversarial techniques and implications for red team testing.
Agentic AI Vulnerability
Lakera’s analysis of critical vulnerabilities stemming from excessive tool permissions and unrestricted web browsing capabilities in AI agents. Part of a series exploring the agentic AI threat landscape.
NVIDIA’s developer blog explores the security risks introduced when AI agents have code execution capabilities. Highlights attack vectors and mitigation strategies for sandboxing execution environments.
Research demonstrating targeted data poisoning attacks against AI-powered fact-checking systems. Shows how adversaries can manipulate agent behavior through carefully crafted training data contamination.
Agentic AI Defense
AWS introduces a comprehensive matrix for scoping security requirements of autonomous AI systems. Provides a structured approach to identifying and prioritizing security controls.
A systematic evaluation of defense frameworks focused on Indirect Prompt Injection (IPI) attacks against LLM agents. Includes taxonomy of defenses and exploitation techniques to test their effectiveness.
Agentic AI Threat Model
SysAid examines the specific risks introduced by AI-powered browser agents and proposes governance rules. Addresses concerns around data leakage, unauthorized actions, and compliance.
An overview of the most significant security threats in the agentic AI space. Covers prompt injection, tool abuse, data exfiltration, and other attack vectors practitioners should monitor.
Agentic AI Incident
Anthropic discloses details of a sophisticated espionage campaign that weaponized AI coding assistants. Documents the attack chain and defensive measures taken to disrupt the operation.
Agentic AI Attack
Official CVE entry in the National Vulnerability Database documenting a critical vulnerability in agentic AI systems. Includes severity rating, affected components, and remediation guidance.
Agentic AI RedTeaming
Introduces an automated red teaming agent specifically designed to test code-generating AI agents. Demonstrates automated discovery of vulnerabilities across multiple code agent architectures.
Agentic AI Security Training
Trail of Bits’ hands-on security training labs for learning to exploit insecure AI agents. Provides practical exercises covering common vulnerabilities and attack techniques.
Agentic AI Security 101
EPAM’s comprehensive introductory guide to agentic AI security fundamentals. Covers core concepts, common threats, and essential security practices for beginners.
A CISO’s Guide to Agentic AI
Proofpoint’s forward-looking analysis of how agentic AI will reshape the cybersecurity landscape. Offers strategic guidance for CISOs preparing their organizations for autonomous AI adoption.
Quick Outro
The agentic AI security landscape is evolving rapidly, with new vulnerabilities, attack techniques, and defensive frameworks emerging monthly. Whether you’re building AI agents, defending against AI-powered attacks, or developing security policies, staying informed is critical. Bookmark these resources, share with your security teams, and continue monitoring this space as autonomous AI capabilities expand.