Explore the Top Agentic AI Resources to stay informed about the most pressing risks and defenses in the field.
As autonomous agents gain new capabilities—reasoning, memory, tool use—they also introduce unique security challenges. This collection covers the latest research, real-world exploits, and AI red teaming strategies exposing how Agentic AI systems can be manipulated or compromised. From indirect prompt injections to cross-agent coordination issues, and foundational risks like MCP Security, you’ll find insights and guidance to help secure next-gen AI architectures.
Agentic AI Security for CISO
Forrester Introduces AEGIS: The Security Framework CISOs Need for Agentic AI
Forrester unveils AEGIS, a comprehensive security framework designed to help CISOs navigate the unique challenges of securing agentic AI systems in enterprise environments. Read more
A CISO’s Guide to Agentic AI
A strategic guide for Chief Information Security Officers on understanding, implementing, and securing agentic AI technologies within their organizations. Read more
Agentic AI Vulnerability
AgentFlayer: ChatGPT Connectors 0click Attack
Research revealing a zero-click vulnerability in ChatGPT connectors that could potentially compromise agentic AI systems without user interaction. Read more
Shadow Injection and Adversarial Testing in Tool-Augmented Agents
An exploration of shadow injection techniques and adversarial testing methodologies specifically targeting tool-augmented AI agents. Read more
Agentic AI Attack
How Hidden Prompt Injections Can Hijack AI Code Assistants Like Cursor
Analysis of how malicious actors can use hidden prompt injections to compromise AI-powered code assistants, with specific focus on tools like Cursor. Read more
Prompt injection engineering for attackers: Exploiting GitHub Copilot
Technical deep dive into prompt injection techniques specifically targeting GitHub Copilot, demonstrating exploitation methods and attack vectors. Read more
GitHub Copilot RCE Vulnerability via Prompt Injection Enables Full System Compromise
Critical vulnerability discovery showing how prompt injection in GitHub Copilot can lead to remote code execution and complete system compromise. Read more
Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet
Investigation into indirect prompt injection vulnerabilities found in Perplexity’s Comet browser, highlighting risks in agentic browser implementations. Read more
Agentic AI Red Teaming
Agentic Red Teaming on HackAPrompt
Video presentation demonstrating red teaming techniques and methodologies specifically designed for testing agentic AI systems on the HackAPrompt platform. Read more
Agentic AI Security Research
“Scamlexity” We Put Agentic AI Browsers to the Test – They Clicked, They Paid, They Failed
Comprehensive research testing agentic AI browsers against scam scenarios, revealing critical security failures in automated decision-making. Read more
DIRF: A Framework for Digital Identity Protection and Clone Governance in Agentic AI Systems
Academic paper presenting DIRF, a novel framework for protecting digital identities and managing clone governance in autonomous AI systems. Read more
The Aegis Protocol: A Foundational Security Framework for Autonomous AI Agents
Research introducing the Aegis Protocol as a foundational security framework designed to protect and govern autonomous AI agents. Read more
When-Guardrails-Arent-Enough
Black Hat presentation examining scenarios where traditional AI guardrails fail to provide adequate security for agentic systems. Read more
Agentic AI Threat Model
Securing Agentic AI: Threat Modeling and Risk Analysis for Network Monitoring Agentic AI System
Comprehensive threat modeling and risk analysis focused on securing agentic AI systems used for network monitoring applications. Read more
13 Insanely Easy Techniques to Hack & Exploit Agentic AI Browsers
Practical guide detailing thirteen accessible exploitation techniques that can be used to compromise agentic AI browsers. Read more
Agentic AI Defense
Secure Your AI Agents from Prompt Injection Attacks Simple Defenses for Safer Outputs
Practical guide offering simple yet effective defense strategies to protect AI agents from prompt injection attacks. Read more
AI Agent Gateways: The New Security Boundary
Analysis of AI agent gateways as emerging critical security boundaries in modern agentic AI architectures. Read more
Agentic AI Security Training
The Agentic AI Security Playbook: OWASP & Real-World Defense Strategies
Video training covering OWASP guidelines and real-world defense strategies for securing agentic AI systems. Read more
Agentic AI Security Framework / Guide
State of Agentic AI Security and Governance 1.0
OWASP’s comprehensive report on the current state of agentic AI security and governance, providing baseline standards and recommendations. Read more
Secure Agentic System Design – A Trait-Based Approach
Cloud Security Alliance’s trait-based approach to designing secure agentic systems, offering architectural patterns and security considerations. Read more
Agentic AI Security 101
Attacking Agentic AI
Foundational guide covering basic attack vectors, methodologies, and security considerations for those new to agentic AI security. Read more
For more expert breakdowns, visit our Trusted AI Blog or follow us on LinkedIn to stay up to date with the latest in AI security. Be the first to learn about emerging risks, tools, and defense strategies.
Subscribe for updates
Stay up to date with what is happening! Plus, get a first look at news, noteworthy research, and the worst attacks on AI—delivered right to your inbox.