Publications

58 Results / Page 4 of 7

June 10, 2025


Article + Agentic AI Security

Agentic AI Red Teaming Interview: Can Autonomous Agents Handle Adversarial Testing? Conversation with ChatGPT, Claude, Grok & Deepseek

As AI systems shift from passive responders to autonomous agents capable of planning, tool use, and long-term memory, they introduce new security challenges that traditional red teaming methods fail to address. To explore the current state of Agentic AI Red Teaming, we interviewed four leading language models—ChatGPT, Claude, Grok, and ...

June 5, 2025


Review + Agentic AI Security

CSA’s Agentic AI Red Teaming Guide: 10 Quick Insights You Can’t Afford to Ignore

Introduction: Why Agentic AI Red Teaming Changes Everything

Agentic AI Red Teaming is no longer optional—it’s essential. As autonomous systems learn to reason, plan, and act on their own, they bring new security risks that traditional red teaming can’t catch. That’s why Adversa AI proudly contributed to the CSA’s Agentic ...

June 3, 2025


Article + MCP Security

MCP Security Issues and How to Fix Them

Why MCP Security Issues Are Growing — and Why You Should Care

The Model Context Protocol (MCP) is rapidly emerging as the backbone of autonomous agent communication—akin to what TCP/IP is for the internet. But with its rising adoption comes a growing wave of exploits. As researchers and attackers alike ...

May 29, 2025


Review + LLM Security

ICIT Securing AI: Addressing the OWASP Top 10 for Large Language Model Applications — TOP 10 insights

The Institute for Critical Infrastructure Technology (ICIT) has published a new report that connects the OWASP-LLM Top 10 risks with real-world AI security practices. This is more than just a list of threats. It is a practical guide designed to help teams secure large language models (LLMs) in real-world systems. ...

May 22, 2025


Article + LLM Security

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? Conversation with ChatGPT, Claude, Grok & Deepseek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle Prompt Injection risks, we interviewed ChatGPT, Claude, Grok, and Deepseek. We asked each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...

May 20, 2025


Review + Agentic AI Security

Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — TOP 10 Insights

Based on Microsoft AI Red Team’s white paper “Taxonomy of Failure Modes in Agentic AI Systems”.

Why CISOs, Architects & Staff Engineers Must Read Microsoft’s Agentic AI Failure Mode Taxonomy

Agentic AI is moving from proof-of-concept to production faster than most security teams can update their threat models. In response, ...

May 14, 2025


Review + GenAI Security

ETSI TS 104 223: 10 Security Insights Every CISO Needs

As AI systems rapidly integrate into critical infrastructure and enterprise workflows, their attack surfaces are expanding just as quickly. Consequently, traditional cybersecurity controls are no longer sufficient. To address this growing risk, the new ETSI TS 104 223 V1.1.1 (2025-04) — Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for ...