Articles


June 10, 2025

Agentic AI Security + Articles

Agentic AI Red Teaming Interview: Can Autonomous Agents Handle Adversarial Testing? Conversation with ChatGPT, Claude, Grok & Deepseek

As AI systems shift from passive responders to autonomous agents capable of planning, tool use, and long-term memory, they introduce new security challenges that traditional red teaming methods fail to address. To explore the current state of Agentic AI Red Teaming, we interviewed four leading language models—ChatGPT, Claude, Grok, and ...

June 5, 2025

Agentic AI Security + Articles

CSA’s Agentic AI Red Teaming Guide: 10 Quick Insights You Can’t Afford to Ignore

Introduction: Why Agentic AI Red Teaming Changes Everything. Agentic AI Red Teaming is no longer optional—it’s essential. As autonomous systems learn to reason, plan, and act on their own, they bring new security risks that traditional red teaming can’t catch. That’s why Adversa AI proudly contributed to the CSA’s Agentic ...

May 29, 2025

Articles + LLM Security

ICIT Securing AI: Addressing the OWASP Top 10 for Large Language Model Applications — TOP 10 insights

The Institute for Critical Infrastructure Technology (ICIT) has published a new report that connects the OWASP-LLM Top 10 risks with real-world AI security practices. This is more than just a list of threats. It is a practical guide designed to help teams secure large language models (LLMs) in real-world systems. ...

May 22, 2025

Articles + LLM Security

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? Conversation with ChatGPT, Claude, Grok & Deepseek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle Prompt Injection risks, we interviewed ChatGPT, Claude, Grok, and Deepseek. We asked each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...

May 20, 2025

Agentic AI Security + Articles

Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — TOP 10 Insights

Based on the Microsoft AI Red Team’s white paper “Taxonomy of Failure Modes in Agentic AI Systems”. Why CISOs, Architects & Staff Engineers Must Read Microsoft’s Agentic AI Failure Mode Taxonomy: Agentic AI is moving from proof-of-concept to production faster than most security teams can update their threat models. In response, ...

May 14, 2025

Articles + GenAI Security

ETSI TS 104 223: 10 Security Insights Every CISO Needs

As AI systems rapidly integrate into critical infrastructure and enterprise workflows, their attack surfaces are expanding just as quickly. Consequently, traditional cybersecurity controls are no longer sufficient. To address this growing risk, the new ETSI TS 104 223 V1.1.1 (2025-04) — Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for ...