Adversa AI Wins “Most Innovative Agentic AI Security” at Global InfoSec Awards During RSA Conference 2026

Industry Awards admin todayMarch 31, 2026

Background
share close

Recognized among hundreds of vendors for advancing Continuous AI Red Teaming and Agentic AI security

SAN FRANCISCO, March 31, 2026 — Adversa AI announced today that it has been named “Most Innovative Agentic AI Security” at the Global InfoSec Awards during RSA Conference 2026, one of the cybersecurity industry’s most competitive and recognized award programs. The Global InfoSec Awards are judged by leading cybersecurity experts and recognize companies demonstrating innovation, effectiveness, and real-world impact. Adversa AI was selected among hundreds of cybersecurity vendors worldwide for its Agentic AI security platform, which helps organizations continuously test and secure GenAI applications, autonomous AI agents, and tool-connected AI workflows. It continuously stress-tests AI agents, GenAI applications, and MCP-based architectures to identify vulnerabilities before deployment. Developed by Adversa AI, which leads the CoSAI Agentic AI Security workstream and serves as a core member of OWASP AIVSS, the platform maps assessments directly to the frameworks its team helps author. The company’s AI security research has been covered by The Wall Street Journal, Wired, and TechCrunch, and the team is behind SecureClaw, one of the most widely adopted open-source security frameworks for AI agents.

Daniel Rubinstein receives Most Innovative Agentic AI Security award won by Adversa AI

Why CISOs care

Enterprises are deploying AI agents into production faster than security teams can evaluate them. Gartner projects that by 2028, more than 33% of enterprise applications will incorporate agentic AI, up from less than 1% in 2024, creating an attack surface where prompt injection, tool misuse, and unauthorized data access go undetected until exploitation.

“AI agents make autonomous decisions, call external tools, and chain actions across systems in ways traditional testing cannot reach. A true AI red teaming platform must think like an attacker — with the depth of adversarial research our team contributes to NIST and CSA standards. This recognition confirms the industry sees the urgency,”
said Alex Polyakov, co-founder and CTO of Adversa AI 

What sets Adversa AI platform apart

The platform enables organizations to continuously test AI systems against real attacker techniques, detect vulnerabilities such as prompt injection, goal hijacking, and tool misuse, validate agent behavior across multi-step workflows, and identify risks in AI integrations with APIs, tools, and external systems before production deployment. Reports align with the OWASP AI Vulnerability Scoring System. The platform extends automated red teaming to enterprise-scale environments in financial services, insurance, and government.

Learn more about the Adversa AI Red Teaming Platform.

Written by: admin

Rate it
Previous post