Trusted AI Blog

475 Results / Page 9 of 53


June 9, 2025


Secure AI Weekly

Towards Secure AI Week 22 — Testing the Limits of Guardrails and Autonomy

AI systems aren’t just generating answers—they’re taking action, reasoning independently, and connecting to real-world systems. This week’s stories highlight how current defenses fail to address these expanded capabilities, revealing critical blind spots in identity management, cross-agent communication, and cloud-based safety infrastructure. From one-shot jailbreaks and latent-level exploits to insecure identity ...

June 5, 2025


Review + Agentic AI Security

CSA’s Agentic AI Red Teaming Guide: 10 Quick Insights You Can’t Afford to Ignore

Introduction: Why Agentic AI Red Teaming Changes Everything. Agentic AI Red Teaming is no longer optional—it’s essential. As autonomous systems learn to reason, plan, and act on their own, they bring new security risks that traditional red teaming can’t catch. That’s why Adversa AI proudly contributed to the CSA’s Agentic ...

June 4, 2025


Company Updates + Industry Awards

Adversa AI Agentic AI Red Teaming Platform Wins Leading Cybersecurity Solution in AI at Fortress Cybersecurity Awards

Adversa AI, the leading platform for continuous Red Teaming of Agentic AI Systems, GenAI Applications, and AI Models, proudly announces that it has been named a winner in the 2025 Fortress Cybersecurity Awards, presented by the Business Intelligence Group. The company was recognized as a leading Cybersecurity solution in the ...

June 3, 2025


Article + MCP Security

MCP Security Issues and How to Fix Them

Why MCP Security Issues Are Growing—and Why You Should Care. The Model Context Protocol (MCP) is rapidly emerging as the backbone of autonomous agent communication—akin to what TCP/IP is for the internet. But with its rising adoption comes a growing wave of exploits. As researchers and attackers alike ...

May 29, 2025


Review + LLM Security

ICIT Securing AI: Addressing the OWASP Top 10 for Large Language Model Applications — TOP 10 insights

The Institute for Critical Infrastructure Technology (ICIT) has published a new report that connects the OWASP-LLM Top 10 risks with real-world AI security practices. This is more than just a list of threats. It is a practical guide designed to help teams secure large language models (LLMs) in real-world systems. ...

May 26, 2025


Secure AI Weekly

Towards Secure AI Week 20 — Identity, Jailbreaks, and the Future of Agentic AI Security

This week’s stories highlight the rapid emergence of new threats and defenses in the Agentic AI landscape. From OWASP’s DNS-inspired Agent Name Service (ANS) for verifying AI identities to real-world exploits like jailbreakable “dark LLMs” and prompt-injected assistants like GitLab Duo, the ecosystem is shifting toward identity-first architecture and layered ...

May 22, 2025


Article + LLM Security

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? Conversation with ChatGPT, Claude, Grok & Deepseek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle Prompt Injection risks, we interviewed ChatGPT, Claude, Grok, and Deepseek. We asked each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...