Trusted AI Blog

496 Results / Page 8 of 56


August 11, 2025


Secure AI Weekly

Towards Secure AI Week 31 — Gemini Smart Home Hijack, LLM Slopsquatting, GPT-5 Jailbreak, OWASP Landscape, and GenAI Data Exposure

From poisoned calendar invites that let attackers open smart shutters to hallucinated software packages seeding malware into supply chains, this week’s AI security stories highlight just how many doors are left open in generative and agentic systems. Research at Black Hat USA showed that even seemingly routine integrations — like ...

August 6, 2025


Industry Awards + Press Releases

Adversa AI Agentic AI Security and Red Teaming Platform Honored as Gold Stevie® Award Winner for AI Technology Breakthrough

Adversa AI has been named the only winner of a Gold Stevie® Award in the Technology Breakthrough of the Year – Artificial Intelligence category in the second annual Stevie Awards for Technology Excellence. The Stevie Awards for Technology Excellence recognize the remarkable achievements of individuals, teams, and organizations that are shaping ...

August 4, 2025


Secure AI Weekly

Towards Secure AI Week 30 — Amazon Q Breach, LegalPwn Prompt Injection, and IdentityMesh in Agentic AI

From compromised coding assistants to identity-collapsing agent chains, this week’s AI security incidents reveal just how fragile the foundations of generative and agentic systems remain. The Amazon Q supply chain breach showed how a single malicious prompt could wipe infrastructure at scale—if not for a lucky syntax error. Meanwhile, researchers ...

July 30, 2025


Press Releases

Adversa AI Unveils Explosive 2025 AI Security Incidents Report—Revealing How Generative and Agentic AI Are Already Under Attack

Adversa AI, a pioneer in AI Red Teaming and Agentic AI Security, has released a hard-hitting report: “Top AI Security Incidents – 2025 Edition.” It’s a forensic, front-line look at how AI systems—from helpful chatbots to autonomous agents—are already causing chaos in the wild. Forget academic theory. This is ...

July 28, 2025


Trusted AI Blog

Towards Secure AI Week 29 — America’s AI Action Plan, LLM Plugin Flaws, and Package Hallucination Risks

From insecure plugins to hallucinated packages, this week’s AI security landscape exposes the fragile trust surface of generative and agentic systems. New research reveals that leading LLMs frequently invent fake software dependencies, creating a dangerous supply chain vulnerability, while Google’s Gemini plugin fell prey to indirect prompt injection capable of ...