

A humorous illustration showing ANS verifying AI agents with PKI while a spaghetti monster claims “I identify as a microservice”

May 26, 2025


Secure AI Weekly ADMIN

Towards Secure AI Week 20 — Identity, Jailbreaks, and the Future of Agentic AI Security

This week’s stories highlight the rapid emergence of new threats and defenses in the Agentic AI landscape. From OWASP’s DNS-inspired Agent Name Service (ANS) for verifying AI identities to real-world exploits like jailbreakable “dark LLMs” and prompt-injected assistants like GitLab Duo, the ecosystem is shifting toward identity-first architecture and layered ...

Cyberpunk-style illustration of AI systems like ChatGPT, Claude, Grok, and Deepseek facing prompt injection attacks

May 22, 2025


Articles

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? Conversation with ChatGPT, Claude, Grok & Deepseek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle prompt injection risks, we interviewed ChatGPT, Claude, Grok, and Deepseek, asking each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...

Abstract AI security background with glitch effects and shield symbol, representing trust and resilience in generative AI.

May 19, 2025


Secure AI Weekly

Towards Secure AI Week 19 — AI Agents Under Attack, Evaluation Becomes Strategy

This week’s stories highlight a critical evolution in AI risk: the shift from isolated agent failures to system-level compromise in Agentic AI architectures and memory-based applications. From Princeton’s demonstration of cryptocurrency theft via false memory injection to Fortnite’s AI Darth Vader being manipulated into swearing within an hour of launch, ...

Abstract AI security background with glitch effects and shield symbol, representing trust and resilience in generative AI.

May 14, 2025


Articles

ETSI TS 104 223: 10 Security Insights Every CISO Needs

As AI systems rapidly integrate into critical infrastructure and enterprise workflows, their attack surfaces are expanding just as quickly. Consequently, traditional cybersecurity controls are no longer sufficient. To address this growing risk, the new ETSI TS 104 223 V1.1.1 (2025-04) — Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for ...

Abstract AI security background with glitch effects and shield symbol, representing trust and resilience in generative AI.

May 12, 2025


Secure AI Weekly

Towards Secure AI Week 18 — LLM Jailbreaks Hit New Highs, AI Security Market Accelerates

As LLMs become embedded across enterprise applications, new red-teaming research shows jailbreak success rates surpassing 87% on models like GPT-4—even under safety-aligned settings. Techniques such as multi-turn roleplay, token-level obfuscation, and cross-model attacks continue to outpace current safeguards. Meanwhile, insider misuse and unfiltered GenAI outputs pose growing risks, prompting calls ...

A hooded hacker with a laptop launches a digital attack on a glowing AI figure, while warning signs and worried AI faces appear in the background

April 30, 2025


Articles

Agentic AI Security: Key Threats, Attacks, and Defenses

Artificial Intelligence has entered a new phase. No longer limited to generating text or analyzing data, AI systems can now take initiative. Meet Agentic AI—autonomous systems capable of making decisions, interacting with APIs, browsing the web, updating spreadsheets, sending emails, and executing code—which means we need an Agentic AI ...