Publications


May 29, 2025


Review + LLM Security

ICIT Securing AI: Addressing the OWASP Top 10 for Large Language Model Applications — Top 10 Insights

The Institute for Critical Infrastructure Technology (ICIT) has published a new report that connects the OWASP-LLM Top 10 risks with real-world AI security practices. This is more than just a list of threats. It is a practical guide designed to help teams secure large language models (LLMs) in real-world systems. ...

May 22, 2025


Article + LLM Security

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? A Conversation with ChatGPT, Claude, Grok & DeepSeek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle prompt injection risks, we interviewed ChatGPT, Claude, Grok, and DeepSeek, asking each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...

May 20, 2025


Review + Agentic AI Security

Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — Top 10 Insights

Based on the Microsoft AI Red Team’s white paper “Taxonomy of Failure Modes in Agentic AI Systems.” Why CISOs, architects, and staff engineers must read Microsoft’s agentic AI failure mode taxonomy: agentic AI is moving from proof of concept to production faster than most security teams can update their threat models. In response, ...

May 14, 2025


Review + GenAI Security

ETSI TS 104 223: 10 Security Insights Every CISO Needs

As AI systems rapidly integrate into critical infrastructure and enterprise workflows, their attack surfaces are expanding just as quickly. Consequently, traditional cybersecurity controls are no longer sufficient. To address this growing risk, the new ETSI TS 104 223 V1.1.1 (2025-04) — Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for ...


March 31, 2025


Review + Adversarial ML

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025). In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...


February 18, 2025


Research + LLM Security

Grok 3 Jailbreak and AI Red Teaming

In this article, we demonstrate how Grok 3 responds to different hacking techniques, including jailbreaks and prompt-leaking attacks. Our initial study on AI red teaming different LLM models using various approaches focused on models released before the so-called “Reasoning Revolution”, ...