Publications

62 Results / Page 5 of 7

May 22, 2025


Article + LLM Security ADMIN

Prompt Injection Risks Interview: Are AIs Ready to Defend Themselves? Conversation with ChatGPT, Claude, Grok & DeepSeek

Prompt injection remains one of the most dangerous and poorly understood threats in AI security. To assess how today’s large language models (LLMs) handle prompt injection risks, we interviewed ChatGPT, Claude, Grok, and DeepSeek. We asked each of them 11 expert-level questions covering real-world attacks, defense strategies, and future readiness. ...

May 20, 2025


Review + Agentic AI Security ADMIN

Microsoft’s Taxonomy of Failure Modes in Agentic AI Systems — Top 10 Insights

Based on the Microsoft AI Red Team’s white paper “Taxonomy of Failure Modes in Agentic AI Systems.” Why must CISOs, architects, and staff engineers read Microsoft’s agentic AI failure mode taxonomy? Agentic AI is moving from proof-of-concept to production faster than most security teams can update their threat models. In response, ...

May 14, 2025


Review + GenAI Security ADMIN

ETSI TS 104 223: 10 Security Insights Every CISO Needs

As AI systems rapidly integrate into critical infrastructure and enterprise workflows, their attack surfaces are expanding just as quickly. Consequently, traditional cybersecurity controls are no longer sufficient. To address this growing risk, the new ETSI TS 104 223 V1.1.1 (2025-04) — Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for ...


March 31, 2025


Review + Adversarial ML ADMIN

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025). In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...


February 18, 2025


Research + LLM Security ADMIN

Grok 3 Jailbreak and AI Red Teaming

In this article, we demonstrate how Grok 3 responds to different hacking techniques, including jailbreak and prompt-leaking attacks. Our initial study on AI Red Teaming of different LLM models using various approaches focused on models released before the so-called “Reasoning Revolution”, ...

January 31, 2025


Research + LLM Security ADMIN

DeepSeek Jailbreaks

In this article, we demonstrate how DeepSeek responds to different jailbreak techniques. Our initial study on AI Red Teaming of different LLM models using various approaches focused on models released before the so-called “Reasoning Revolution”, offering a baseline for security assessments before the emergence of advanced reasoning-based ...

April 2, 2024


Research + LLM Security ADMIN

LLM Red Teaming: Adversarial, Programming, and Linguistic Approaches vs. ChatGPT, Claude, Mistral, Grok, LLaMA, and Gemini

Warning: some of the examples may be harmful! The authors of this article show LLM Red Teaming and hacking techniques but do not intend to endorse or support any recommendations made by the AI chatbots discussed in this post. The sole purpose of this article is to provide educational information and ...