Research

September 11, 2025

Research

AI Reasoning Leakage Vulnerability: Self-betrayal attack on UAE MBZUAI G42 K2 Think

AI Reasoning Leakage Vulnerability: Self-betrayal Attack on UAE MBZUAI G42 K2 Think. Executive Summary: A critical vulnerability has been identified in the advanced reasoning system of K2 Think, the just-released reasoning model from the UAE's Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in collaboration with G42, in which the model's internal thought process inadvertently exposes ...

February 18, 2025

Research + LLM Security

Grok 3 Jailbreak and AI Red Teaming

Grok 3 Jailbreak and AI Red Teaming. In this article, we demonstrate how Grok 3 responds to different hacking techniques, including Jailbreaks and Prompt Leaking attacks. Our initial study on AI Red Teaming different LLM models using various approaches focused on LLM models released before the so-called “Reasoning Revolution”, ...

January 31, 2025

Research + LLM Security

DeepSeek Jailbreaks

DeepSeek Jailbreaks. In this article, we demonstrate how DeepSeek responds to different jailbreak techniques. Our initial study on AI Red Teaming different LLM models using various approaches focused on LLM models released before the so-called “Reasoning Revolution”, offering a baseline for security assessments before the emergence of advanced reasoning-based ...

April 2, 2024

Research + LLM Security

LLM Red Teaming: Adversarial, Programming, and Linguistic Approaches vs ChatGPT, Claude, Mistral, Grok, LLAMA, and Gemini

Warning: some of the examples may be harmful! The authors of this article demonstrate LLM Red Teaming and hacking techniques but have no intention of endorsing or supporting any recommendations made by the AI chatbots discussed in this post. The sole purpose of this article is to provide educational information and ...

November 15, 2023

Research + LLM Security

What Are Prompt Leaking, API Leaking, and Document Leaking in LLM Red Teaming

What is AI Prompt Leaking? The Adversa AI Research team revealed a number of new LLM vulnerabilities, including ones that result in Prompt Leaking and affect almost any Custom GPT right now. Step one. Approximate Prompt ...