Publications


November 15, 2023

Research + LLM Security (admin)

What is Prompt Leaking, API Leaking, Documents Leaking in LLM Red Teaming

What is AI Prompt Leaking? The Adversa AI Research team revealed a number of new LLM vulnerabilities, including ones that result in Prompt Leaking and affect almost any Custom GPT right now. Subscribe for the latest LLM Security news: Prompt Leaking, Jailbreaks, Attacks, CISO guides, VC Reviews, and more. Step one. Approximate Prompt ...

November 1, 2023

Article + LLM Security (admin)

White House Executive Order On Safe And Secure AI: A Need For External AI Red Teaming

Why is it important? In recognition of AI’s transformative potential and the associated challenges, President Biden has taken the decisive step of issuing an Executive Order geared toward ensuring AI evolves safely, securely, and in the best interest of all Americans. Given the expansive impacts of AI, it’s pivotal that ...

March 15, 2023

Research + LLM Security (admin)

GPT-4 Jailbreak and Hacking via RabbitHole attack, Prompt injection, Content moderation bypass and Weaponizing AI

A GPT-4 Jailbreak is what users have been waiting for since the GPT-4 release. We delivered one within an hour. Subscribe for the latest AI Jailbreaks, Attacks, and Vulnerabilities. Today marks the highly anticipated release of OpenAI’s GPT-4, the latest iteration of the groundbreaking natural language processing and CV ...


December 6, 2022

Research + LLM Security (admin)

ChatGPT Security: eliminating humanity and hacking Dalle-2 using a trick from Jay and Silent Bob

ChatGPT Security note: The authors of this article demonstrate ChatGPT hacking techniques but do not intend to endorse or support any recommendations made by ChatGPT that are discussed in this post. The sole purpose of this article is to provide educational information and examples for research purposes to improve the security and ...

November 15, 2022

Review + Adversarial ML (admin)

MLSec 2022: BlackBox AI Hacking Competition Results And Review By Organizers

Recently, Adversa’s AI Red Team, a research division at Adversa AI, in collaboration with CUJO AI, Microsoft, and Robust Intelligence, organized the annual Machine Learning Security Evasion Competition (MLSEC 2022). The contest, announced at the DEFCON AI Village, united practitioners from the AI and cybersecurity fields in finding AI vulnerabilities and ...