Articles


March 20, 2023

Articles · admin

AI Red Teaming LLM for Safe and Secure AI: GPT4 Jailbreak ZOO

AI red teaming of LLM models is a critical step toward safe and secure AI. Let's look at the various methods for evaluating GPT-4 for jailbreaks. Since the release of GPT-4 and our first article on GPT-4 jailbreak methods, a slew of innovative techniques has emerged. Let's dive into these cutting-edge methods and explore ...

March 15, 2023

Articles · admin

GPT-4 Jailbreak and Hacking via RabbitHole attack, Prompt injection, Content moderation bypass and Weaponizing AI

A GPT-4 jailbreak is what all users have been waiting for since the GPT-4 release. We delivered one within an hour. Subscribe for the latest AI jailbreaks, attacks, and vulnerabilities. Today marks the highly anticipated release of OpenAI’s GPT-4, the latest iteration of the groundbreaking natural language processing and CV ...

November 15, 2022

Articles + Trusted AI Blog · admin

MLSec 2022: BlackBox AI Hacking Competition Results And Review By Organizers

Recently, Adversa’s AI Red Team, a research division at Adversa AI, in collaboration with CUJO AI, Microsoft, and Robust Intelligence, organized the annual Machine Learning Security Evasion Competition (MLSEC 2022). The contest, announced at the DEFCON AI Village, united practitioners in the AI and cybersecurity fields in finding AI vulnerabilities and ...

August 18, 2022

Articles · admin

Secure AI in the Military: How Mistakes Turn Deadly

The first country to fully harness artificial intelligence (AI) for military applications will be the one that leads the world in AI warfare – according to Jack Shanahan, at least. Shanahan is an Air Force Lt. Gen. and director of the United States’ Joint Artificial Intelligence Center, and he’s one ...