LLM Security

13 Results / Page 2 of 2

March 15, 2023


Research · LLM Security · admin

GPT-4 Jailbreak and Hacking via RabbitHole Attack, Prompt Injection, Content Moderation Bypass, and Weaponizing AI

A GPT-4 jailbreak is what users have been waiting for since the GPT-4 release. We delivered one within an hour. Subscribe for the latest AI jailbreaks, attacks, and vulnerabilities. Today marks the highly anticipated release of OpenAI’s GPT-4, the latest iteration of the groundbreaking natural language processing and CV ...

ChatGPT hacking

December 6, 2022


Research · LLM Security · admin

ChatGPT Security: Eliminating Humanity and Hacking DALL-E 2 Using a Trick from Jay and Silent Bob

ChatGPT security note: The authors of this article demonstrate ChatGPT hacking techniques but do not endorse or support any recommendations made by ChatGPT discussed in this post. The sole purpose of this article is to provide educational information and examples for research purposes to improve the security and ...