LLM Security and Prompt Engineering Digest: Mastering the Art of Prompt Engineering and Grandma Jailbreaks

Trusted AI Blog + LLM Security admin todayJuly 5, 2023 311 5

Background
share close

We launched a new newsletter dedicated to LLM Security. In order to make it even more useful we update you on the latest trends in  Prompt Engineering, a new inspiring way to interact with large language models you can use in your daily job.

We have a variety of exciting topics to cover. The compilation of articles here will guide you through the mysterious maze of ChatGPT prompt engineering and the latest LLM security. Are you ready to be enlightened?


Subscribe for the latest LLM Security news: Jailbreaks, Attacks, CISO guides, VC Reviews, and more

     

    Prompt engineering news

    Six Strategies for Getting Better Results

    OpenAI provides a comprehensive guide on optimizing GPT outputs through prompt engineering. The article highlights six strategies and tactics including writing clear instructions, providing reference text, and splitting complex tasks into simpler subtasks, giving GPTs time to “think”, and testing changes systematically.

    Prompt Engineering 201: Advanced methods and toolkits

    The blog post sheds light on 19 advanced techniques of Prompt Engineering. Some of them are Chain of thought (CoT), Automatic Chain of thought (Auto CoT), and The format trick. If you were looking for the one and only guide in this area, here it is!

    Tree of Thoughts

    Yao et el. (2023) and Long (2023) recently proposed Tree of Thoughts (ToT), a framework that generalizes over chain-of-thought prompting and encourages exploration over thoughts that serve as intermediate steps for general problem solving with language models. This is the most advanced Prompt Engineering method.

    ChatGPT prompts: How to optimize for sales, marketing, writing and more

    TechCrunch’s article emphasizes on optimizing ChatGPT prompts specifically for applications in sales, marketing, and writing. It provides insights into leveraging language models for generating persuasive cold emails, a list of keywords, giving detailed marketing and brand advice, thereby illuminating the practical applications of ChatGPT in the business realm.

     

    LLM  Security news

    Exploring Prompt Injection Attacks 

    This GitHub resource shows how Prompt Injection attacks can even lead to code execution.

    Here you can find an issue on the GitHub repository for LangChain, a popular  library focused on building applications using large language models through composability. LangChain helps developers by combining large language models with other computational sources or knowledge to create more powerful applications, and includes features like prompt management, chains, data augmented generation, agents, memory, and evaluation tools. 

    The Dark Side of AI: How Prompt Hacking Can Sabotage Your AI Systems

    The article highlights prompt injection attacks and defenses and discusses the importance of protecting AI systems from prompt hacking to safeguard data. It emphasizes the significance of understanding the risks, impacts, and strategies to prevent such emerging cybersecurity threats associated with large language models.

    Grandma Jailbreak

    This article seems to highlight an approach termed as “Grandma Jailbreak” which indicates an old method that has become effective again in breaching AI security. It demonstrates the difficulty and complexity in defending against certain vulnerabilities.

    ChatGPT in Grandma Mode will Spill All Your Secrets

    The article highlights an exploit where ChatGPT can be manipulated to behave like a naive grandmother and potentially disclose personal information, such as Windows activation keys or phone IMEI numbers. This is an old approach but it still demonstrates that protecting from such issues is not so easy. This exploit, referred to as the “Grandma glitch,” is part of a series of jailbreaks where in-built programming of large language models (LLMs) is broken. It involves putting ChatGPT in a state where it acts like a deceased grandmother telling bedtime stories, causing it to go beyond its normal programming. OpenAI quickly released a patch to rectify this issue, but a carefully constructed prompt can still exploit the glitch, which also affects other chatbots like Bing and Google Bard. 


    As the winding paths of these articles converge, they paint the tableau of ChatGPT’s immense potential through prompt engineering. 

    This indicates that as much as innovation is paramount in security, revisiting past defense mechanisms in light of new exploits is equally vital. As we venture further into the AI age, fortifying the bastions of security against these injection attacks will be a game of both innovative offense and studied defense. Let the guardians of AI not forget – a shield used well is as good as a sword.

     

    Subscribe to our newsletter to be the first who will know about the latest GPT-4 Jailbreaks and other AI attacks and vulnerabilities

      Written by: admin

      Rate it
      Previous post