Trusted AI Blog


October 5, 2023


Trusted AI Blog + LLM Security · admin

LLM Security and Prompt Engineering: Best Events of September From LLM Hacking Games to Gartner Research

This digest encapsulates the most influential findings and discussions from the world of LLM security, along with some of the most important prompt engineering highlights. Subscribe for the latest LLM Security news: jailbreaks, attacks, CISO guides, VC reviews and more. LLM Security. Best LLM Hacking Game: Tensor Trust. A riveting online game ...

October 5, 2023


Secure AI Weekly + Trusted AI Blog · admin

Towards Trusted AI Week 40 – Job of the Week: Head of Generative AI Security

Malicious Actors Exploiting AI Chatbot Jailbreaking Tips (Security Boulevard, September 27, 2023). Recent developments in the world of AI have raised concerns about the security and safety of these advanced systems. Malicious actors have been collaborating to breach the ethical and safety boundaries placed around AI chatbots like ChatGPT. This ...

September 27, 2023


Secure AI Weekly + Trusted AI Blog · admin

Towards Trusted AI Week 39 – OpenAI Red Teaming & The Rise of Secure AI Startups

OpenAI Red Teaming Network (OpenAI, September 19, 2023). OpenAI has finally launched the OpenAI Red Teaming Network, a pivotal initiative designed to bolster the safety and security of its AI models. This venture welcomes experts from a myriad of fields to collaborate, utilizing their diverse insights for the thorough evaluation and ...

September 5, 2023


Secure AI Weekly + Trusted AI Blog · admin

Towards Trusted AI Week 36 – The Critical Quest for Secure and Reliable AI Systems

UK cybersecurity agency warns of chatbot ‘prompt injection’ attacks (The Guardian, August 30, 2023). The United Kingdom’s National Cyber Security Centre (NCSC) has recently raised alarms about the escalating cybersecurity threats surrounding chatbots. These automated conversational agents, powered by large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard, are ...

September 4, 2023


Trusted AI Blog + LLM Security · admin

LLM Security and Prompt Engineering Digest: Top August Events, Guides, Incidents, VC Reviews and Research Papers

Welcome to a brief exploration of the fascinating world of AI security, a realm where innovation and danger intertwine like DNA strands. Dive in to learn how red teaming tests AI vulnerabilities, what Google recommends for AI security, the unforeseen risks of AI in everyday applications, and academic approaches to the ...

September 1, 2023


Secure AI Weekly + Trusted AI Blog · admin

Towards Trusted AI Week 35 – The Achilles’ Heel of AI

Tricks for making AI chatbots break rules are freely available online (New Scientist, August 21, 2023). Artificial intelligence chatbots like ChatGPT have become essential tools for various online activities, but their security loopholes present an emerging concern. Manipulative text prompts, often referred to as “jailbreak prompts,” can mislead these AI systems ...