Towards Secure AI Week 18 — LLM Jailbreaks Hit New Highs, AI Security Market Accelerates
As LLMs become embedded across enterprise applications, new red-teaming research shows jailbreak success rates surpassing 87% on models such as GPT-4, even in safety-aligned settings. Techniques including multi-turn roleplay, token-level obfuscation, and cross-model attacks continue to outpace current safeguards. Meanwhile, insider misuse and unfiltered GenAI outputs pose growing risks, prompting calls ...