
Towards Secure AI Week 17 — AI Guardrails Under Pressure as Jailbreaking Techniques Advance
Enterprise use of generative AI is expanding, but so is the sophistication of attacks targeting these systems. New jailbreak methods are achieving nearly 100% success rates, even against well-aligned models such as GPT-4 and Llama 3, while recent research exposes vulnerabilities in memory, prompt interpretation, and cross-tool coordination protocols such as MCP. At ...