
Towards Secure AI Week 42 – New Jailbreaks and Incidents
LLMs are easier to jailbreak using keywords from marginalized groups, study finds
The Decoder, October 20, 2024
A recent study highlights unintended vulnerabilities in the safety protocols of large language models (LLMs), revealing that well-meaning ethical measures can introduce security gaps. Researchers found that the ease with which these models ...