Towards Secure AI Week 25 — AI Joins the Attack Chain But Industry Response Still Lags Behind
This week’s digest shows how quickly the threat landscape around LLMs is shifting. Researchers have found malware samples that embed prompt injection attacks directly in their payloads, marking the first real-world attempt to evade AI-powered analysis tools. Meanwhile, cybercriminals are offering jailbroken versions of Grok and Mixtral for phishing and malware ...