Secure AI Weekly


August 25, 2025

Towards Secure AI Week 33 — Lenovo Chatbot Breach, PROMISQROUTE in GPT-5, NIST AI Security Overlays, EU AI Priorities, and Grok Privacy Leak

From Lenovo’s chatbot breach exposing session cookies to our own Adversa AI Research team uncovering PROMISQROUTE, a novel vulnerability in GPT-5’s routing mechanism, this week showed how fragile AI security architectures still are. The risks are not limited to technical flaws: regulators are struggling too, as NIST ...

August 18, 2025

Towards Secure AI Week 32 — NIST Control Overlays, OWASP Landscape, LLM Trustworthiness Scores, and GPT-5 Jailbreak

From GPT-5 jailbreaks leaking harmful instructions within hours of release to new benchmarks exposing systemic weaknesses in major models, this week highlighted how fragile LLM security remains. Despite new training methods, LLM jailbreak attacks such as context poisoning and obfuscation continue to bypass guardrails. As enterprises experiment with tool-using and multi-agent ...

August 11, 2025

Towards Secure AI Week 31 — Gemini Smart Home Hijack, LLM Slopsquatting, GPT-5 Jailbreak, OWASP Landscape, and GenAI Data Exposure

From poisoned calendar invites that let attackers open smart shutters to hallucinated software packages seeding malware into supply chains, this week’s AI security stories highlight just how many doors are left open in generative and agentic systems. Research at Black Hat USA showed that even seemingly routine integrations — like ...

August 4, 2025

Towards Secure AI Week 30 — Amazon Q Breach, LegalPwn Prompt Injection, and IdentityMesh in Agentic AI

From compromised coding assistants to identity-collapsing agent chains, this week’s AI security incidents reveal just how fragile the foundations of generative and agentic systems remain. The Amazon Q supply chain breach showed how a single malicious prompt could have wiped infrastructure at scale, were it not for a lucky syntax error. Meanwhile, researchers ...

July 14, 2025

Towards Secure AI Week 27 — McDonald’s AI Hiring Chatbot Incident Exposes SaaS Gaps as CSA Launches AI Security Standards

From fast food to frameworks, this week highlights the widening gap in AI security maturity. A massive breach at McDonald’s AI hiring platform shows how basic security oversights—like hardcoded credentials and IDOR flaws—can still devastate modern AI infrastructure. With over 64 million applicant records exposed via a third-party chatbot, the ...

July 7, 2025

Towards Secure AI Week 26 — Standardizing AI Defenses While MCP Misconfigurations Expose Core Infrastructure

AI systems are scaling fast — and so are the risks. This month’s digest highlights urgent developments shaping the future of GenAI security. From SANS and OWASP’s landmark partnership to define standard AI security controls, to Accenture’s warning that most enterprises lack foundational AI defenses, the message is clear: security ...

June 30, 2025

Towards Secure AI Week 25 — AI Joins the Attack Chain But Industry Response Still Lags Behind

This week’s digest shows how fast the threat landscape around LLMs is shifting. Researchers have now found malware samples embedding prompt injection attacks directly into their payloads—marking the first real-world attempt to evade AI-powered analysis tools. Meanwhile, cybercriminals are offering jailbroken versions of Grok and Mixtral for phishing and malware ...

June 23, 2025

Towards Secure AI Week 24 — From Hallucinated Help Desks to Hijacked LLMs: This Is the New AI Threat Surface

This week’s digest exposes how attackers exploit AI agents through prompt injection, jailbreak public APIs to revive malicious models, and compromise developer tools at the supply chain level. Multiple incidents—like the Asana data leak and the Atlassian exploit—stem from insecure use of the Model Context Protocol (MCP), a rising standard ...

June 16, 2025

Towards Secure AI Week 23 — From Zero-Click Exploits to Policy-Backed Guardrails: Where AI Security Stands Now

As AI systems transition from passive tools to autonomous agents, the risks surrounding them evolve just as fast. This week’s digest reveals how attackers are already exploiting agentic AI, how regulators are racing to keep up, and how industry is responding with new benchmarks and standards. From Microsoft’s EchoLeak zero-click ...