Digests

April 9, 2025

Secure AI Weekly + Digests admin

Towards Secure AI Week 13 – Don’t Trust AI Blindly

Critical AI Security Guidelines v1.1 – Now Available SANS The SANS Institute has released the Critical AI Security Guidelines v1.1, offering a structured framework for protecting AI technologies across their lifecycle. The guidelines stress that securing AI is not just a technical issue but a strategic imperative, one that requires tight ...

April 2, 2025

MCP Security + Digests admin

MCP Security Digest — April 2025

MCP Security is a top concern for anyone building Agentic AI systems. The Model Context Protocol (MCP) connects tools, agents, and actions. It plays a role similar to TCP/IP—but for autonomous workflows. If MCP is compromised, the entire agent stack is at risk. Attackers can inject prompts, hijack tools, and ...

March 23, 2025

Secure AI Weekly + Digests admin

Towards Secure AI Week 11 – Combating Jailbreaking, Malware, and Exploits

3 tech advancements to be nervous about Fast Company, March 17, 2025 One of the top three tech advancements to be nervous about today is the fact that jailbreaking robots is becoming increasingly possible. This practice involves manipulating AI-driven robots to bypass their built-in safety systems, often by exploiting vulnerabilities ...

March 18, 2025

Secure AI Weekly + Digests admin

Towards Secure AI Week 10 – Lessons from Siri Delays

Apple may have delayed the Siri upgrade for fear of jailbreaks GSMArena, March 10, 2025 Apple’s decision to delay its planned AI enhancements for Siri highlights the growing security concerns surrounding artificial intelligence, particularly the risk of “jailbreaking” through prompt injections. These attacks involve manipulating AI models into performing unintended ...

March 11, 2025

Secure AI Weekly + Digests admin

Towards Secure AI Week 9 – Exploiting AI Weaknesses

Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data GBHackers, March 7, 2025 Researchers from Palo Alto Networks’ Threat Research Center have discovered that 17 popular generative AI (GenAI) applications are vulnerable to jailbreaking techniques, allowing users to bypass safety protocols. By using both single-turn and multi-turn strategies, attackers ...

March 4, 2025

Secure AI Weekly + Digests admin

Towards Secure AI Week 8 – Massive AI security breach

MITRE Releases OCCULT Framework to Address AI Security Challenges GBHackers, February 26, 2025 MITRE has launched the Offensive Cyber Capability Unified LLM Testing (OCCULT) framework, a structured approach designed to assess the potential security risks of large language models (LLMs) in cyberattacks. This initiative focuses on evaluating AI systems’ ability ...