Trusted AI Blog

475 Results / Page 11 of 53

Background

todayApril 9, 2025

  • 49
close

Secure AI Weekly admin

Towards Secure AI Week 13 – Don’t Trust AI Blindly

Critical AI Security Guidelines v1.1 – Now Available SANS The SANS Institute has released the Critical AI Security Guidelines v1.0, offering a structured framework for protecting AI technologies across their lifecycle. The guidelines stress that securing AI is not just a technical issue but a strategic imperative—one that requires tight ...

todayApril 2, 2025

  • 54
close

Secure AI Weekly admin

Towards Secure AI Week 12 – New NIST AI Security Efforts

NIST AI 100-2 E2025. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations NIST, March, 2025 The National Institute of Standards and Technology (NIST) has released a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2 E2025). The report categorizes AML ...

todayApril 2, 2025

  • 48
close

MCP Security + MCP Security Digest ADMIN

MCP Security Digest — April 2025

MCP Security is a top concern for anyone building Agentic AI systems. The Model Context Protocol (MCP) connects tools, agents, and actions. It plays a role similar to TCP/IP—but for autonomous workflows. If MCP is compromised, the entire agent stack is at risk. Attackers can inject prompts, hijack tools, and ...

NIST FMF AI 100-2 2025

todayMarch 31, 2025

  • 351
close

Review + Adversarial ML admin

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025) In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...

todayMarch 18, 2025

  • 41
close

Secure AI Weekly admin

Towards Secure AI Week 10 – Lessons from Siri Delays

Apple may have delayed the Siri upgrade for fear of jailbreaks GSMArena, March 10, 2025 Apple’s decision to delay its planned AI enhancements for Siri highlights the growing security concerns surrounding artificial intelligence, particularly the risk of “jailbreaking” through prompt injections. These attacks involve manipulating AI models into performing unintended ...

todayMarch 11, 2025

  • 73
close

Secure AI Weekly admin

Towards Secure AI Week 9 – Exploiting AI Weaknesses

Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data GBHackers, March 7, 2025 Researchers from Palo Alto Networks’ Threat Research Center have discovered that 17 popular generative AI (GenAI) applications are vulnerable to jailbreaking techniques, allowing users to bypass safety protocols. By using both single-turn and multi-turn strategies, attackers ...