May 5, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 17 — AI Guardrails Under Pressure as Jailbreaking Techniques Advance

Enterprise use of generative AI is expanding, but so is the sophistication of attacks targeting these systems. New jailbreak methods are achieving nearly 100% success rates, even on well-aligned models like GPT-4 and Llama 3, while recent research exposes vulnerabilities in memory, prompt interpretation, and cross-tool coordination protocols like MCP. At ...

April 28, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 16 — Can Your AI Agents Really Coordinate Safely?

As generative AI adoption accelerates, so do the security challenges that come with it. New research shows that even advanced large language models (LLMs) can be jailbroken with evolving techniques, while multi-agent AI systems introduce fresh risks at the communication and coordination layers. Cybercriminals are also scaling attacks using GenAI ...

April 2, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 12 – New NIST AI Security Efforts

NIST AI 100-2 E2025. Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
NIST, March 2025

The National Institute of Standards and Technology (NIST) has released a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2 E2025). The report categorizes AML ...

March 31, 2025

Articles

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025)

In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...

March 23, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 11 – Combating Jailbreaking, Malware, and Exploits

3 tech advancements to be nervous about
Fast Company, March 17, 2025

One of the top three tech advancements to be nervous about today is that jailbreaking robots is becoming increasingly possible. This practice involves manipulating AI-driven robots to bypass their built-in safety systems, often by exploiting vulnerabilities ...

March 18, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 10 – Lessons from Siri Delays

Apple may have delayed the Siri upgrade for fear of jailbreaks
GSMArena, March 10, 2025

Apple’s decision to delay its planned AI enhancements for Siri highlights the growing security concerns surrounding artificial intelligence, particularly the risk of “jailbreaking” through prompt injections. These attacks involve manipulating AI models into performing unintended ...