Trusted AI Blog

336 Results / Page 1 of 38

April 30, 2025

Articles · admin

Securing Agentic AI: Key Threats, Attacks, and Defenses

Artificial Intelligence has entered a new phase. No longer limited to generating text or analyzing data, AI systems can now take initiative. Meet Agentic AI—autonomous systems capable of making decisions, interacting with APIs, browsing the web, updating spreadsheets, sending emails, and executing code. This new breed of AI is rapidly ...

April 28, 2025

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 16 — Can Your AI Agents Really Coordinate Safely?

As generative AI adoption accelerates, so do the security challenges that come with it. New research shows that even advanced large language models (LLMs) can be jailbroken with evolving techniques, while multi-agent AI systems introduce fresh risks at the communication and coordination layers. Cybercriminals are also scaling attacks using GenAI ...

April 2, 2025

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 12 – New NIST AI Security Efforts

NIST AI 100-2 E2025: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST, March 2025). The National Institute of Standards and Technology (NIST) has released a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2 E2025). The report categorizes AML ...


March 31, 2025

Articles · admin

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025). In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...

March 23, 2025

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 11 – Combating Jailbreaking, Malware, and Exploits

3 tech advancements to be nervous about (Fast Company, March 17, 2025). One of the top three tech advancements to be nervous about today is the fact that jailbreaking robots is becoming increasingly possible. This practice involves manipulating AI-driven robots to bypass their built-in safety systems, often by exploiting vulnerabilities ...

March 18, 2025

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 10 – Lessons from Siri Delays

Apple may have delayed the Siri upgrade for fear of jailbreaks (GSMArena, March 10, 2025). Apple’s decision to delay its planned AI enhancements for Siri highlights the growing security concerns surrounding artificial intelligence, particularly the risk of “jailbreaking” through prompt injections. These attacks involve manipulating AI models into performing unintended ...