
377 Results / Page 1 of 42

April 2, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 12 – New NIST AI Security Efforts

NIST AI 100-2 E2025, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST, March 2025). The National Institute of Standards and Technology (NIST) has released a report titled “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations” (NIST AI 100-2 E2025). The report categorizes AML ...


March 31, 2025

Articles

NIST AI 100-2 E2025 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST’s New AML Taxonomy: Key Changes in AI Security Guidelines (2023 vs. 2025). In an ever-evolving landscape of AI threats and vulnerabilities, staying ahead means staying updated. The National Institute of Standards and Technology (NIST) recently published a crucial update to its cornerstone document, “Adversarial Machine Learning: A Taxonomy and ...

March 23, 2025

Trusted AI Blog + Secure AI Weekly

Towards Secure AI Week 11 – Combating Jailbreaking, Malware, and Exploits

3 tech advancements to be nervous about (Fast Company, March 17, 2025). One of the top three tech advancements to be nervous about today is that jailbreaking robots is becoming increasingly possible. This practice involves manipulating AI-driven robots to bypass their built-in safety systems, often by exploiting vulnerabilities ...

March 18, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 10 – Lessons from Siri Delays

Apple may have delayed the Siri upgrade for fear of jailbreaks (GSMArena, March 10, 2025). Apple’s decision to delay its planned AI enhancements for Siri highlights the growing security concerns surrounding artificial intelligence, particularly the risk of “jailbreaking” through prompt injections. These attacks involve manipulating AI models into performing unintended ...

March 11, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 9 – Exploiting AI Weaknesses

Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data (GBHackers, March 7, 2025). Researchers from Palo Alto Networks’ Threat Research Center have discovered that 17 popular generative AI (GenAI) applications are vulnerable to jailbreaking techniques, allowing users to bypass safety protocols. By using both single-turn and multi-turn strategies, attackers ...

March 4, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 8 – Massive AI Security Breach

MITRE Releases OCCULT Framework to Address AI Security Challenges (GBHackers, February 26, 2025). MITRE has launched the Offensive Cyber Capability Unified LLM Testing (OCCULT) framework, a structured approach designed to assess the potential security risks of large language models (LLMs) in cyberattacks. This initiative focuses on evaluating AI systems’ ability ...

February 25, 2025

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 7 – OWASP for Agentic AI and more

Agentic AI – Threats and Mitigations (OWASP, February 17, 2025). Agentic AI, driven by large language models (LLMs) and generative AI, is advancing rapidly, offering new capabilities while introducing significant security risks. These autonomous systems can plan, adapt, and interact with external environments, making them powerful but also susceptible to ...


February 18, 2025

Articles

Grok 3 Jailbreak and AI Red Teaming

In this article, we will demonstrate how Grok 3 responds to different hacking techniques, including jailbreak and prompt-leaking attacks. ...