Trusted AI Blog

324 Results / Page 1 of 36

Grok 3 AI Red Teaming

todayFebruary 18, 2025

  • 8846
close

Articles admin

Grok 3 Jailbreak and AI red Teaming

  Subscribe for the latest LLM Security and AI Red Teaming news:  Jailbreaks Attacks, Defenses, Frameworks, CISO guides, VC Reviews, Policies and more Grok 3 Jailbreak and AI Red Teaming In this article, we will demonstrate  how Grok 3 respond to different hacking  techniques including Jailbreaks and Prompt leaking attacks. ...

todayFebruary 17, 2025

  • 22
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 6 – New AI Security Framework

Announcing the Databricks AI Security Framework 2.0 DataBricks, February 12, 2025 Databricks has unveiled the second edition of its AI Security Framework (DASF 2.0), a comprehensive guide designed to address the growing risks associated with AI deployments. The framework identifies 62 technical AI risks and introduces 64 mitigation controls, offering ...

todayFebruary 12, 2025

  • 25
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 5 – Worldwide AI safety report

World-leading AI cyber security standard to protect digital economy and deliver Plan for Change Gov UK, January 31, 2025 The UK government has unveiled a pioneering cybersecurity standard aimed at protecting artificial intelligence (AI) systems from cyber threats, reinforcing the security of the digital economy. This initiative seeks to ensure ...

todayFebruary 5, 2025

  • 48
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 4 – DeepSeek’s AI Security Failures

Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History Wiz, January 29, 2025 A recent security lapse in AI infrastructure has underscored the critical need for stronger protections in artificial intelligence systems. Wiz Research uncovered an unprotected ClickHouse database belonging to DeepSeek, a Chinese AI startup known ...

Articles admin / January 31, 2025

DeepSeek Jailbreak’s

  Subscribe for the latest LLM Security and AI Red Teaming news:  Jailbreaks Attacks, Defenses, Frameworks, CISO guides, VC Reviews, Policies and more Deepseek Jailbreak’s In this article, we will demonstrate  how  DeepSeek respond to different jailbreak techniques. Our initial study on AI Red Teaming different LLM Models using various aproaches ...

todayJanuary 31, 2025

  • 51
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 3 – OWASP Guidelines and Risk Reduction Strategies

AI Security Among Top Priorities for Cybersecurity Leaders Channel Futures, January 24, 2025 A recent report from Info-Tech Research Group outlines key security priorities necessary to mitigate emerging risks while harnessing AI’s potential for strengthening cybersecurity defenses. These priorities include establishing AI governance frameworks to manage security and privacy risks, ...

todayDecember 4, 2024

  • 53
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 48 – Biggest AI Security Bug Bounty

Artificial Intelligence Vulnerability Scoring System (AIVSS) GitHub The AI Vulnerability Scoring System (AIVSS) has been proposed as a framework designed to evaluate vulnerabilities in AI systems comprehensively. Unlike static models, AIVSS incorporates dynamic metrics tailored to AI, including model robustness, data sensitivity, ethical impact, and adaptability, alongside traditional security considerations. ...