
February 5, 2025


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 4 – DeepSeek’s AI Security Failures

Wiz Research Uncovers Exposed DeepSeek Database Leaking Sensitive Information, Including Chat History (Wiz, January 29, 2025). A recent security lapse in AI infrastructure has underscored the critical need for stronger protections in artificial intelligence systems. Wiz Research uncovered an unprotected ClickHouse database belonging to DeepSeek, a Chinese AI startup known ...

Articles / January 31, 2025

DeepSeek Jailbreaks

DeepSeek Jailbreaks: In this article, we demonstrate how DeepSeek responds to different jailbreak techniques. Our initial study on AI Red Teaming different LLM models using various approaches ...

January 31, 2025


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 3 – OWASP Guidelines and Risk Reduction Strategies

AI Security Among Top Priorities for Cybersecurity Leaders (Channel Futures, January 24, 2025). A recent report from Info-Tech Research Group outlines key security priorities necessary to mitigate emerging risks while harnessing AI’s potential for strengthening cybersecurity defenses. These priorities include establishing AI governance frameworks to manage security and privacy risks, ...

December 4, 2024


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 48 – Biggest AI Security Bug Bounty

Artificial Intelligence Vulnerability Scoring System (AIVSS) (GitHub). The AI Vulnerability Scoring System (AIVSS) has been proposed as a framework designed to evaluate vulnerabilities in AI systems comprehensively. Unlike static models, AIVSS incorporates dynamic metrics tailored to AI, including model robustness, data sensitivity, ethical impact, and adaptability, alongside traditional security considerations. ...

November 18, 2024


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 46 – Hacking LLM Robots

It’s Surprisingly Easy to Jailbreak LLM-Driven Robots: Researchers induced bots to ignore their safeguards without exception (IEEE Spectrum, November 11, 2024). The rapid integration of large language models (LLMs) like ChatGPT into robotics has revolutionized how robots interact with humans, offering capabilities such as voice-activated commands and task execution based ...

November 12, 2024


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 45 – AI Safety Through Testing, Legislation, and Talent Building

Microsoft’s Yonatan Zunger on Red Teaming Generative AI (The Cyber Wire, November 6, 2024). In a recent Microsoft Threat Intelligence Podcast episode, host Sherrod DeGrippo speaks with Yonatan Zunger, Corporate Vice President of AI Safety and Security at Microsoft, to explore the critical importance of securing AI systems. The conversation ...

November 6, 2024


Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 44 – From Open-Source AI Risks to National Policies

Researchers Uncover Vulnerabilities in Open-Source AI and ML Models (The Hacker News, October 29, 2024). Recent disclosures have highlighted over thirty security vulnerabilities within various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could allow for remote code execution and unauthorized data access. Key flaws have ...