Trusted AI Blog

March 25, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 12 – New AI Security Framework

Introducing the Databricks AI Security Framework (DASF) Databricks, March 21, 2024 This framework has been meticulously crafted to foster collaboration across various domains including business, IT, data, AI, and security, offering a comprehensive approach to fortifying AI systems against potential threats. Through demystifying AI and ML concepts, cataloging AI ...

March 21, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 11 – GenAI security policies

Hackers can read private AI-assistant chats even though they’re encrypted Ars Technica, March 14, 2024 Despite efforts to encrypt communications, a newly developed attack has demonstrated the ability to decode AI assistant responses with alarming accuracy. Exploiting a side channel present in major AI systems, excluding Google Gemini, this attack compromises ...

March 5, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 9 – BEAST Jailbreak and AI Security Predictions 2024

Cyber Insights 2024: Artificial Intelligence SecurityWeek, February 26, 2024 In the ever-evolving landscape of AI within cybersecurity, 2024 brings forth profound insights from Alex Polyakov, CEO and co-founder of Adversa AI. Polyakov highlights the expanding threat landscape, citing instances such as the jailbreak of Chevrolet’s chatbot and data ...

February 26, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 8 – FS-ISAC AI Risk Guides

Google Gemini “Diverse” Prompt Injection Know Your Meme, February 22, 2024 This scrutiny emphasizes the necessity for a steadfast commitment to quality and robustness testing before releasing AI into production. The crux of the controversy emerged on February 9, 2024, when a Reddit user expressed dissatisfaction with Gemini’s seeming inability ...

February 22, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 7 – New Book on GenAI Security

DARPA and IBM are ensuring that anyone can protect their AI systems from hackers IBM, February 7, 2024 Collaborating with DARPA’s Guaranteeing AI Robustness Against Deception (GARD) project, IBM has been at the forefront of addressing these challenges, particularly through the development of the Adversarial Robustness Toolbox (ART). Beyond military ...

February 8, 2024

Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 5 – Threat of Prompt Injection Looms Large

How to detect poisoned data in machine learning datasets VentureBeat, February 4, 2024 Data poisoning in machine learning datasets poses a significant threat, allowing attackers to manipulate model behavior intentionally. Proactive detection efforts are crucial to safeguarding against this threat. Data poisoning involves maliciously tampering with datasets to mislead machine ...

February 6, 2024

Trusted AI Blog + LLM Security

LLM Security Digest: Top Security Platforms, Incidents, Developer Guides, Threat Models, and Hacking Games

Welcome to the latest edition of our LLM Security Digest! We explore the dynamic landscape of LLM security platforms, notable real-world incidents, and cutting-edge research that shape the field of LLM security. From adversarial AI attacks to the challenges of securing foundational models, we bring you insights, debates, and practical ...