Secure AI Weekly

190 Results / Page 2 of 22

todayMarch 5, 2024

  • 75
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 9 –  BEAST Jailbreak and AI Security Predictions 2024

Cyber Insights 2024: Artificial Intelligence Security Week, February 26, 2024 In the ever-evolving landscape of AI within cybersecurity, 2024 brings forth profound insights from Mr. Alex Polyakov, CEO and co-founder of Adversa AI. Polyakov highlights the expanding threat landscape, citing instances such as the jailbreak of Chevrolet’s Chatbot and data ...

todayFebruary 26, 2024

  • 109
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 8 –  FS-ISAC AI Risk Guides

Google Gemini “Diverse” Prompt Injection Know Your Meme, February 22, 2024 This scrutiny emphasizes the necessity for a steadfast commitment to Quality and Robustness testing before releasing AI in production. The crux of the controversy emerged on February 9th, 2024, when a Reddit user expressed dissatisfaction with Gemini’s seeming inability ...

todayFebruary 22, 2024

  • 103
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 7 –  New book in GenAI Security

DARPA and IBM are ensuring that anyone can protect their AI systems from hackers IBM, February 7, 2024 Collaborating with DARPA’s Guaranteeing AI Robustness Against Deception (GARD) project, IBM has been at the forefront of addressing these challenges, particularly through the development of the Adversarial Robustness Toolbox (ART). Beyond military ...

todayFebruary 8, 2024

  • 66
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 5 –  Threat of Prompt Injection Looms Large

How to detect poisoned data in machine learning datasets VentureBeat, February 4, 2024 Data poisoning in machine learning datasets poses a significant threat, allowing attackers to manipulate model behavior intentionally. Proactive detection efforts are crucial to safeguarding against this threat. Data poisoning involves maliciously tampering with datasets to mislead machine ...

todayJanuary 31, 2024

  • 133
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 4 – Thousands ChatGPT jailbreaks for sale

Top 4 LLM threats to the enterprise CSO Online, January 22, 2024 The intersection of natural language prompts and training sources poses unique threats, including prompt injection, prompt extraction, phishing schemes, and the poisoning of models. Traditional security tools find it challenging to keep pace with these dynamic risks, necessitating ...

todayJanuary 24, 2024

  • 91
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 3 – DPD AI Chatbot incident

A CISO’s perspective on how to understand and address AI risk SCMedia, January 16, 2024 The adoption of AI in enterprises introduces significant risks that span technical, reputational, regulatory, and operational dimensions. From supply chain vulnerabilities to the potential theft of sensitive data, the stakes are high, demanding a proactive ...

todayJanuary 22, 2024

  • 85
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations NIST, January, 2024 In its comprehensive report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed classification and vocabulary for understanding adversarial machine learning (AML). This report, centered around the security ...

todayDecember 27, 2023

  • 67
close

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Data poisoning: how artists are sabotaging AI to take revenge on image generators The Conversation, December 17, 2023 Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, ...