Secure AI Weekly


February 8, 2024

Secure AI Weekly + Digests

Towards Secure AI Week 5 – Threat of Prompt Injection Looms Large

How to detect poisoned data in machine learning datasets (VentureBeat, February 4, 2024)
Data poisoning in machine learning datasets poses a significant threat, allowing attackers to intentionally manipulate model behavior. Proactive detection efforts are crucial to safeguarding against this threat. Data poisoning involves maliciously tampering with datasets to mislead machine ...

January 31, 2024


Towards Secure AI Week 4 – Thousands of ChatGPT Jailbreaks for Sale

Top 4 LLM threats to the enterprise (CSO Online, January 22, 2024)
The intersection of natural language prompts and training sources poses unique threats, including prompt injection, prompt extraction, phishing schemes, and the poisoning of models. Traditional security tools find it challenging to keep pace with these dynamic risks, necessitating ...

January 24, 2024


Towards Secure AI Week 3 – DPD AI Chatbot incident

A CISO’s perspective on how to understand and address AI risk (SC Media, January 16, 2024)
The adoption of AI in enterprises introduces significant risks that span technical, reputational, regulatory, and operational dimensions. From supply chain vulnerabilities to the potential theft of sensitive data, the stakes are high, demanding a proactive ...

January 22, 2024


Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST, January 2024)
In its comprehensive report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed classification and vocabulary for understanding adversarial machine learning (AML). This report, centered around the security ...

December 1, 2023


Towards Secure AI Week 47 – UK Guidelines for Secure AI Development

AIs can trick each other into doing things they aren’t supposed to (New Scientist, November 24, 2023)
Recent developments in artificial intelligence (AI) have raised significant security concerns. Notably, AI models, which are generally programmed to reject harmful or illegal requests, have demonstrated a concerning ability to persuade each other ...