Secure AI Weekly

224 Results / Page 6 of 25

February 22, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 7 – New book in GenAI Security

DARPA and IBM are ensuring that anyone can protect their AI systems from hackers (IBM, February 7, 2024). Collaborating with DARPA’s Guaranteeing AI Robustness Against Deception (GARD) project, IBM has been at the forefront of addressing these challenges, particularly through the development of the Adversarial Robustness Toolbox (ART). Beyond military ...

February 8, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 5 – Threat of Prompt Injection Looms Large

How to detect poisoned data in machine learning datasets (VentureBeat, February 4, 2024). Data poisoning in machine learning datasets poses a significant threat, allowing attackers to intentionally manipulate model behavior. Proactive detection efforts are crucial to safeguarding against this threat. Data poisoning involves maliciously tampering with datasets to mislead machine ...

January 31, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 4 – Thousands of ChatGPT jailbreaks for sale

Top 4 LLM threats to the enterprise (CSO Online, January 22, 2024). The intersection of natural language prompts and training sources poses unique threats, including prompt injection, prompt extraction, phishing schemes, and the poisoning of models. Traditional security tools find it challenging to keep pace with these dynamic risks, necessitating ...

January 24, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 3 – DPD AI Chatbot incident

A CISO’s perspective on how to understand and address AI risk (SC Media, January 16, 2024). The adoption of AI in enterprises introduces significant risks that span technical, reputational, regulatory, and operational dimensions. From supply chain vulnerabilities to the potential theft of sensitive data, the stakes are high, demanding a proactive ...

January 22, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST, January 2024). In its comprehensive report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed classification and vocabulary for understanding adversarial machine learning (AML). This report, centered around the security ...

December 27, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Data poisoning: how artists are sabotaging AI to take revenge on image generators (The Conversation, December 17, 2023). Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, ...

December 18, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 50 – Cloud Security Alliance towards Secure AI

CSA Official Press Release (CSA, December 12, 2023). The recent unveiling of the AI Safety Initiative by the Cloud Security Alliance (CSA) marks a pivotal moment in the journey towards ensuring the security and ethical deployment of artificial intelligence. This initiative, in collaboration with tech giants such as Amazon, Anthropic, ...

December 14, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 49 – Multiple Loopholes in LLM… Again

LLMs Open to Manipulation Using Doctored Images, Audio (Dark Reading, December 6, 2023). The rapid advancement of artificial intelligence (AI), especially in large language models (LLMs) like ChatGPT, has brought forward pressing concerns about their security and safety. A recent study highlights a new type of cyber threat, where attackers ...