
February 8, 2024
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 5 – Threat of Prompt Injection Looms Large

How to detect poisoned data in machine learning datasets VentureBeat, February 4, 2024 Data poisoning in machine learning datasets poses a significant threat, allowing attackers to manipulate model behavior intentionally. Proactive detection efforts are crucial to safeguarding against this threat. Data poisoning involves maliciously tampering with datasets to mislead machine ...

February 6, 2024
Trusted AI Blog + LLM Security

LLM Security Digest: Top Security Platforms, Incidents, Developer Guides, Threat Models and Hacking Games

Welcome to the latest edition of our LLM Security Digest! We explore the dynamic landscape of LLM security platforms, notable real-world incidents, and cutting-edge research shaping the field of LLM security. From adversarial AI attacks to the challenges of securing foundational models, we bring you insights, debates, and practical ...

January 31, 2024
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 4 – Thousands of ChatGPT Jailbreaks for Sale

Top 4 LLM threats to the enterprise CSO Online, January 22, 2024 The intersection of natural language prompts and training sources poses unique threats, including prompt injection, prompt extraction, phishing schemes, and the poisoning of models. Traditional security tools find it challenging to keep pace with these dynamic risks, necessitating ...

January 25, 2024
Trusted AI Blog + LLM Security

LLM Security Digest: Jailbreaks, Red Teaming, CISO Guides, Incidents and Jobs

Here are the top LLM security publications, collected in one place for you. This digest provides insights into various aspects of Large Language Model (LLM) security. It covers a range of topics, from checklists for LLM security and incidents involving vulnerabilities in chatbots to real-world attacks and initiatives by the Cloud ...

January 24, 2024
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 3 – DPD AI Chatbot incident

A CISO’s perspective on how to understand and address AI risk SC Media, January 16, 2024 The adoption of AI in enterprises introduces significant risks that span technical, reputational, regulatory, and operational dimensions. From supply chain vulnerabilities to the potential theft of sensitive data, the stakes are high, demanding a proactive ...

January 22, 2024
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations NIST, January 2024 In its comprehensive report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed classification and vocabulary for understanding adversarial machine learning (AML). This report, centered around the security ...

December 27, 2023
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Data poisoning: how artists are sabotaging AI to take revenge on image generators The Conversation, December 17, 2023 Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, ...

December 18, 2023
Secure AI Weekly + Trusted AI Blog

Towards Secure AI Week 50 – Cloud Security Alliance towards Secure AI

CSA Official Press Release CSA, December 12, 2023 The recent unveiling of the AI Safety Initiative by the Cloud Security Alliance (CSA) marks a pivotal moment in the journey towards ensuring the security and ethical deployment of artificial intelligence. This initiative, in collaboration with tech giants such as Amazon, Anthropic, ...