Digests


September 11, 2024


GenAI Security + GenAI Security Digest, by admin

GenAI Security Top Digest: Slack and Apple Prompt Injections, Microsoft Copilot Threats, Image Attacks

This is the first-of-its-kind GenAI Security Top Digest, which grew out of our world-first LLM Security Digest. It provides an essential summary of the most critical vulnerabilities and threats across all Generative AI technologies, from LLMs and VLMs to GenAI Copilots and GenAI infrastructure, along with expert strategies to protect your systems, ensuring ...

September 9, 2024


Secure AI Weekly, by admin

Towards Secure AI Week 36 – AI Security Guides from WDTA

Top five strategies from Meta’s CyberSecEval 3 to combat weaponized LLMs (VentureBeat, September 3, 2024). Meta’s CyberSecEval 3 framework highlights the urgent need for comprehensive security measures as AI technologies, particularly large language models (LLMs), become more prevalent. The framework suggests five key strategies for safeguarding AI systems. These ...

September 3, 2024


Secure AI Weekly, by admin

Towards Secure AI Week 35 – Latest GenAI Hacking Incidents: Slack, Copilot, GPTs, etc.

Hundreds of LLM Servers Expose Corporate, Health & Other Online Data (Dark Reading, August 28, 2024). Recent discoveries have highlighted a troubling issue: hundreds of LLM servers are inadvertently exposing sensitive corporate, healthcare, and personal data online due to misconfigurations and insufficient security measures. These servers, often left unprotected by adequate ...

August 28, 2024


Secure AI Weekly, by admin

Towards Secure AI Week 34 – Securing LLMs, by CSA

Securing LLM Backed Systems: Essential Authorization Practices (Cloud Security Alliance, August 13, 2024). The widespread use of LLMs, while offering significant benefits, also introduces substantial security risks, particularly concerning unauthorized data access and potential model exploitation. To address these concerns, the Cloud Security Alliance (CSA) has provided essential guidelines for ...

August 13, 2024


LLM Security Digest, by admin

LLM Security Top Digest: From LLM vulns to the first-ever job in AI security incident response

Explore the most critical vulnerabilities and emerging threats affecting Large Language Models (LLMs) and Generative AI technologies. As always, we provide useful guides and techniques to protect your AI systems. Subscribe for the latest LLM Security news: jailbreaks, attacks, CISO guides, VC reviews, and more. Top LLM Security ...

August 7, 2024


Secure AI Weekly, by admin

Towards Secure AI Week 31 – New AI Security Standards and Laws

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (NIST, July 26, 2024). The National Institute of Standards and Technology (NIST) has released the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” a companion to the AI Risk Management Framework (AI RMF 1.0). This framework is designed to help ...