Secure AI Weekly


September 9, 2024

Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 36 – AI Security Guides from WDTA

Top five strategies from Meta’s CyberSecEval 3 to combat weaponized LLMs Venture Beat, September 3, 2024 Meta’s CyberSecEval 3 framework highlights the urgent need for comprehensive security measures as AI technologies, particularly large language models (LLMs), become more prevalent. The framework suggests five key strategies for safeguarding AI systems. These ...

September 3, 2024



Towards Secure AI Week 35 – Latest GenAI Hacking Incidents: Slack, Copilot, GPTs, etc.

Hundreds of LLM Servers Expose Corporate, Health & Other Online Data DarkReading, August 28, 2024 Recent discoveries have highlighted a troubling issue: hundreds of LLM servers are inadvertently exposing sensitive corporate, healthcare, and personal data online due to misconfigurations and insufficient security measures. These servers, often left unprotected by adequate ...

August 28, 2024



Towards Secure AI Week 34 – Securing LLM by CSA

Securing LLM Backed Systems: Essential Authorization Practices Cloud Security Alliance, August 13, 2024 The widespread use of LLMs, while offering significant benefits, also introduces substantial security risks, particularly concerning unauthorized data access and potential model exploitation. To address these concerns, the Cloud Security Alliance (CSA) has provided essential guidelines for ...

August 21, 2024



Towards Secure AI Week 33 – LLM Copilot Hacks and the Path to Safer Systems

Jailbreaking LLMs and abusing Copilot to “live off the land” of M365 The Stack, August 9, 2024 As artificial intelligence (AI) systems like large language models (LLMs) and AI-driven tools such as GitHub’s Copilot become more embedded in our digital environments, they also introduce significant security risks. Recent research has ...

August 7, 2024



Towards Secure AI Week 31 – New AI Security Standards and Laws

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile NIST, July 26, 2024 The National Institute of Standards and Technology (NIST) has released the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” a companion to the AI Risk Management Framework (AI RMF 1.0). This framework is designed to help ...

July 16, 2024



Towards Secure AI Week 28 – The Hidden Dangers of LLMs

LLMs in Crosshairs: Why Security Can’t Wait Venture Highway, July 9, 2024 The swift integration of large language models (LLMs) into various organizational processes has highlighted significant security concerns, akin to the early vulnerabilities seen with the rise of the internet. LLMs, while capable of generating human-like text and handling ...