Trusted AI Blog

317 Results / Page 2 of 36

September 17, 2024

Secure AI Weekly + Trusted AI Blog, by admin

Towards Secure AI Week 37 – Global AI Security Frameworks Dubai, China

Governance framework promotes AI security China Daily, September 11, 2024 A new governance framework aimed at enhancing the security and safety of AI was introduced during China Cybersecurity Week in Guangzhou, Guangdong province. Announced by the National Technical Committee 260 on Cybersecurity of the Standardization Administration of China, the framework ...

September 11, 2024

Trusted AI Blog + GenAI Security, by admin

GenAI Security Top Digest: Slack and Apple Prompt Injections, threats of Microsoft Copilot, image attacks

This is the first-of-its-kind GenAI Security Top Digest, originating from our world's first LLM Security Digest. It provides an essential summary of the most critical vulnerabilities and threats to all Generative AI technologies, from LLMs and VLMs to GenAI Copilots and GenAI infrastructure, along with expert strategies to protect your systems, ensuring ...

September 9, 2024

Secure AI Weekly + Trusted AI Blog, by admin

Towards Secure AI Week 36 – AI Security Guides from WDTA

Top five strategies from Meta’s CyberSecEval 3 to combat weaponized LLMs Venture Beat, September 3, 2024 Meta’s CyberSecEval 3 framework highlights the urgent need for comprehensive security measures as AI technologies, particularly large language models (LLMs), become more prevalent. The framework suggests five key strategies for safeguarding AI systems. These ...

September 3, 2024

Secure AI Weekly + Trusted AI Blog, by admin

Towards Secure AI Week 35 – Latest GenAI hacking incidents: Slack, Copilot, GPTs, etc.

Hundreds of LLM Servers Expose Corporate, Health & Other Online Data DarkReading, August 28, 2024 Recent discoveries have highlighted a troubling issue: hundreds of LLM servers are inadvertently exposing sensitive corporate, healthcare, and personal data online due to misconfigurations and insufficient security measures. These servers, often left unprotected by adequate ...

August 28, 2024

Secure AI Weekly + Trusted AI Blog, by admin

Towards Secure AI Week 34 – Securing LLM by CSA

Securing LLM Backed Systems: Essential Authorization Practices Cloud Security Alliance, August 13, 2024 The widespread use of LLMs, while offering significant benefits, also introduces substantial security risks, particularly concerning unauthorized data access and potential model exploitation. To address these concerns, the Cloud Security Alliance (CSA) has provided essential guidelines for ...

August 21, 2024

Secure AI Weekly + Trusted AI Blog, by admin

Towards Secure AI Week 33 – LLM Copilot Hacks and the Path to Safer Systems

Jailbreaking LLMs and abusing Copilot to “live off the land” of M365 The Stack, August 9, 2024 As artificial intelligence (AI) systems like large language models (LLMs) and AI-driven tools such as GitHub’s Copilot become more embedded in our digital environments, they also introduce significant security risks. Recent research has ...

August 13, 2024

Trusted AI Blog + LLM Security, by admin

LLM Security Top Digest: From LLM vulns to the first-ever job in AI security incident response

Explore the most critical vulnerabilities and emerging threats affecting Large Language Models (LLMs) and Generative AI technologies. As always, we provide useful guides and techniques to protect your AI systems. Subscribe for the latest LLM Security news: Jailbreaks, Attacks, CISO guides, VC Reviews and more. Top LLM Security ...