Trusted AI Blog

475 Results / Page 15 of 53


August 13, 2024


LLM Security Digest

LLM Security Top Digest: From LLM vulns to the first-ever job in AI security incident response

Explore the most critical vulnerabilities and emerging threats affecting Large Language Models (LLMs) and Generative AI technologies. As always, we provide useful guides and techniques to protect your AI systems. Subscribe for the latest LLM Security news: Jailbreaks, Attacks, CISO guides, VC Reviews and more. Top LLM Security ...

August 7, 2024


Secure AI Weekly

Towards Secure AI Week 31 – New AI Security Standards and Laws

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile NIST, July 26, 2024 The National Institute of Standards and Technology (NIST) has released the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” a companion to the AI Risk Management Framework (AI RMF 1.0). This framework is designed to help ...

July 16, 2024


Secure AI Weekly

Towards Secure AI Week 28 – The Hidden Dangers of LLMs

LLMs in the Crosshairs: Why Security Can’t Wait Venture Highway, July 9, 2024 The swift integration of large language models (LLMs) into various organizational processes has highlighted significant security concerns, akin to the early vulnerabilities seen with the rise of the internet. LLMs, while capable of generating human-like text and handling ...

July 9, 2024


Secure AI Weekly

Towards Secure AI Week 27 – New Jailbreak, Prompt Injection and Prompt Leaking Incidents

Generative AI is a new attack vector endangering enterprises, says CrowdStrike CTO ZDNet, June 30, 2024 According to CrowdStrike CTO Elia Zaitsev, the technology’s capability to generate human-like text can be exploited by cybercriminals in various ways. One of the significant concerns is the use of prompt injection attacks, where malicious ...