Trusted AI Blog

324 Results / Page 5 of 36

June 3, 2024


Trusted AI Blog + LLM Security · admin

LLM Security Top Digest: From security incidents and CISO guides to mitigations and EU AI Act

Today, let's focus on the top security concerns surrounding Large Language Models. From cutting-edge security tools to emerging threats and mitigation strategies, this edition covers a wide range of topics crucial for understanding and safeguarding against LLM-related risks. Explore the latest research, incidents, and initiatives shaping the landscape of ...

May 14, 2024


Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 19 – CSA and Elastic Guidance for AI Security

Elastic Security Labs Releases Guidance to Avoid LLM Risks and Abuses. Datanami, May 8, 2024. Elastic Security Labs has recognized the pressing need to address the vulnerabilities posed by Large Language Models (LLMs) and has released comprehensive guidance to mitigate these risks effectively. As AI technologies become increasingly sophisticated, the ...

April 22, 2024


Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 16 – NSA Guidelines for Secure AI Systems

Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems. NSA, April 15, 2024. The National Security Agency (NSA) has unveiled a comprehensive set of recommendations aimed at bolstering the security of AI systems. Released under the title “Deploying AI Systems Securely: Best Practices for ...

April 15, 2024


Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 15 – New book on GenAI Security and more

Generative AI Security: Theories and Practices. Springer, April 2024. This new book on GenAI security dives into the critical theories and practical approaches necessary to safeguard AI systems, providing actionable insights and essential resources for navigating the complex cybersecurity landscape. It covers strategies and best practices for securing GenAI systems, ...