Trusted AI Blog

475 Results / Page 16 of 53

July 2, 2024

Secure AI Weekly · admin

Towards Secure AI Week 26 – Prompt Injections and Jailbreaks at scale

Red Teaming is Crucial for Successful AI Integration and Application (AIThority, June 25, 2024). Generative AI, despite its potential, is susceptible to errors, biases, and poor judgment, necessitating rigorous testing methods. Traditional testing falls short due to AI's unpredictable nature, leading to the adoption of advanced strategies like red teaming. ...

June 24, 2024

Secure AI Weekly · admin

Towards Secure AI Week 25 – GenAI attack course and more

Mental Model for Generative AI Risk and Security Framework (Hackernoon, June 19, 2024). A comprehensive framework based on established security principles, such as data protection, identity and access management, and threat monitoring, can help mitigate privacy risks. Organizations must evaluate whether to use managed AI services or build custom models, each presenting ...

June 18, 2024

Secure AI Weekly · admin

Towards Secure AI Week 24 – Strategies for Open Source, Poisoning, and GenAI

Open-source security in AI (HelpNet Security, June 12, 2024). The 2024 AI Index report highlights a surge in AI-related patents, showing the industry's focus on innovation. Despite this, companies frequently neglect specialized AI security protocols, heightening the risk of exploitation and misuse. Open-source components, not originally designed for AI, introduce ...

June 12, 2024

Secure AI Weekly · admin

Towards Secure AI Week 23 – Email Prompt Injections

EmailGPT Exposed to Prompt Injection Attacks (Infosecurity Magazine, June 7, 2024). A recent vulnerability in EmailGPT, a widely used AI-powered email assistant, has raised significant concerns about the security and safety of AI technologies. Identified as CVE-2024-5184, this prompt injection flaw enables malicious actors to manipulate the AI's logic, potentially ...

June 3, 2024

LLM Security Digest · admin

LLM Security Top Digest: From security incidents and CISO guides to mitigations and EU AI Act

Today let us focus on the top security concerns surrounding Large Language Models. From cutting-edge security tools to emerging threats and mitigation strategies, this edition covers a wide range of topics crucial for understanding and safeguarding against LLM-related risks. Explore the latest research, incidents, and initiatives shaping the landscape of ...

June 3, 2024

Secure AI Weekly · admin

Towards Secure AI Week 22 – NIST’s New ARIA Program

Japanese police arrest man over computer viruses created by misusing AI (HITB SecNews, May 28, 2024). Japanese police have arrested a 25-year-old man from Kawasaki for allegedly using generative AI tools to create computer viruses. This rare and significant arrest highlights growing concerns about the misuse of ...

May 27, 2024

Secure AI Weekly · admin

Towards Secure AI Week 21 – EU AI Act Revolution

World's first major law for artificial intelligence gets final EU green light (CNBC, May 21, 2024). The European Union has officially passed the world's first comprehensive law regulating artificial intelligence, marking a significant milestone in AI safety and security. The newly approved Artificial Intelligence Act introduces a ...

May 14, 2024

Secure AI Weekly · admin

Towards Secure AI Week 19 – CSA and Elastic Guidance for AI Security

Elastic Security Labs Releases Guidance to Avoid LLM Risks and Abuses (Datanami, May 8, 2024). Elastic Security Labs has recognized the pressing need to address vulnerabilities in large language models (LLMs) and has released comprehensive guidance to mitigate these risks effectively. As AI technologies become increasingly sophisticated, the ...