Secure AI Weekly

235 Results / Page 5 of 27

July 9, 2024


Secure AI Weekly + Trusted AI Blog admin

Towards Secure AI Week 27 – New Jailbreak, Prompt Injection and Prompt Leaking Incidents

Generative AI is a new attack vector endangering enterprises, says CrowdStrike CTO (ZDNet, June 30, 2024). According to CrowdStrike CTO Elia Zaitsev, the technology's capability to generate human-like text can be exploited by cybercriminals in various ways. One significant concern is the use of prompt injection attacks, where malicious ...

July 2, 2024



Towards Secure AI Week 26 – Prompt Injections and Jailbreaks at scale

Red Teaming is Crucial for Successful AI Integration and Application (AIThority, June 25, 2024). Generative AI, despite its potential, is susceptible to errors, biases, and poor judgment, necessitating rigorous testing methods. Traditional testing falls short due to AI's unpredictable nature, leading to the adoption of advanced strategies like red teaming. ...

June 24, 2024



Towards Secure AI Week 25 – GenAI attack course and more

Mental Model for Generative AI Risk and Security Framework (Hackernoon, June 19, 2024). A comprehensive framework based on established security principles, such as data protection, identity and access management, and threat monitoring, can help mitigate privacy risks. Organizations must evaluate whether to use managed AI services or build custom models, each presenting ...

June 18, 2024



Towards Secure AI Week 24 – Strategies for Open Source, Poisoning, and GenAI

Open-source security in AI (HelpNet Security, June 12, 2024). The 2024 AI Index report highlights a surge in AI-related patents, showing the industry's focus on innovation. Despite this, companies frequently neglect specialized AI security protocols, heightening the risk of exploitation and misuse. Open-source components, not originally designed for AI, introduce ...

June 12, 2024



Towards Secure AI Week 23 – Email Prompt Injections

EmailGPT Exposed to Prompt Injection Attacks (Infosecurity Magazine, June 7, 2024). A recent vulnerability in EmailGPT, a widely used AI-powered email assistant, has raised significant concerns about the security and safety of AI technologies. Identified as CVE-2024-5184, this prompt injection flaw enables malicious actors to manipulate the AI's logic, potentially ...

May 14, 2024



Towards Secure AI Week 19 – CSA and Elastic Guidance for AI Security

Elastic Security Labs Releases Guidance to Avoid LLM Risks and Abuses (Datanami, May 8, 2024). Elastic Security Labs has recognized the pressing need to address vulnerabilities in large language models (LLMs) and has released comprehensive guidance to mitigate these risks effectively. As AI technologies become increasingly sophisticated, the ...