Trusted AI Blog



December 8, 2023

LLM Security Digest

LLM Security Digest: Hacking LLM, Top LLM Attacks, VC Initiatives, LLM Incidents and Research papers in November 

This digest for November 2023 collects the essential findings and discussions on LLM Security. From hacking LLMs with the intriguing prompt-visual injections to the complex challenges of securing systems like Google Bard, we cover the most crucial updates. Subscribe for the latest LLM Security and Hacking LLM news: Jailbreaks, ...

December 1, 2023

Secure AI Weekly + Digests

Towards Secure AI Week 47 – UK Guides for secure AI development

“AIs can trick each other into doing things they aren’t supposed to” (New Scientist, November 24, 2023). Recent developments in artificial intelligence (AI) have raised significant security concerns. Notably, AI models, which are generally programmed to reject harmful or illegal requests, have demonstrated a concerning ability to persuade each other ...

November 16, 2023

Adversarial ML Digest

Secure AI Research Papers: Jailbreaks, AutoDAN, Attacks on VLM and more

Researchers explore the vulnerabilities that lie within the complex web of algorithms, and the need for a shield that can protect against unseen but not unfelt threats. These papers, published in October 2023, collectively study AI’s vulnerability, from the simplicity of human-crafted deceptions to the complexity of multilingual and visual ...

November 15, 2023

Research + LLM Security

What is Prompt Leaking, API Leaking, Documents Leaking in LLM Red Teaming

What is AI Prompt Leaking? The Adversa AI Research team revealed a number of new LLM vulnerabilities, including several that result in Prompt Leaking and affect almost any Custom GPT right now. Subscribe for the latest LLM Security news: Prompt Leaking, Jailbreaks, Attacks, CISO guides, VC Reviews, and more. Step one. Approximate Prompt ...

November 8, 2023

LLM Security Digest

LLM Security Digest: Best October’s Activities And Prompt Engineering Tricks

This digest for October 2023 encapsulates the most influential findings and discussions on LLM Security, plus a bit of Prompt Engineering. Subscribe for the latest LLM Security news: Jailbreaks, Attacks, CISO guides, VC Reviews, and more. LLM Security: Best practical LLM attacks — Multi-modal prompt injection image attacks against GPT-4V ...