Trusted AI Blog

408 Results / Page 13 of 46

November 16, 2023

Digests + Adversarial ML

Secure AI Research Papers: Jailbreaks, AutoDAN, Attacks on VLM and more

Researchers explore the vulnerabilities that lie within the complex web of algorithms and the need for a shield that can protect against unseen but not unfelt threats. These papers, published in October 2023, collectively study AI’s vulnerability, from the simplicity of human-crafted deceptions to the complexity of multilingual and visual ...

November 15, 2023

Articles + LLM Security

What Are Prompt Leaking, API Leaking, and Document Leaking in LLM Red Teaming

What is AI Prompt Leaking? The Adversa AI Research team revealed a number of new LLM vulnerabilities, including some that result in Prompt Leaking and affect almost any Custom GPT right now. Subscribe for the latest LLM Security news: Prompt Leaking, Jailbreaks, Attacks, CISO guides, VC Reviews, and more. Step one. Approximate Prompt ...

November 8, 2023

Digests + LLM Security

LLM Security Digest: Best October Activities and Prompt Engineering Tricks

This digest of October 2023 encapsulates the most influential findings and discussions on LLM Security and a bit of Prompt Engineering. Subscribe for the latest LLM Security news: Jailbreaks, Attacks, CISO guides, VC Reviews, and more. LLM Security. Best practical LLM Attacks: Multi-modal prompt injection image attacks against GPT-4V ...

November 6, 2023

Secure AI Weekly + Digests

Towards Trusted AI Week 45 – US executive order and UK Bletchley Declaration on AI

FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence – The White House, October 30, 2023. President Joe Biden has issued a decisive Executive Order to position the United States as a leader in the realm of artificial intelligence (AI), ensuring the nation captures the technology’s ...

November 2, 2023

Secure AI Weekly + Digests

Towards Trusted AI Week 44 – Takeaways from Securing AI Summit

16 Takeaways from the 2023 Securing AI Summit in San Francisco – BGV, October 24, 2023. The rapid integration of Generative AI (GenAI) technologies like ChatGPT into various industries is a double-edged sword. While these advancements promise significant productivity boosts, they also bring forth heightened concerns regarding cybersecurity. Just as we ...

November 1, 2023

Articles + LLM Security

White House Executive Order On Safe And Secure AI: A Need For External AI Red Teaming

Why is it important? In recognition of AI’s transformative potential and the associated challenges, President Biden has taken the decisive step of issuing an Executive Order geared toward ensuring AI evolves safely, securely, and in the best interest of all Americans. Given the expansive impacts of AI, it’s pivotal that ...

October 30, 2023

Company Updates + Industry Awards + Press Releases

Adversa AI LLM Security solution is mentioned in the Gartner GenAI Security research

Adversa AI has been cited as a Representative Vendor in Gartner’s industry report on GenAI security, “Emerging Tech: Top 4 Security Risks of GenAI”. This is a significant acknowledgment of Adversa AI’s advancements in LLM security, and it underlines the company’s position as a crucial player in the burgeoning ...

October 24, 2023

Secure AI Weekly + Digests

Towards Trusted AI Week 43 – AI TRiSM is in the Gartner Top Tech Trends 2024

Enterprises struggle to address generative AI’s security implications – AINews, October 18, 2023. A recent study by the cloud-native network detection firm ExtraHop reveals a pressing concern in today’s digital age: organizations are finding it challenging to navigate the security implications of their employees using generative AI. The report, titled “The ...