Trusted AI Blog


December 27, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Data poisoning: how artists are sabotaging AI to take revenge on image generators The Conversation, December 17, 2023 Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, ...

December 18, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 50 – Cloud Security Alliance towards Secure AI

CSA Official Press Release CSA, December 12, 2023 The recent unveiling of the AI Safety Initiative by the Cloud Security Alliance (CSA) marks a pivotal moment in the journey towards ensuring the security and ethical deployment of artificial intelligence. This initiative, in collaboration with tech giants such as Amazon, Anthropic, ...

December 18, 2023

Trusted AI Blog + Adversarial ML · admin

Secure AI Research Papers: Breakthroughs and Break-ins in LLMs

A group of pioneering researchers has embarked on a quest to unveil the serious vulnerabilities and strengths of various AI applications, from classic computer vision to the latest LLMs and VLMs. Their latest works are collected in this digest for you, covering jailbreak prompts and transferable attacks, shining a light ...

December 14, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 49 – Multiple Loopholes in LLM… Again

LLMs Open to Manipulation Using Doctored Images, Audio Dark Reading, December 6, 2023 The rapid advancement of artificial intelligence (AI), especially in large language models (LLMs) like ChatGPT, has brought forward pressing concerns about their security and safety. A recent study highlights a new type of cyber threat, where attackers ...

December 8, 2023

Trusted AI Blog + LLM Security · admin

LLM Security Digest: Hacking LLM, Top LLM Attacks, VC Initiatives, LLM Incidents and Research Papers in November

This November 2023 digest collects the essential findings and discussions on LLM Security. From hacking LLMs using the intriguing ‘prompt-visual injections’ to the complex challenges in securing systems like Google Bard, we cover the most crucial updates. Subscribe for the latest LLM Security and Hacking LLM news: Jailbreaks, ...

December 6, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 48 – Multiple OpenAI Security Flaws

OpenAI’s Custom Chatbots Are Leaking Their Secrets Wired, November 29, 2023 The rise of customizable AI chatbots, like OpenAI’s GPTs, has introduced a new era of convenience in creating personalized AI tools. However, this advancement brings with it significant security challenges, as highlighted by Alex Polyakov, CEO of Adversa AI. ...

December 1, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 47 – UK Guidelines for Secure AI Development

AIs can trick each other into doing things they aren’t supposed to New Scientist, November 24, 2023 Recent developments in artificial intelligence (AI) have raised significant security concerns. Notably, AI models, which are generally programmed to reject harmful or illegal requests, have demonstrated a concerning ability to persuade each other ...

November 22, 2023

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 46 – GPT’s Security Issues and OpenAI Drama

Top VC Firms Sign Voluntary Commitments for Startups to Build AI Responsibly Bloomberg, November 14, 2023 In a landmark initiative for the AI industry, over 35 leading venture capital firms, such as General Catalyst, Felicis Ventures, Bain Capital, IVP, Insight Partners, and Lux Capital, have committed to promoting responsible AI ...

November 16, 2023

Trusted AI Blog + Adversarial ML · admin

Secure AI Research Papers: Jailbreaks, AutoDAN, Attacks on VLM and more

Researchers explore the vulnerabilities that lie within the complex web of algorithms and the need for a shield that can protect against unseen but not unfelt threats. These papers, published in October 2023, collectively study AI’s vulnerability, from the simplicity of human-crafted deceptions to the complexity of multilingual and visual ...