Secure AI Weekly

220 Results / Page 6 of 25

January 24, 2024

Secure AI Weekly + Trusted AI Blog · admin

Towards Secure AI Week 3 – DPD AI Chatbot incident

A CISO’s perspective on how to understand and address AI risk (SC Media, January 16, 2024). The adoption of AI in enterprises introduces significant risks that span technical, reputational, regulatory, and operational dimensions. From supply chain vulnerabilities to the potential theft of sensitive data, the stakes are high, demanding a proactive ...

January 22, 2024


Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST, January 2024). In its comprehensive report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed classification and vocabulary for understanding adversarial machine learning (AML). This report, centered around the security ...

December 27, 2023


Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Data poisoning: how artists are sabotaging AI to take revenge on image generators (The Conversation, December 17, 2023). Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, ...

December 18, 2023


Towards Secure AI Week 50 – Cloud Security Alliance towards Secure AI

CSA Official Press Release (CSA, December 12, 2023). The recent unveiling of the AI Safety Initiative by the Cloud Security Alliance (CSA) marks a pivotal moment in the journey towards ensuring the security and ethical deployment of artificial intelligence. This initiative, in collaboration with tech giants such as Amazon, Anthropic, ...

December 14, 2023


Towards Secure AI Week 49 – Multiple Loopholes in LLM… Again

LLMs Open to Manipulation Using Doctored Images, Audio (Dark Reading, December 6, 2023). The rapid advancement of artificial intelligence (AI), especially in large language models (LLMs) like ChatGPT, has brought forward pressing concerns about their security and safety. A recent study highlights a new type of cyber threat, where attackers ...

December 6, 2023


Towards Secure AI Week 48 – Multiple OpenAI Security Flaws

OpenAI’s Custom Chatbots Are Leaking Their Secrets (Wired, November 29, 2023). The rise of customizable AI chatbots, like OpenAI’s GPTs, has introduced a new era of convenience in creating personalized AI tools. However, this advancement brings with it significant security challenges, as highlighted by Alex Polyakov, CEO of Adversa AI. ...

December 1, 2023


Towards Secure AI Week 47 – UK Guides for secure AI development

AIs can trick each other into doing things they aren’t supposed to (New Scientist, November 24, 2023). Recent developments in artificial intelligence (AI) have raised significant security concerns. Notably, AI models, which are generally programmed to reject harmful or illegal requests, have demonstrated a concerning ability to persuade each other ...

November 22, 2023


Towards Secure AI Week 46 – GPT’s Security Issues and OpenAI Drama

Top VC Firms Sign Voluntary Commitments for Startups to Build AI Responsibly (Bloomberg, November 14, 2023). In a landmark initiative for the AI industry, over 35 leading venture capital firms, such as General Catalyst, Felicis Ventures, Bain Capital, IVP, Insight Partners, and Lux Capital, have committed to promoting responsible AI ...

November 15, 2023


Towards Secure AI Week 45 – LLM hacking LLM and new Google SAIF

Google’s Secure AI Framework (SAIF) (Google). Google’s Secure AI Framework (SAIF) is a blueprint for securing AI and machine learning (ML) models, designed to be secure-by-default. It addresses concerns that are top of mind for security professionals, such as risk management, security, and privacy, ensuring that AI systems are safely ...