Towards Trusted AI Week 26 – Securing the AI Chatbot Revolution

Secure AI Weekly + Trusted AI Blog admin todayJune 29, 2023 94

Background
share close

ChatGPT tricked into generating Windows 11 keys by pretending to be grandma reading a story

TweakTown, June 25, 2023

The advent of artificial intelligence (AI), particularly large language models like OpenAI’s ChatGPT, has opened up an array of possibilities and, concurrently, a Pandora’s box of security and safety issues. A notable case was when a Twitter user, Sid, manipulated ChatGPT into generating Windows 10 Pro keys by posing an unusual request for the chatbot to imitate his late grandmother recounting these keys as bedtime stories. This instance, deemed illegal by OpenAI, spotlights the critical need for sturdy security frameworks. The situation is tinged with irony, as Microsoft, the company whose software keys were involved, has invested over $10 billion in OpenAI for ChatGPT’s development, inadvertently creating a tool that could cause them financial harm.

The key incident not only raises alarms for OpenAI but also places the broader AI industry under scrutiny. Sid replicated the same tactic with Google’s AI chatbot, Bard, and successfully procured Windows keys, signifying that security lapses are prevalent across AI platforms. An even bigger concern is the source of these keys. The ability of an AI to autonomously generate keys poses monumental threats, as this could potentially extend beyond software to gift cards and other codes, impacting businesses across sectors.

Given these revelations, the security and safety of AI systems warrant immediate attention. It is crucial for AI developers, stakeholders, and regulatory bodies to collaborate in devising comprehensive safeguards and ethical guidelines to curb the misuse of AI technologies. This necessitates ongoing assessment and fortification of security protocols, which not only safeguard companies’ intellectual property and financial assets but also forestall the malevolent exploitation of AI technologies.

OWASP Top 10 for LLMs: Can AI risk be tamed?

Security Boulevard, June 20, 2023

As the deployment of Large Language Models (LLMs) such as OpenAI’s ChatGPT and AI-based development tools like Microsoft’s GitHub Copilot gains momentum, the Open Web Application Security Project (OWASP) has expedited the preparation of a Top 10 list focused on LLM projects. The rapid development and use of LLMs necessitate a quick response to ensure security, and OWASP’s initiative has seen a surprising surge of interest from the security and developer community. Steve Wilson, the project leader and Chief Product Officer at Contrast Security, expressed astonishment at the level of engagement and stated the need for the group to be aggressive in releasing the first version of the document to provide guidance, as there is a lack of it in the industry.

The objective of this list is to equip software teams with an understanding of the foremost threats posed by LLMs and strategies to mitigate them. Chris Romeo, a cybersecurity expert, emphasized the urgency to address security issues derived from AI-based software by staying ahead of the curve. OWASP’s Top 10 list is projected to serve as a bedrock, not just enumerating the vulnerabilities, but also offering tangible advice for developers on secure coding practices with LLMs. The first draft of the list covers areas including prompt injections, data leakage, insufficient access controls, and training data poisoning, among others. Some of these vulnerabilities are consistent with those seen in web applications and APIs, while others, such as overreliance on LLM-generated content, are novel.

Despite the enthusiasm and expertise dedicated to creating this list, there are inherent challenges. LLM technologies are constantly evolving, and their complexity makes identifying vulnerabilities arduous. Moreover, the lack of standardization in LLM development compounds the difficulty in formulating a comprehensive list. Keeping the list updated is another hurdle, given the ongoing improvements in LLMs. Additionally, generating consensus among the experts, considering the diversity in their backgrounds, is a significant challenge. Wilson indicated that a more methodical approach might be adopted for future iterations of the list, including real-world data on breaches and vulnerabilities. The advent of LLMs is double-edged, and initiatives such as OWASP’s Top 10 list are vital in ensuring the security and safe deployment of these powerful tools.

Compromised ChatGPT accounts are for sale on dark web

CyberNews, June 20, 2023

The dark web is currently teeming with over 100,000 ChatGPT credentials up for grabs. As the workforce becomes increasingly dependent on Artificial Intelligence (AI) tools like ChatGPT to enhance productivity, the revelation of these compromised credentials potentially provides malefactors with a goldmine of information.

A staggering 15% of employees have been found to feed company data into ChatGPT, with one-fourth of this information being classified or sensitive. This data often includes internal business metrics, source codes, customer details, and personally identifiable information regulated by law. The enormity of such sensitive data being exposed has justifiably alarmed experts, as malevolent actors armed with this information could wreak havoc on both corporate and individual fronts – with ramifications ranging from tarnishing reputations to monumental financial downturns. A report by Group-IB, a cybersecurity firm, divulges how threat actors can effortlessly get their hands on such data. The research unearthed over 100,000 breached credentials in malware logs that are traded on the dark web’s clandestine marketplaces. These computers are typically compromised via phishing attacks, through which the malware pilfers saved credentials, banking information, crypto wallets, cookies, browsing history, and more.

Dmitry Shestakov, the Head of Threat Intelligence at Group-IB, warns of the risks involved when employees input classified communication or use ChatGPT to optimize proprietary code. The default setting on ChatGPT preserves all dialogues, which inadvertently results in a repository of delicate data. If this information falls into the wrong hands due to account breaches, the consequences can be detrimental. Realizing the imminent danger, companies like Samsung have prohibited the use of ChatGPT and similar AI tools. Nonetheless, the allure of ChatGPT remains irresistible for many; a recent Github survey revealed an overwhelming 92% of developers resort to AI to curb burnout and augment productivity. To fortify against the misuse of ChatGPT credentials, it is advised to regularly update passwords and employ two-factor authentication. Enabling 2FA ensures that an alert is sent to the user’s device in case of unauthorized attempts to access their ChatGPT account. The rapid adoption of AI in the workplace necessitates an equal urgency in implementing security measures to safeguard valuable data.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post