Towards Trusted AI Week 17 – Prompt injections VS autonomous agents

Secure AI Weekly + Trusted AI Blog admin todayApril 27, 2023 149

Background
share close

Is this the year that AI breaks into our bank accounts?

Dazed Digital, April 19, 2023

According to Alex Polyakov, the founder and CEO of Adversa AI, the security of AI-based biometric systems is already falling apart.

Artificial intelligence (AI) has become an integral part of our daily lives, from using facial recognition to access bank accounts and smartphones to voice authentication for identity confirmation. However, the security of these biometric-based systems is increasingly being threatened by advancements in AI. Co-founders of the Centre for Humane Technology, Tristan Harris and Aza Raskin, have warned that 2023 will be a pivotal year for AI disruption. They claim that content-based verification is likely to be one of the first victims of real cyberattacks, as it grants direct access to critical information.

AI-based cyberattacks have already been witnessed in the real world. Deepfake videos have been used to pass “liveness checks” that are used by banks, dating apps, and crypto companies, as well as tricking government systems that rely on facial recognition technology. Deepfake audio has been used to hack bank accounts and authorize millions of dollars of fraudulent payments. Biometric identifiers like fingerprints and eye patterns are also vulnerable to AI-generated fake patterns that might deceive security checks. Such attacks are currently being demonstrated in labs, and it is only a matter of time before they become prevalent.

Voice cloning scams are also on the rise, utilizing AI technology that can simulate someone’s voice based on just a few seconds of audio. This technology powers accessibility tools, voice assistants, and various forms of entertainment, but in the wrong hands, it can be dangerous. Criminals can use it to deceive people, leading to fraudulent activities like ransom demands or scamming individuals out of their money. The lack of a robust regulatory framework to control the use of AI only adds to the concern. 

ChatGPT may hinder the cybersecurity industry

VentureBeat, April 22, 2023

Artificial intelligence (AI) chatbots like ChatGPT have taken the world by storm, offering impressive accuracy and human-like responses that have won them millions of users in record time. However, while this technology has numerous cybersecurity use cases, it is also being viewed with suspicion by experts, who are concerned about the ease with which it can be used to create convincing phishing messages. With Americans having lost $40 billion to these scams in 2022 alone, it is a serious issue that needs to be addressed.

On the one hand, AI-powered chatbots like ChatGPT offer many benefits to cybersecurity companies. By automating repetitive tasks and generating reports, they can help analysts to sift through data more quickly and accurately, saving time and resources. Chatbots can also detect patterns and search within specific parameters, making it easier to identify and address security threats. However, the lack of security measures in current chatbot technology means that it is also easy for hackers to infiltrate software and develop sophisticated phishing tools, leading to concerns about job security and the need for adequate safety measures.

The future of AI chatbots like ChatGPT looks very bright, with major investments being made by companies like Microsoft to expand their knowledge and abilities. However, this technology also poses a number of challenges, including the need to combat nefarious users who seek to use it to cause harm in new and innovative ways. Ultimately, the success of AI chatbots in the cybersecurity industry will depend on the ability of developers and researchers to address these issues and work collaboratively to create a safer online environment for everyone.

How prompt injection can hijack autonomous AI agents like Auto-GPT

VentureBeat, April 20, 2023

The increasing use of autonomous AI agents like Auto-GPT and large language models (LLMs) has brought with it new security vulnerabilities. A recent demonstration by Simon Willison, creator of the open-source tool datasette, showed how prompt injection attacks could be conducted on autonomous AI agents that integrate with LLMs like GPT-4. Prompt injection attacks can manipulate the LLM to perform arbitrary actions, which could potentially become dangerous when applied to AI agents with the ability to trigger additional tools via API requests or execute code in a shell. Security experts believe that this introduces significant risk to companies that incorporate autonomous agents into their technology stack, and the potential for data exfiltration through Auto-GPT is a possibility.

As a result, organizations need to tread carefully when adopting LLM-connected autonomous agents. The risks of prompt injection attacks need to be carefully considered, and security best practices and risk-mitigation strategies need to be developed and implemented. Dan Shiebler, head of machine learning at cybersecurity vendor Abnormal Security, warns that prompt injection attacks are an extremely serious risk, serious enough to prevent many companies from incorporating this technology into their own stack.

While the risks of prompt injection attacks should not be taken lightly, it is important not to panic unnecessarily. Joseph Thacker, an AppOmni senior offensive security engineer, believes that while these attacks are worth talking about, they will not be the end of the world. Vulnerabilities will exist, but they can be mitigated. It is important for organizations to stay informed and vigilant, keeping security at the forefront of any new technology adoption.

As AI risk grows, Anthropic calls for NIST funding boost: ‘This is the year to be ambitious’

VentureBeat, April 21, 2023

As the risks associated with AI innovation continue to increase, Anthropic, an AI research company, has called for $15 million in funding for the National Institute of Standards and Technology (NIST) to support the agency’s efforts in AI measurement and standards. This call to action comes as part of a broader effort to maintain American leadership in the development of critical technologies, which received bipartisan support during a recent budget hearing about 2024 funding of the U.S. Department of Commerce.

Anthropic’s proposal for NIST funding is a simple and “shovel-ready” idea that is available to policymakers today. According to Anthropic co-founder Jack Clark, this is the year to be ambitious about funding because policymakers are waking up to AI and proposing ideas. The call for funding is an opportunity to bring government, academia, and industry together to improve measurement and evaluation systems, which will promote safe technological innovation and ensure more people are at the table when it comes to AI conversations.

Anthropic’s call for NIST funding highlights the importance of creating a government infrastructure that gives government and academia a way to train systems at the frontier and build and understand them themselves. This will help mitigate the downsides of “industrial capture,” where a minority of actors in the private sector drive the development of costly AI systems. By improving measurement and evaluation systems and creating gold-standard public datasets, policymakers can prioritize society-wide conversations about the risks of AI innovation and promote safe technological innovation.

In summary, Anthropic’s call for NIST funding to promote safe AI innovation is a timely and important proposal. By bringing government, academia, and industry together to improve measurement and evaluation systems, policymakers can prioritize society-wide conversations about the risks of AI innovation and ensure safe technological innovation.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    If you want more news and valuable insights on a weekly and even daily basis, follow our LinkedIn to join a community of other experts discussing the latest news. 

    Written by: admin

    Rate it
    Previous post