Towards Trusted AI Week 19 – White House Initiatives on Stress-testing AI at Defcon

Secure AI Weekly + Trusted AI Blog admin todayMay 11, 2023 82

Background
share close

If you want more news and valuable insights on a weekly and even daily basis, follow our LinkedIn to join a community of other experts discussing the latest news. 

 

AI: Grappling With Trust, Risk and Security Management

Bank Info Security, May 4, 2023

Artificial Intelligence (AI) has become a pivotal aspect of our technological landscape, and as its influence grows, so does the need to prioritize its safety and security. Gartner, a prominent research firm, has introduced a new research category known as AI Trust Risk and Security Management (AI TRiSM). Avivah Litan, a distinguished analyst, has been assigned to this domain. In her recent discussion, Litan shed light on the evolving attack surface of AI, encompassing both AI-based attacks and those targeted against AI. Furthermore, she highlighted the upcoming trends that demand attention as we move into the second half of 2023.

One intriguing aspect emphasized by Litan is that the tools used to protect AI from hackers can also be harnessed to ensure fairness and impartiality within AI systems. By making slight modifications to filters and rules, AI can be rendered more trustworthy. This concept lies at the heart of the field of AI TRiSM, which aims to establish reliable AI, manage associated risks, and fortify systems against potential breaches by malicious actors. The fusion of security and fairness becomes a critical consideration for organizations adopting AI technologies.

During an insightful video interview with Information Security Media Group, Litan discussed several key aspects related to AI safety and security. She explored the current state of AI in the era of technologies like ChatGPT, highlighting its impact and implications. Litan also examined the expanding attack surface of AI, pointing out the rising vulnerabilities and threats. Additionally, she provided valuable guidance to application leaders on how to effectively prepare for the impact of AI and navigate the associated risks.

In conclusion, as AI continues to advance and integrate into various industries, ensuring its safety and security becomes an imperative task. The introduction of AI TRiSM by Gartner and the expertise of analysts like Avivah Litan offer valuable insights into managing risks, enhancing trustworthiness, and fortifying defenses against potential attacks. By prioritizing AI safety and security, organizations can harness the transformative potential of AI while upholding responsible and ethical practices in an increasingly interconnected world.

Ensuring artificial intelligence is safe and secure

TickerNews, May 5, 2023

In our ever-advancing technological era, an array of AI-based tools has emerged, simplifying various aspects of our lives. However, with the widespread adoption of these tools, concerns pertaining to safeguarding data privacy and protecting sensitive information have burgeoned. The exponential growth of AI technology has given rise to apprehensions surrounding the use of Generative AI systems, which can be readily accessed and manipulated by individuals with internet connectivity. This raises a pressing question: how can we establish a framework that ensures the safe and secure utilization of AI?

To shed light on this matter, we turn to Rob Nicholls, an esteemed academic from the University of New South Wales. Nicholls has extensively studied the intersection of technology and security and shares his insights on the subject. As our reliance on AI tools deepens, so does the imperative for robust safety measures. Privacy concerns must be addressed to maintain public trust and protect personal data from unauthorized access. By implementing stringent data protection protocols, we can mitigate the risks associated with AI systems and foster a secure environment for users. Additionally, the issue of security encompasses not only the protection of sensitive information but also the prevention of malicious manipulations of AI systems. Ensuring that access to AI tools is regulated and authenticated can significantly reduce the likelihood of unauthorized exploitation. By implementing strong authentication mechanisms and regularly updating security measures, the potential for misuse and manipulation can be minimized.

Collaboration between various stakeholders, including technology developers, policymakers, and researchers, is crucial in establishing a comprehensive framework for AI safety and security. Initiatives such as open dialogue, sharing best practices, and conducting thorough risk assessments can enhance the collective understanding of potential vulnerabilities and enable the development of effective countermeasures. As AI continues to permeate our daily lives, it is imperative that we proactively address the safety and security concerns associated with its use. By prioritizing privacy protection, implementing robust security measures, and fostering collaborative efforts, we can forge a path towards an AI-powered future that is both safe and secure for all.

New White House AI Initiatives Include AI Software-Vetting Event at DEF CON

DarkReading, May 5, 2023

The White House recently unveiled a series of measures aimed at promoting responsible innovation in AI while addressing its associated risks, particularly in cybersecurity. These actions reflect the administration’s commitment to ensuring the safety and security of AI technologies. One significant step involves organizing an event at DEF CON 31, where leading developers will subject their algorithms to rigorous public vetting. By emphasizing the cyber-risks of AI, the White House highlights the urgent need to address the trustworthiness of AI as a national security issue.

The Biden-Harris administration has emerged as a strong advocate for AI safety and security. Their efforts include the release of the “Blueprint for an AI Bill of Rights” in October and the development of a National Artificial Intelligence Research Resource. In addition, the National Institute of Standards and Technology (NIST) introduced the AI Risk Management Framework. These initiatives underscore the importance of prioritizing cybersecurity when evaluating the risks associated with AI.

To support the objective of AI cybersecurity, the government will fund seven new National AI Research Institutes, focused on various research areas, including AI cybersecurity. This investment will contribute to mitigating the cyber-risks posed by AI technologies. Another significant development is the DEF CON AI Village Event, where prominent AI companies have committed to a public evaluation of their AI systems. Independent testing by community partners and AI experts will ensure thorough assessments, free from corporate or governmental influence, and will shed light on algorithmic biases and cybersecurity vulnerabilities.

These government actions highlight the growing recognition of the critical importance of AI safety and security. By promoting responsible AI innovation and cybersecurity measures, the government aims to address the risks associated with AI and foster public trust in this transformative technology.

Add ‘prompt’ to the long list of injection attacks

Security magazine, May 5, 2023

Injection attacks have long been a serious concern, representing one of the most hazardous methods employed by cybercriminals. These attacks involve the insertion of unverified input into a program, leading to the manipulation of commands or queries and resulting in data loss, corruption, security breaches, information leakage, and loss of control. Adversaries can gain unauthorized access and manipulate database information, obtaining administrator privileges through successful injection attacks.

While injection attacks pose a significant threat, a new type of injection attack aimed specifically at AI systems has emerged. Generative AI bots, such as ChatGPT and Google Bard, are designed to provide human-like responses and narratives, following instructions when prompted with questions. However, studies reveal that these tools can be manipulated to perform malicious tasks, respond unfavorably, disclose sensitive information, or bypass safety filters. Known as prompt injection attacks, these attacks involve carefully tailored prompts that deceive AI systems and override their guardrails.

Prompt injection attacks share similarities with code injection attacks, with the primary distinction being that the prompt serves as the input in AI systems. The accessibility of prompt injection attacks is a cause for concern, as they can be executed by individuals without coding or technical skills, provided they are creative and clever in their malicious prompts. This low barrier to entry may result in prompt injection attacks surpassing standard injection attacks in frequency. Safeguarding AI systems against prompt injection attacks requires a multi-faceted approach, including comprehensive employee training to recognize scams and social engineering attacks facilitated by AI, as well as implementing stronger filters and rules within AI models to prevent unexpected behavior.

Moreover, organizations must continuously adapt their security measures, as threat actors will persistently seek new ways to exploit AI systems. By investing in training and awareness programs and encouraging users to report vulnerabilities through bug bounty programs, organizations can complement technical controls and enhance their overall defense against prompt injection attacks and other AI-related risks.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post