Towards Secure AI Week 2 – Unpacking NIST’s AI Framework

Secure AI Weekly + Trusted AI Blog, January 22, 2024


Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

NIST, January 2024

In its report on Trustworthy and Responsible Artificial Intelligence, the National Institute of Standards and Technology (NIST) presents a detailed taxonomy and terminology for adversarial machine learning (AML). Centered on the security and safety of AI systems, the report organizes the extensive AML literature into a structured format, classifying attacks by the type of machine learning involved, the stage of the learning process at which an attack can occur, and the attacker’s goals, strategies, capabilities, and knowledge of the AI learning process.
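
To make one branch of the taxonomy concrete, below is a minimal sketch of an evasion attack: a fast-gradient-sign-style perturbation against a toy logistic-regression classifier. This is our illustration, not code from the report; the weights, parameters, and function names are assumptions chosen for the example.

```python
import numpy as np

# Minimal sketch of an evasion attack (FGSM-style), one of the attack
# classes in NIST's taxonomy. Toy logistic-regression "model"; all
# weights and parameters here are illustrative, not from the report.

rng = np.random.default_rng(0)
w = rng.normal(size=5)          # trained weights (assumed given)
b = 0.1                         # bias
x = rng.normal(size=5)          # a benign input the attacker perturbs

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps=0.25):
    """One-step evasion: move x along the sign of the loss gradient.

    For logistic regression, d(loss)/dx = (p - y) * w, so a white-box
    attacker only needs the sign of that gradient.
    """
    p = predict(x)
    grad = (p - y_true) * w
    return x + eps * np.sign(grad)

y = 1.0                          # true label of x
x_adv = fgsm(x, y)
print(f"clean prob:       {predict(x):.3f}")
print(f"adversarial prob: {predict(x_adv):.3f}")  # pushed toward class 0
```

Evasion is only one cell of the taxonomy; the report gives the same structured treatment to poisoning, privacy, and abuse attacks, including those against generative models.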

Moreover, the report delves into strategies for countering and mitigating the impact of such attacks, underscoring the importance of addressing these challenges throughout the AI system’s lifecycle. The terminology aligns with established AML literature and is complemented by a comprehensive glossary that clarifies key concepts in AI system security for readers who are not specialists in the field.

Together, the taxonomy and terminology serve a crucial role: guiding the development of standards and future guidelines for evaluating and securing AI systems. By providing a unified language for a rapidly evolving field, the report lays a foundation for improving AI security measures and ensuring responsible, safe deployment.

Amazon Is Selling Products With AI-Generated Names Like “I Cannot Fulfill This Request It Goes Against OpenAI Use Policy”

Futurism, January 12, 2024

Amazon’s marketplace, known for its vast selection, has recently come under scrutiny for a peculiar issue: product listings with misleading or inaccurate descriptions, raising serious concerns about the security and safety of AI applications in e-commerce. The situation is exemplified by a dresser listing with conflicting information – its name bizarrely includes a statement about violating OpenAI’s use policy, while the description misstates the number of drawers. This inconsistency suggests haphazard use of AI tools like ChatGPT to generate product names and descriptions without proper verification. The trend is worrying, as it points to potential manipulation of search engine results to increase product visibility, compromising the integrity of online shopping.
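
A basic guard against exactly this failure mode is easy to sketch. The snippet below scans generated listing text for LLM refusal boilerplate before it could reach a catalog. It is our own illustration, not any marketplace’s actual pipeline; the phrase list and function names are assumptions.

```python
import re

# Illustrative pre-publication check: reject listing text that contains
# LLM refusal boilerplate. The phrase list and function names are our
# own assumptions, not any marketplace's actual validation pipeline.

REFUSAL_PATTERNS = [
    r"i cannot fulfill this request",
    r"against openai('s)? use policy",
    r"as an ai language model",
    r"i'm sorry, but i can('no|')t",
]

def looks_like_llm_refusal(text: str) -> bool:
    """Return True if generated text contains refusal boilerplate."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in REFUSAL_PATTERNS)

title = ("I Cannot Fulfill This Request It Goes Against "
         "OpenAI Use Policy - 3-Drawer Dresser")
if looks_like_llm_refusal(title):
    print("Blocked: listing title looks like unreviewed LLM output.")
```

A string filter like this only catches the most obvious symptom, of course; it does nothing about subtler inaccuracies such as a wrong drawer count, which still require human or model-assisted review.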

The problem extends beyond a single listing. A broader search on Amazon reveals several products with similar AI-generated content, including various outdoor and household items. This suggests a systemic problem in the e-commerce sector, where vendors increasingly rely on AI to automate content creation. Amazon’s response is crucial, as the company emphasizes its commitment to accurate and trustworthy product listings from third-party sellers; however, the effectiveness of its oversight and review processes in this context has yet to be ascertained.

This trend in Amazon’s marketplace is part of a larger, concerning pattern in online retail. The growing reliance on AI to create product descriptions and names with minimal human oversight reflects a broader shift in e-commerce practices. It raises significant questions about the responsibility of platforms like Amazon in facilitating these practices, and about the implications for consumer safety and trust in online shopping. As AI continues to integrate into e-commerce, stringent checks and balances to ensure the security and safety of AI applications become increasingly urgent.

In Leaked Audio, Microsoft Cherry-Picked Examples to Make Its AI Seem Functional

The Byte, January 13, 2024

Microsoft has been spotlighted for selectively showcasing the capabilities of its generative AI, particularly Security Copilot, a ChatGPT-like tool aimed at aiding cybersecurity tasks. As reported by Business Insider, this insight emerged from leaked audio of an internal presentation on an early version of the tool. The audio reveals that the AI, while analyzing Windows security logs for malicious activity, often produced ‘hallucinated’ or incorrect responses. Lloyd Greenwald, a Microsoft Security Partner featured in the presentation, admitted the need to ‘cherry-pick’ examples that portrayed the AI positively, because of the model’s stochastic nature and the challenge of consistently obtaining accurate answers.

Security Copilot operates much like a chatbot, providing responses in the style of a customer service rep. It is built primarily on OpenAI’s GPT-4 large language model, the same technology behind Microsoft’s Bing Search assistant, and early access to GPT-4 allowed Microsoft to explore its capabilities in cybersecurity. The initial stages weren’t without challenges, however: much like early versions of Bing AI, the tool frequently produced erroneous responses. This issue of ‘hallucination’, where an AI generates false or irrelevant information, is a widespread problem in large language models (LLMs). According to Greenwald, Microsoft has been working to reduce these inaccuracies by incorporating real data, but initially GPT-4 was applied to cybersecurity without domain-specific training, relying on its general dataset.
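
The ‘incorporating real data’ approach Greenwald describes is essentially grounding: give the model the actual log lines, then check its answer against them. The sketch below is our own minimal illustration of that idea, not Microsoft’s implementation; the log lines, prompt format, and consistency check are all assumptions.

```python
import re

# Minimal sketch of grounding plus a crude hallucination check, in the
# spirit of "incorporating real data" described above. The log lines,
# prompt format, and check are illustrative assumptions on our part.

LOG_LINES = [
    "2024-01-09 03:12:44 host=WS-041 event=4625 user=svc_backup status=failed_logon",
    "2024-01-09 03:12:59 host=WS-041 event=4625 user=svc_backup status=failed_logon",
    "2024-01-09 03:13:02 host=WS-041 event=4624 user=svc_backup status=logon",
]

def build_grounded_prompt(question: str) -> str:
    """Embed the real log lines so the model answers from evidence."""
    evidence = "\n".join(LOG_LINES)
    return (
        "Answer using ONLY the log lines below. If the logs do not "
        "contain the answer, say so.\n\n"
        f"LOGS:\n{evidence}\n\nQUESTION: {question}"
    )

def flag_ungrounded_hosts(answer: str) -> set:
    """Flag host names in the answer that never occur in the logs."""
    known = {m for line in LOG_LINES for m in re.findall(r"host=(\S+)", line)}
    mentioned = set(re.findall(r"\bWS-\d+\b", answer))
    return mentioned - known

print(build_grounded_prompt("Which host shows failed logons?"))
# A hallucinated host would be caught by the consistency check:
print(flag_ungrounded_hosts("Failed logons on WS-041 and WS-099."))
# -> {'WS-099'}
```

Checks like this are necessarily narrow: they catch an invented host name but not a plausible-sounding misreading of the logs, which is why cherry-picked demos can paper over so much.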

The leaked audio also disclosed that Microsoft presented a set of security questions to government officials, but it’s unclear whether these examples were among the selectively chosen ones. This raises questions about the transparency and reliability of AI presentations to potential clients. A Microsoft spokesperson clarified to Business Insider that the technology discussed was part of preliminary work preceding Security Copilot, focusing on simulated rather than real-world scenarios. This revelation underscores the importance of accurately representing AI capabilities, especially in critical fields like cybersecurity, and highlights the ongoing challenges in ensuring the safety and security of AI applications. 

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
