Towards Secure AI Week 48 – Biggest AI Security Bug Bounty

Secure AI Weekly + Trusted AI Blog · December 4, 2024

Artificial Intelligence Vulnerability Scoring System (AIVSS)

GitHub

The Artificial Intelligence Vulnerability Scoring System (AIVSS) has been proposed as a framework for comprehensively evaluating vulnerabilities in AI systems. Unlike static scoring models, AIVSS incorporates dynamic, AI-specific metrics, including model robustness, data sensitivity, ethical impact, and adaptability, alongside traditional security considerations. By quantifying these aspects, AIVSS provides a more nuanced assessment of risk and a systematic way to identify and address potential threats in AI systems.

AIVSS operates through a structured scoring methodology that integrates base, AI-specific, and impact metrics, all adjusted by temporal factors like exploit maturity and remediation availability. The scoring formula, which yields a value between 0 and 10, enables clear categorization of risks into severity levels from “None” to “Critical.” Its implementation involves evaluating key metrics, such as attack complexity, data sensitivity, and safety impact, to generate a vulnerability score that reflects real-world risks. This framework not only highlights the security and safety challenges unique to AI systems but also encourages collaboration within the community to refine its components, ensuring that AI technologies are robust, secure, and ethically aligned as they evolve.
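To make the arithmetic concrete, here is a minimal sketch of how such a composite score might be computed. The metric weights, the temporal adjustment, and the severity bands below are illustrative assumptions, not the official AIVSS formula.

```python
# Hypothetical sketch of an AIVSS-style composite score.
# Weights, temporal adjustments, and severity bands are assumptions
# made for illustration, not the published AIVSS specification.

def aivss_score(base, ai_specific, impact, exploit_maturity, remediation):
    """Combine metric groups (each 0-10) into a single 0-10 score.

    base             -- traditional metrics, e.g. attack complexity
    ai_specific      -- e.g. model robustness, data sensitivity
    impact           -- e.g. safety and ethical impact
    exploit_maturity -- 0 (theoretical) to 1 (weaponized)
    remediation      -- 0 (no fix) to 1 (official fix available)
    """
    raw = 0.4 * base + 0.35 * ai_specific + 0.25 * impact
    temporal = 0.5 + 0.5 * exploit_maturity   # mature exploits raise the score
    temporal *= 1.0 - 0.2 * remediation       # available fixes lower it
    return round(min(10.0, raw * temporal), 1)

def severity(score):
    """Map a 0-10 score to a severity label (illustrative bands)."""
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

s = aivss_score(base=6.0, ai_specific=8.0, impact=7.0,
                exploit_maturity=0.8, remediation=0.2)
print(s, severity(s))  # 6.0 Medium
```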

Microsoft launches $4M bug bounty challenge to secure AI, cloud

SCMedia, November 29, 2024

Microsoft has launched the Zero Day Quest, a groundbreaking bug bounty program offering up to $4 million in rewards for identifying vulnerabilities in its cloud and artificial intelligence (AI) systems. The initiative aims to bolster AI security by encouraging high-impact research and fostering stronger collaboration between Microsoft and the global security community. The program includes a unique multiplier for vulnerability submissions within specific scenarios, enhancing the potential payouts for researchers. Participants also have the opportunity to attend an onsite hacking event at Microsoft’s Redmond headquarters in 2025, further emphasizing the importance of hands-on collaboration.

In an effort to prioritize the security and safety of AI, Microsoft has doubled its bounty payments for AI-related vulnerabilities and provided researchers with direct access to its AI engineers and penetration testers. However, strict rules govern participation, prohibiting activities such as unauthorized data access, denial-of-service attacks, and phishing attempts. Tom Gallagher, Vice President of Engineering at the Microsoft Security Response Center, highlighted that the program’s scope goes beyond uncovering vulnerabilities. “It’s about building new and strengthening existing partnerships between the Microsoft Security Response Center, product teams, and external researchers,” he explained, underscoring the broader mission of advancing AI safety and security through collective effort.

InputSnatch – A Side-Channel Attack That Lets Attackers Steal Input Data From LLMs

CyberSecurity News, November 30, 2024

Cybersecurity researchers have uncovered a significant vulnerability in large language models (LLMs), highlighting potential threats to user privacy. The attack, named InputSnatch, exploits timing variations in cache-sharing mechanisms—commonly used to enhance LLM inference performance. These mechanisms, such as prefix caching and semantic caching, inadvertently leak information, allowing attackers to reconstruct user queries with alarming precision. In tests, InputSnatch achieved 87.13% accuracy in detecting cache-hit prefix lengths and up to 100% success in extracting semantic inputs from legal consultation systems. Such vulnerabilities are particularly concerning for applications handling sensitive data in healthcare, finance, and legal services.
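To illustrate the general idea (this is not the researchers' actual tooling), the sketch below shows how an attacker could in principle use response latency to guess whether a probed prefix is already in a shared cache. The endpoint, threshold, and candidate prefixes are hypothetical.

```python
# Conceptual sketch of a timing probe against a shared prefix cache.
# Endpoint URL, threshold, and candidate prefixes are hypothetical;
# this is not the InputSnatch implementation.
import time
import requests

ENDPOINT = "https://llm.example.com/v1/completions"  # hypothetical API
CACHE_HIT_THRESHOLD = 0.15  # seconds; would be tuned from baseline measurements

def probe(prefix: str) -> float:
    """Return the response latency for a query starting with `prefix`."""
    start = time.perf_counter()
    requests.post(ENDPOINT, json={"prompt": prefix, "max_tokens": 1}, timeout=10)
    return time.perf_counter() - start

def likely_cached(prefix: str) -> bool:
    """A noticeably faster response suggests the prefix hit a shared cache,
    i.e. another user recently submitted a query with the same prefix."""
    return probe(prefix) < CACHE_HIT_THRESHOLD

for candidate in ["I need legal advice about", "My diagnosis is", "Transfer $"]:
    print(candidate, "->", likely_cached(candidate))
```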

The study emphasizes the delicate balance between performance optimization and privacy protection. While caching significantly improves LLM efficiency, it also introduces exploitable security gaps. Researchers urge LLM providers to reassess their caching strategies and adopt robust privacy-preserving measures to prevent timing-based side-channel attacks. As LLMs become integral to critical industries, addressing these vulnerabilities is essential to ensuring user security and maintaining trust in AI-powered systems.
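One commonly discussed mitigation, sketched below under the assumption of a simple key-value response cache (not a recommendation from the paper itself), is to scope cache keys per user or tenant so that one user's queries can never warm the cache observed by another.

```python
# Sketch: scoping a response cache per user so cache timing reveals nothing
# about other users' queries. The cache interface here is hypothetical.
import hashlib

class PerUserCache:
    def __init__(self):
        self._store = {}

    def _key(self, user_id: str, prompt: str) -> str:
        # Binding the key to the user prevents cross-user cache hits,
        # closing the timing channel at the cost of a lower hit rate.
        return hashlib.sha256(f"{user_id}:{prompt}".encode()).hexdigest()

    def get(self, user_id: str, prompt: str):
        return self._store.get(self._key(user_id, prompt))

    def put(self, user_id: str, prompt: str, response: str):
        self._store[self._key(user_id, prompt)] = response
```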

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
