Towards Secure AI Week 22 – NIST’s New ARIA Program

Secure AI Weekly + Trusted AI Blog · June 3, 2024

Japanese police arrest man after computer viruses created by misusing AI

HITB SecNews, May 28, 2024

Japanese police have arrested a 25-year-old man from Kawasaki for allegedly using generative AI tools to create computer viruses. The rare arrest highlights growing concern about the misuse of AI technology. The suspect is accused of leveraging freely available AI programs to craft malicious code designed to disrupt corporate data, showing how dangerous these tools can become when they fall into the wrong hands.

The incident underscores the urgent need for stronger security measures and clear regulation to guard against malicious uses of AI. Proactively addressing the vulnerabilities of AI technologies, and ensuring their safe and ethical deployment, is essential to mitigating such risks and protecting digital infrastructure from harm.

NIST Launches ARIA, a New Program to Advance Sociotechnical Testing and Evaluation for AI

NIST, May 28, 2024

The National Institute of Standards and Technology (NIST) has introduced the Assessing Risks and Impacts of AI (ARIA) program, aimed at advancing the understanding of AI’s societal impacts. This initiative focuses on sociotechnical testing and evaluation to ensure AI systems are safe, secure, and reliable when deployed in real-world settings. ARIA seeks to develop methodologies and metrics to measure AI’s performance and risks within societal contexts.

By supporting the U.S. AI Safety Institute, ARIA aims to build a foundation for trustworthy AI systems, addressing both qualitative and quantitative risks. This program expands on NIST’s AI Risk Management Framework, operationalizing its recommendations for comprehensive AI risk assessment. The overarching goal is to mitigate risks and maximize the benefits of AI, fostering safe integration into society.

Job seekers trying AI hacks in their resumes to pass screening – don’t do this

CyberNews, May 27, 2024

Recent reports show that job seekers are increasingly using AI tools to polish their resumes and get past screening, raising concerns about the authenticity of the applications that reach recruiters. Many applicants rely on AI-generated content to craft impressive resumes, sometimes padding them with exaggerated or outright false information. This practice poses real risks for employers, who may unknowingly hire unqualified candidates and face security and performance problems within their organizations as a result.

The growing use of AI in resume creation underscores the need for better verification processes and safeguards. Companies must adopt robust methods to authenticate applicant information and ensure that hiring decisions are based on accurate, truthful data. This development highlights the broader implications of AI use and the importance of maintaining security and integrity in digital interactions.

 

