
Adversa AI Joins Coalition for Secure AI (CoSAI)
February 14, 2025 – Adversa AI, a leading authority in AI security and safety, is proud to announce its sponsorship of the Coalition for Secure AI (CoSAI). In conjunction with ...
Secure AI Weekly + Trusted AI Blog admin todayFebruary 17, 2025 18
DataBricks, February 12, 2025
Databricks has unveiled the second edition of its AI Security Framework (DASF 2.0), a comprehensive guide designed to address the growing risks associated with AI deployments. The framework identifies 62 technical AI risks and introduces 64 mitigation controls, offering an end-to-end risk profile for AI systems. By aligning with leading standards like MITRE ATLAS, NIST 800-53, and the EU AI Act, DASF 2.0 empowers organizations to integrate security into their AI innovation. It also includes the new DASF Compendium, a practical tool for mapping risks and controls to industry standards, making it easier to operationalize AI security.
To foster a secure AI culture, Databricks has introduced upskilling resources, such as an AI Security Fundamentals Course, how-to videos, and AI Risk Workshops. These initiatives aim to educate stakeholders on best practices for managing AI risks. Additionally, the updated Security Analysis Tool (SAT) helps organizations monitor adherence to AI security guidelines. With DASF 2.0, Databricks bridges the gap between innovation and governance, equipping organizations to harness AI’s transformative potential while ensuring its safe and responsible use.
Data&Society, February 9, 2025
A promising solution, inspired by cybersecurity and military practices, is red-teaming—a method where adversarial techniques are used to uncover vulnerabilities in AI systems. A new report, Red-Teaming in the Public Interest, based on a collaborative effort between Data & Society and the AI Risk and Vulnerability Alliance (ARVA), examines how this approach is being tailored for genAI evaluation. Drawing from 26 interviews and observations at three public red-teaming events, the report highlights how red-teaming can play a critical role in identifying and addressing AI risks.
The report underscores that red-teaming genAI raises complex questions beyond its methodology, such as determining whose interests are protected, defining problematic behaviors, and clarifying the role of the public in the process. The authors propose an expanded vision of red-teaming—one that goes beyond testing finalized systems to involve the public in assessing genAI harms at multiple stages of development. This broader approach seeks to ensure that public safety and interests remain central to AI governance. To explore this further, an online discussion on Thursday, February 20, will delve into red-teaming’s role in shaping a safer and more accountable landscape for genAI systems.
Gov.uk, February 14, 2025
The UK government has rebranded its AI Safety Institute as the AI Security Institute, signaling a sharper focus on safeguarding national security and protecting citizens from AI-related threats. This shift, announced on February 14, 2025, is a key element of the government’s Plan for Change, which aims to harness AI’s economic potential while addressing risks like cyberattacks, fraud, and misuse in creating illegal content. The institute will launch a new criminal misuse team, working with the Home Office to prevent AI from enabling crimes such as child exploitation. Partnering with organizations like the National Cyber Security Centre (NCSC) and the Ministry of Defence, the institute will strengthen its ability to assess risks and provide evidence-based guidance to policymakers.
In addition, the government has established a partnership with AI company Anthropic to explore how AI can enhance public services and drive scientific breakthroughs. This collaboration, led by the Sovereign AI unit, highlights the UK’s dual focus on innovation and security. Ian Hogarth, Chair of the AI Security Institute, emphasized the importance of addressing public safety concerns, while Anthropic CEO Dario Amodei expressed commitment to ensuring AI’s secure and responsible use. Together, these efforts reflect the UK’s commitment to advancing AI in a way that prioritizes safety, trust, and economic growth.
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.
Written by: admin
Company News admin
February 14, 2025 – Adversa AI, a leading authority in AI security and safety, is proud to announce its sponsorship of the Coalition for Secure AI (CoSAI). In conjunction with ...
Adversa AI, Trustworthy AI Research & Advisory