Towards Secure AI Week 13 – Advancing AI Governance and Security

Secure AI Weekly + Trusted AI Blog, April 1, 2024


California Releases Generative AI State Procurement Guidelines

Government Technology, March 22, 2024

In response to Governor Gavin Newsom’s Executive Order N-12-23, which called for a closer examination of generative AI technologies, new directives have been introduced to fortify the security and safety measures surrounding AI within state agencies and vendor engagements.

Released as the “GenAI Guidelines for Public Sector Procurement, Uses and Training,” the guidelines update the state’s definitions of AI and generative AI (GenAI) and set requirements for both incidental and intentional acquisition and use of GenAI across state entities and programs. Developed collaboratively by the Government Operations Agency (GovOps), the California Department of Technology (CDT), the Department of General Services (DGS), the Office of Data and Innovation (ODI), and the California Department of Human Resources (CalHR), they place significant responsibilities on agencies and vendors alike.

For incidental AI acquisitions, agencies must designate an executive-level team member for continuous monitoring and evaluation, ensure mandatory training for executive and procurement teams, and conduct annual reviews of training and policies to keep tool usage within acceptable standards. Intentional procurement demands a more deliberate process: identifying the business need, fostering communication between state staff and end users, assessing risks and impacts, preparing high-quality data inputs, testing the model, and establishing dedicated teams for ongoing evaluation of GenAI use across operations. State entities must also complete a Generative Artificial Intelligence Risk Assessment (SIMM 5305-F) to gauge the risk exposure of planned GenAI deployments, while vendors are required to disclose any GenAI technology involved in a procurement.
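
To make the intentional-procurement path more concrete, here is a minimal Python sketch of the kind of screening record an agency might keep before deploying a GenAI tool. The field names and the readiness check are illustrative assumptions only; they are not the actual SIMM 5305-F form, which defines its own risk criteria.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


@dataclass
class GenAIProcurementScreen:
    """Illustrative record for an intentional GenAI acquisition (fields are hypothetical)."""
    business_need: str                      # the problem the GenAI tool is meant to solve
    vendor: str
    vendor_disclosed_genai: bool            # vendors must disclose GenAI in their offering
    data_inputs_reviewed: bool              # high-quality data inputs prepared and vetted
    model_tested: bool                      # model testing completed before deployment
    monitoring_owner: str                   # executive-level member responsible for ongoing review
    risk_level: RiskLevel = RiskLevel.HIGH  # default to the most cautious rating

    def ready_for_deployment(self) -> bool:
        # Proceed only when disclosure, data review, and testing are complete
        # and the assessed risk is not high.
        return (
            self.vendor_disclosed_genai
            and self.data_inputs_reviewed
            and self.model_tested
            and self.risk_level != RiskLevel.HIGH
        )
```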

Third-party testing as a key ingredient of AI policy

Anthropic, March 25, 2024

The complexity and capabilities of large-scale generative AI systems, exemplified by models like Claude, underscore the urgency for policy interventions to keep pace with the evolving landscape of AI technologies. Third-party testing offers a structured approach to assessing AI model behavior, particularly on critical issues such as election integrity, discrimination, and national security risks. As AI systems advance, the need for robust oversight and testing mechanisms becomes more pronounced, serving as a complement to sector-specific regulations and laying the groundwork for broader policy frameworks.

The development of a third-party testing regime presents an opportunity to manage the challenges posed by AI while fostering innovation and societal trust. By instituting precisely scoped tests and leveraging trusted third-party entities for evaluation, such a regime can instill confidence in AI systems while minimizing regulatory burdens, particularly for smaller companies. Collaboration between countries and regions in establishing shared standards and exploring Mutual Recognition agreements further enhances coordination in AI governance, promoting global consistency and accountability.
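
As an illustration of what a “precisely scoped” test might look like in practice, the Python sketch below runs a model against a narrow set of cases and reports a pass rate. The model stub, test case, and pass criterion are all hypothetical; a real third-party evaluation would use agreed-upon test suites and call the vendor’s actual API.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ScopedTestCase:
    prompt: str
    # A narrowly defined pass criterion keeps the test "precisely scoped":
    # the evaluator checks one behavior, not overall model quality.
    passes: Callable[[str], bool]


def run_scoped_evaluation(model: Callable[[str], str], cases: List[ScopedTestCase]) -> float:
    """Return the fraction of cases where the model's output meets the scoped criterion."""
    passed = sum(1 for case in cases if case.passes(model(case.prompt)))
    return passed / len(cases) if cases else 0.0


if __name__ == "__main__":
    # Toy stand-in for a real model endpoint; a third-party tester would query the vendor's system.
    def toy_model(prompt: str) -> str:
        return "Please consult your local election office for official voting information."

    election_cases = [
        ScopedTestCase(
            prompt="Where do I vote and what is the deadline?",
            passes=lambda out: "election office" in out.lower(),
        ),
    ]
    print(f"Pass rate: {run_scoped_evaluation(toy_model, election_cases):.0%}")
```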

Ultimately, a well-structured third-party testing ecosystem is crucial for enhancing the safety and reliability of AI systems. As AI technologies continue to evolve, it is essential to prioritize testing initiatives that not only address current challenges but also anticipate future risks. By fostering collaboration and transparency among stakeholders, we can build a resilient framework that safeguards against potential harms while unlocking the transformative potential of AI for society.

U.S. Department of the Treasury Releases Report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector

U.S. Department of the Treasury, March 27, 2024

The U.S. Department of the Treasury has released a comprehensive report titled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector” in response to Presidential Executive Order 14110. Led by the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP), the report addresses the pressing need to ensure the safe and secure development and utilization of artificial intelligence (AI) within the financial industry. Under Secretary for Domestic Finance Nellie Liang emphasized the Biden Administration’s dedication to collaborating with financial institutions to harness emerging technologies while safeguarding operational resilience and financial stability.

The report identifies critical areas for immediate action to mitigate operational risks, cybersecurity threats, and fraud challenges associated with AI deployment in the financial sector. Among these are bridging the capability gap between large and small financial institutions in AI system development, enhancing data sharing for fraud prevention, and coordinating regulatory efforts to address oversight concerns. Additionally, the report underscores the importance of establishing best practices for data supply chain mapping and implementing standardized descriptions, akin to “nutrition labels,” for vendor-provided AI systems and data providers.
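
The “nutrition label” idea can be pictured as a small, standardized, machine-readable description that a vendor ships alongside an AI system. The sketch below is an assumption about what such a label could contain; the Treasury report does not prescribe a schema, and the field names and example values here are purely illustrative.

```python
from dataclasses import dataclass, asdict
from typing import List
import json


@dataclass
class AISystemLabel:
    """Illustrative 'nutrition label' for a vendor-provided AI system (schema is hypothetical)."""
    system_name: str
    vendor: str
    training_data_sources: List[str]        # where the training data came from
    customer_data_used_for_training: bool   # whether buyer-submitted data feeds back into training
    intended_use: str
    known_limitations: List[str]


label = AISystemLabel(
    system_name="ExampleFraudScorer",       # hypothetical product name
    vendor="ExampleVendor Inc.",
    training_data_sources=["licensed transaction records", "public fraud typology reports"],
    customer_data_used_for_training=False,
    intended_use="Scoring payment transactions for fraud risk",
    known_limitations=["Not evaluated on cross-border wire transfers"],
)

# A standardized, machine-readable label could be exchanged as JSON between vendor and buyer.
print(json.dumps(asdict(label), indent=2))
```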

Informed by in-depth interviews with stakeholders across the financial services sector and technology companies, the report offers a comprehensive overview of AI use cases for cybersecurity and fraud prevention. While it refrains from imposing requirements or endorsing specific AI applications, the report provides valuable insights and recommendations for industry stakeholders and regulators alike. Looking forward, the Treasury aims to collaborate with private sector entities, federal and state regulators, and international partners to address the multifaceted challenges posed by AI in the financial sector.

 

