FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
The White House, October 30, 2023
President Joe Biden has issued a decisive Executive Order to position the United States as a leader in the realm of artificial intelligence (AI), ensuring the nation captures the technology’s full potential while safeguarding against its risks. This directive reinforces AI safety and security, underpins privacy rights, promotes fairness and civil liberties, and advocates for consumer and employee protection. It also encourages innovation, competition, and reinforces U.S. leadership internationally. The order is part of a comprehensive initiative by the Biden-Harris Administration that expands on prior presidential actions, which have already garnered voluntary commitments from 15 top-tier companies for the development of safe and reliable AI.
To bolster national safety and security in the face of growing AI capabilities, the Executive Order mandates that developers of pivotal AI technologies report their safety testing outcomes to the government. This move aligns with the Defense Production Act, ensuring that AI with potential implications for national security, the economy, or public health undergoes rigorous testing. The NIST is tasked with formulating strict standards and testing protocols to pre-emptively secure AI systems. Meanwhile, the Department of Homeland Security is designated to enforce these standards across vital sectors, signifying the most vigorous government-led AI safety effort to date. Additionally, the Order’s provisions extend to the prevention of AI-facilitated biological threats and aim to establish a fortified defense against AI-related fraud, ensuring Americans receive authenticated information from their government.
The Executive Order also includes strategic development of a cybersecurity program using AI to identify and patch software vulnerabilities, building on the current AI Cyber Challenge. It calls for crafting a National Security Memorandum to guide military and intelligence applications of AI, setting a path for ethical, secure, and effective usage. By these actions, President Biden’s administration not only reasserts American ambition in AI but also underlines a commitment to ethical innovation, ensuring AI advancement benefits society while protecting citizens’ safety and security.
AI Bot Taken Down After Allegations Of Racist Stereotypes At Harvard
Free Press Journal, October 25, 2023
An incident at Harvard has brought to light the problematic issue of AI inheriting and possibly amplifying human biases, specifically concerning racial prejudice. An AI bot created by the Harvard Computer Society AI Group, named ClaudineGPT along with a variant termed “Slightly Sassier ClaudineGPT,” was criticized for perpetuating racist stereotypes, coinciding with the inauguration of Harvard’s President, Claudine Gay.
The controversy arose when the AI Safety Student Team (AISST) at Harvard alleged that the AI’s programmed instructions were fostering stereotypes by prompting it to generate ‘angry and sassy’ responses. This concern was communicated through an email that questioned the intent behind the model’s deployment, highlighting a broader industry trend where AI products demean marginalized communities. The email, from AISST’s communications director Chinmay M. Deshpande, expressed apprehension about the model, terming it “problematic” for echoing racist tropes and demanded its removal to avoid the perpetuation of harmful stereotypes.
In response to the ensuing outcry, the HCS AI Group asserted that the model was satirical and meant for entertainment during the inauguration, never intended to be a serious representation of President Gay. Despite their explanation that ClaudineGPT was a joke not aimed at causing offense, the AI model was promptly made inaccessible post-inauguration. This step by HCS AI Group highlights the critical need for clear ethical standards in AI development to prevent the dissemination of biased or discriminatory content, ensuring that AI’s advancement does not come at the cost of social equity and respect.
Britain publishes ‘Bletchley Declaration’ on AI safety
Reuters, November 1, 2023
The United Kingdom, alongside 28 other nations and the European Union, has unveiled the “Bletchley Declaration,” a commitment to enhance international coordination on the safety of artificial intelligence (AI). Released at the commencement of the AI Safety Summit at the historic Bletchley Park in England, the declaration represents a collective step forward in addressing the intricacies of AI advancements. The participating countries underscore the necessity for a unified approach to comprehend and manage AI’s potential threats, while also exploring its opportunities for progress.
In an era where AI’s impact is both promising and formidable, the declaration calls for greater visibility into the practices of organizations at the forefront of AI technology. It advocates for a system that obliges these entities to rigorously evaluate, report, and regulate the potentially adverse aspects of their AI applications. Reflecting on the declaration’s significance, British Prime Minister Rishi Sunak praised it as a milestone towards securing a safer AI future for coming generations by cementing an understanding among leading AI nations of the need for prompt action to evaluate the hazards associated with AI technologies.
The declaration outlined a dual-faceted strategy that aims to both pinpoint and comprehend AI-related risks and develop international policies to address them. This involves promoting open practices among private sector AI innovators, creating accurate benchmarks for evaluation, enhancing safety testing methods, and fostering government and research institutions’ capacities to handle AI safety issues. This initiative marks a significant commitment by world powers to not only acknowledge the challenges posed by AI but also to collaboratively work towards reducing them, ensuring that AI development is aligned with global safety and research efforts.
Chatbots are so gullible, they’ll take directions from hackers
The Washington Post, November 2, 2023
As artificial intelligence (AI) increasingly permeates our lives, the introduction of chatbots as personal assistants offers both convenience and a new set of security challenges. Equipped with advanced language processing capabilities, these AI assistants excel at tasks such as managing emails, searching for files, summarizing content, and engaging in dialogue. However, their ability to distinguish between legitimate instructions and potentially malicious directives from unauthorized users remains a critical vulnerability. Cybersecurity experts warn that the risk of exploitation by nefarious actors through this weakness is a significant concern.
AI systems, particularly those reliant on expansive language models, have been identified as susceptible to “prompt injection” attacks. This form of cyberattack involves covertly feeding the AI with deceptive commands, which can result in serious breaches, such as the dissemination of malware or the propagation of false information. Despite their infancy, the security fields surrounding these large language models (LLMs) are witnessing increased attention from both researchers and regulatory bodies. High-profile investigations and advisories, such as those by the Federal Trade Commission and Britain’s National Cyber Security Center, underscore the gravity of the situation and have led to calls for enhanced safety measures, including the development of robust testing standards by AI developers, as per a recent executive mandate from the White House.
The overarching concern with AI chatbots stems from their programming, which lacks the ability to critically evaluate the origin and intent behind the information presented to them. This gap in discernment is not just a theoretical flaw but a real-world security gap that could be exploited, leading to unauthorized access to sensitive data or the spreading of harmful messages without the user’s knowledge. Although there have been no confirmed incidents outside controlled experiments, the potential for high-stakes breaches is increasing in parallel with the popularity of AI personal assistants. Experts and developers are thus confronted with the imperative to establish more sophisticated defenses against these cyber threats, ensuring that AI operates within a framework that prioritizes user security and transparency about the associated risks.
Subscribe for updates
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.