Towards Trusted AI Week 29 – Challenges of Enterprise LLM Adoption
An AI detector mislabeled nearly every essay written by a non-native English speaker as being written by a bot Insider, July 13, 2023 Safety and security issues of AI systems ...
Secure AI Weekly + Trusted AI Blog admin todayJuly 29, 2023 51
White House, July 21, 2023
The Biden-Harris Administration has underscored its commitment to harness the transformative potential of Artificial Intelligence (AI), while simultaneously ensuring its responsible and secure use. Taking decisive steps towards safeguarding American citizens, the administration is in the process of developing an Executive Order and endorsing bipartisan legislation. Their commitment also extends to hosting a significant gathering at the White House, which includes seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The aim of this initiative is to confirm the voluntary commitments secured from these firms, intended to foster responsible, safe, and transparent AI development. The administration strongly believes that these companies hold a responsibility to ensure their products pose no threat to public safety and is urging them to adhere to rigorous safety standards.
These immediate pledges emphasize three central tenets critical for the future of AI: safety, security, and trust. The administration is relentlessly working towards formulating an executive order and advocating bipartisan legislation to maintain America’s leadership in responsible innovation. The commitments made by the seven leading AI companies are comprehensive. They include conducting thorough safety checks of their AI systems, sharing information on AI risk management, prioritizing cybersecurity, and promoting transparency to establish public trust. Moreover, the companies are committed to researching the societal risks posed by AI systems and developing AI systems that can significantly contribute to addressing society’s most significant challenges.
As these domestic initiatives take shape, the Administration is also collaborating with international allies to formulate a robust global framework for AI development and use. Further, the Administration has previously undertaken numerous proactive measures, such as holding discussions with various stakeholders on AI-related risks, convening with CEOs of leading AI companies, and publishing a blueprint for an AI Bill of Rights. They have also signed an Executive Order to eliminate bias in the design and use of new technologies and made significant investment in establishing new National AI Research Institutes. A National AI R&D Strategic Plan has been released for responsible AI advancement, and draft policy guidance for federal agencies will soon be released to ensure the American people’s rights and safety are prioritized in the development and use of AI systems.
BBC, July 18, 2023
According to a recent BBC report, a high-ranking security official has urgently called for the incorporation of cybersecurity measures into artificial intelligence (AI) systems. Lindy Cameron of the National Cyber Security Centre highlighted the importance of establishing sturdy security protocols during AI’s early development stages. As firms scramble to create innovative AI products, there’s growing apprehension that security could become an afterthought, a negligence that a former intelligence chief warns could lead to “devastating” malicious attacks.
AI’s anticipated involvement in diverse facets of daily life, from residential and urban automation to high-level national security and warfare, is not without inherent risks. As Robert Hannigan, former head of the UK’s GCHQ, said, “attacks on these systems could prove catastrophic.” One major concern is the rapid race among companies to secure a foothold in the thriving AI market, potentially neglecting the essential consideration of misuse risks. Lindy Cameron warns of the considerable complexity of AI models and urges the application of basic principles from the onset of development to avoid future difficulties in retrofitting security.
The brittleness of AI systems presents an opportunity for malicious entities to exploit and manipulate these tools to their advantage, leading to a “cat and mouse game,” as Lorenzo Cavallaro, Professor of Computer Science at University College London described. This includes tactics like ‘poisoning’ the AI’s learning data with biased information, making detection and rectification challenging. The complexity and potential corruption of AI systems could lead to a crisis of trust, posing a significant challenge to the widespread adoption of AI. Amid theoretical concerns, signs of real-world attacks on systems are now surfacing, marking the beginning of an era where AI, if left unchecked, could become a tool for widespread disruption and national security threats.
Reuters, July 19, 2023
The United Nations Security Council has, for the first time, addressed the issue of artificial intelligence (AI) in a recent meeting. Both the United States and China expressed reservations about the potential misuses of AI, the former warning against potential human rights abuses, and the latter likening the unregulated technology to a “runaway horse”. Presiding over the meeting was Britain’s Foreign Secretary, James Cleverly, who highlighted AI’s transformative potential in all facets of human life, while stressing the urgent need for global governance of such boundary-less technology.
High-profile figures, such as U.N. Secretary-General Antonio Guterres, Anthropic co-founder Jack Clark, and China-UK Research Center for AI Ethics and Governance’s co-director Professor Zeng Yi, provided key insights to the 15-member council. Guterres pointed to the profound impact of both military and non-military uses of AI on global peace and security, supporting the creation of a new U.N. body to oversee the governance of this revolutionary technology. China’s U.N. Ambassador, Zhang Jun, also expressed support for a central U.N. role in setting guiding principles for AI, emphasizing the importance of human-centric regulation and a balance between scientific advancement and security.
On the other hand, Jeffrey DeLaurentis, Deputy U.S. Ambassador to the U.N., stressed the importance of international cooperation in addressing the human rights risks posed by AI and other emergent technologies, and sternly advised against any member state’s use of AI for suppression or disempowerment of their populace. Meanwhile, Russia’s Deputy U.N. Ambassador Dmitry Polyanskiy questioned the appropriateness of AI discussions within the Security Council, advocating instead for a detailed, scientific dialogue already underway in specialized platforms.
Security Week, July 21, 2023
Google has taken a pivotal step towards AI security by forming an AI Red Team, a specialized squad dedicated to testing and analyzing the resilience of artificial intelligence (AI) systems. This move follows the launch of the Secure AI Framework (SAIF), intended to secure the design, implementation, and protection of AI technologies. Google recently published a report which describes typical types of AI attacks and shares crucial lessons learned, emphasizing the significance of such red teaming exercises in understanding AI system vulnerabilities and potential threats.
This AI Red Team mirrors the functionality of traditional red teams but is armed with specialized AI expertise to orchestrate complex technical onslaughts on AI systems. Their role involves acting as hypothetical adversaries to examine the robustness of real-world AI products and features against plausible attacks. One notable example given by Google is “prompt engineering”, a prevalent AI attack where prompts are manipulated to force the system to respond as per the attacker’s design. For instance, an attacker could trick an AI-based phishing detection system into classifying a malicious email as safe by inserting an invisible command within the email.
The report also brings attention to risks associated with training data used for Large Language Models (LLMs). It highlights instances where personal data was extracted from an LLM, despite significant efforts to remove such sensitive information from these datasets. Access control to LLMs is emphasized as a crucial security measure, with an example where a student exploited an essay-grading LLM to consistently award top grades to papers containing a specific keyword. Concluding its report, Google underscores the need for collaboration between traditional red teams and AI specialists, and also admits that some challenges may be complex and not easily resolved. Traditional security measures can be potent for many risks, but certain AI system attacks may require a more layered security strategy.
Fagen wasanni, July 21, 2023
Artificial Intelligence (AI) corporations are amplifying their efforts to secure user content safety, particularly pertaining to AI chatbots that acquire knowledge from online data and user engagements. Duke University Professor Sultan Meghji, who is also a former Chief Innovation Officer for the FDIC, has highlighted concerns about the drawbacks of AI. He underscores that extensive language models, utilized by AI systems, risk being influenced by the undesirable aspects of reinforcement learning. AI firms are responding by taking exhaustive measures, including refining their AI model training processes and data sources. These efforts help reduce the chances of AI systems learning and disseminating harmful or biased content.
In addition, AI firms are employing ethical directives and governance structures, which can steer AI developers to manage chatbot interactions in accordance with ethical standards, thereby avoiding potential harm to users. They are also investing in R&D to strengthen their AI systems against manipulation, including adversarial attacks where users intentionally supply misleading information. Furthermore, AI firms are looking at methods to enhance AI models’ understanding of biases in the data they are trained on, with a view to decrease the reinforcement of pre-existing biases in chatbot interactions.
AI developers are also leveraging the power of user feedback for continuous improvement of AI chatbot safety. By seeking user feedback and enabling mechanisms to report issues, AI companies can gain insights and make timely amendments to their models, thereby enhancing the safety and reliability of the AI chatbot experience. While prioritizing user safety, AI firms understand the necessity of balancing safety and user experience. They are employing sophisticated natural language processing abilities to create chatbots that provide useful, accurate information while minimizing potential risks. This balance allows for an enhancement of user experiences while ensuring their safety.
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.
Written by: admin
Secure AI Weekly admin
An AI detector mislabeled nearly every essay written by a non-native English speaker as being written by a bot Insider, July 13, 2023 Safety and security issues of AI systems ...
Adversa AI, Trustworthy AI Research & Advisory