Towards Trusted AI Week 20 – Managing Risks in ChatGPT Integrations

Secure AI Weekly + Trusted AI Blog · May 17, 2023


If you want more news and valuable insights on a weekly or even daily basis, follow us on LinkedIn to join a community of experts discussing the latest news.

 

4 Risk Factors Devs Should Consider for ChatGPT Integrations

DevOps.com, May 12, 2023

OpenAI has recently launched a low-cost API for developers to leverage ChatGPT in their applications, and many engineering teams have jumped headlong into new integrations. However, cybersecurity and risk experts have warned that organizations embedding the ChatGPT API in their software will need to navigate a minefield of privacy, data governance, and security risks. The areas that experts suggest developers, DevOps teams, and CTOs think about to avoid potentially costly unintended consequences include data privacy and governance concerns, intellectual property infringement, new cybersecurity threat vectors, and problems with bias. Developers should determine in advance how IP and copyright disputes over generated content will be resolved, and understand that generative AI systems can produce wrong, offensive, or problematic content that can lead to negligence and discrimination issues.

The risk of deploying ChatGPT in an application is real and significant. Data privacy and governance concerns arise when users enter sensitive data into a ChatGPT-backed prompt, opening up a world of data exposure and regulatory compliance issues. Intellectual property infringement should be thought through when it comes to the responses generated by an app tapping into ChatGPT. The output ChatGPT generates can also introduce AI bias that leads to discrimination issues. Finally, developers and software design teams should consider what happens to the software’s attack surface when they integrate ChatGPT into their applications.

The potential of ChatGPT in an application is enormous, but developers should not overlook the risks and consequences associated with its deployment. Developers and organizations need to implement measures that align with their privacy policies and with any local, federal, or international data privacy laws. Before deployment, developers should therefore perform risk analysis and threat modeling, which can help them navigate the privacy, data governance, and security risks associated with ChatGPT integration in their software.
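As a concrete illustration of one such measure, here is a minimal sketch of a pre-processing step that redacts obviously sensitive strings before a user’s prompt ever leaves the application. The patterns, the `redact` helper, and the example input are illustrative assumptions rather than a prescribed implementation; a production system would rely on a vetted PII-detection library and rules matched to its own governance policy.

```python
import re

# Illustrative redaction patterns -- assumptions for this sketch, not a
# vetted PII library. A real deployment would tune these to its own data.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves our systems."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

user_input = "Charge card 4111 1111 1111 1111 and email jane@example.com"
safe_prompt = redact(user_input)
print(safe_prompt)
# Charge card [CARD REDACTED] and email [EMAIL REDACTED]
# safe_prompt, not user_input, is what would then be sent to the ChatGPT API.
```

The same hook is also a natural place to log what was redacted, which gives compliance teams an audit trail without storing the sensitive values themselves.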

Dataset bias and adversarial examples: AI’s data problem

The Drum, May 10, 2023

Artificial intelligence (AI) is a rapidly evolving field that has garnered significant attention, raising discussions about its future applications, potential job impacts, and predictive capabilities. However, it is crucial to approach AI with caution, as relying solely on its answers can be perilous. The fundamental factor behind AI’s effectiveness is data quality. AI systems learn from data, and the accuracy and representativeness of the training data directly affect their performance.

One example of the risks associated with AI lies in the case of Tay, a Microsoft chatbot designed for Twitter. Within a short time of its launch, Tay began exhibiting racist and discriminatory behavior. This happened because Tay learned from its interactions on Twitter, where users deliberately fed it offensive content. The episode highlights the biases and assumptions present in training data, which reflect the perspectives of those who contribute to or curate it.

Another concern arises from biased data affecting AI systems used in criminal justice, such as predictive policing and risk assessment. Algorithms employed to identify crime hotspots and flag potential suspects have been found to exhibit prejudice. For instance, a recidivism risk-assessment algorithm used in Florida courts falsely labeled black defendants as high-risk for future crimes nearly twice as often as white defendants with similar criminal histories. Such biases stem from the data on which the AI system was trained, underscoring the need for representative and unbiased datasets.

To address these challenges, it is crucial to provide AI systems with large volumes of high-quality data that accurately represent the tasks they are designed to perform. The training data should encompass diverse scenarios and variations that the AI system is likely to encounter in real-world applications. Additionally, ethical considerations and human values should be embedded in the design of AI systems and the data used for training, ensuring fairness, accuracy, and reliability in their outcomes.
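To make the dataset-bias concern concrete, the sketch below shows one simple audit a team might run: comparing false-positive rates across demographic groups on held-out predictions, the kind of analysis that surfaced the Florida disparity described above. The records, group labels, and numbers are invented purely for illustration.

```python
from collections import defaultdict

# Toy held-out predictions: (group, predicted_high_risk, actually_reoffended).
# All values are invented for illustration only.
records = [
    ("group_a", True,  False),  # flagged high-risk, did not reoffend (false positive)
    ("group_a", True,  False),  # false positive
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", True,  False),  # false positive
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", False, True),
]

def false_positive_rates(records):
    """Share of people who did NOT reoffend but were still flagged high-risk."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, predicted_high, reoffended in records:
        if not reoffended:          # restrict to true negatives
            negatives[group] += 1
            if predicted_high:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
# {'group_a': 0.666..., 'group_b': 0.333...} -- group_a is wrongly flagged
# twice as often, the same shape of disparity reported in the Florida case.
```

A large gap between groups in a check like this does not prove discrimination by itself, but it is a cheap signal that the training data or the model deserves closer scrutiny before deployment.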

AI Act: a step closer to the first rules on Artificial Intelligence

European Parliament, May 11, 2023

In a landmark decision, Members of the European Parliament (MEPs) have taken a decisive step towards safeguarding the security and integrity of Artificial Intelligence (AI) systems. MEPs have overwhelmingly supported the world’s first-ever comprehensive rules on AI, marking a significant milestone in regulating this rapidly advancing technology. The regulations, endorsed in committee with 84 votes in favor, 7 against, and 12 abstentions, underscore Europe’s commitment to a human-centric and ethical approach to AI development.

The endorsed draft negotiating mandate reflects MEPs’ commitment to transparency, risk management, and oversight of AI systems. By introducing crucial amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are safe, transparent, non-discriminatory, and environmentally friendly. The regulations adopt a risk-based approach, categorizing AI systems based on their potential risks and establishing corresponding obligations for providers and users. Systems posing an unacceptable risk to people’s safety, including those employing manipulative techniques or engaging in social scoring, are strictly prohibited.

Moreover, the regulations embrace a comprehensive framework to address the challenges posed by different types of AI. MEPs have expanded the scope of high-risk areas to include not only health and safety but also potential harm to fundamental rights, the environment, and political processes. The regulations pay special attention to foundation models like GPT, imposing additional transparency measures and ensuring compliance with fundamental rights, data protection, and copyright regulations. By striking a balance between innovation and protection, the regulations provide exemptions for research activities and open-source components while promoting the establishment of regulatory sandboxes for testing AI systems prior to deployment.

The MEPs’ endorsement of these pioneering AI regulations signifies a crucial milestone in Europe’s journey towards responsible AI development. By prioritizing security, transparency, and ethical considerations, Europe aims to lead the global discourse on AI governance. As the draft negotiating mandate moves towards endorsement by the entire Parliament, Europe stands poised to establish itself as a global leader in shaping the future of AI, providing legal certainty to businesses, protecting fundamental rights, and fostering innovation.

EU lawmakers back transparency and safety rules for generative AI

TechCrunch, May 11, 2023

Ensuring the security and safety of artificial intelligence (AI) has become a top priority for policymakers, as highlighted by the recent series of votes in the European Parliament. MEPs have backed a range of amendments to the draft AI legislation, focusing particularly on the foundation models that underpin generative AI technologies like OpenAI’s ChatGPT. The approved amendments place obligations on providers of foundation models to implement safety checks, data governance measures, and risk mitigations before bringing their models to market. These requirements include considering potential risks to health, safety, fundamental rights, the environment, democracy, and the rule of law.

Moreover, the amendments emphasize the need for foundation model makers to reduce the energy consumption and resource usage of their systems. Providers of generative AI technologies, such as ChatGPT, are also required to comply with transparency rules, ensuring users are aware that content is machine-generated. Adequate safeguards must be in place against the generation of content that breaches EU law, and providers must publish a summary of any copyrighted materials used in training their AI models. The intention behind these measures is to foster a human-centric and trustworthy AI environment while protecting fundamental rights and strengthening democratic oversight.

The European Parliament’s focus on AI regulation extends beyond foundation models. MEPs have also addressed concerns related to biometric surveillance, agreeing on changes to enhance protections for fundamental rights. The expanded list of prohibited practices now includes bans on “real-time” and “post” remote biometric identification systems, biometric categorization based on sensitive characteristics, predictive policing systems, emotion recognition systems in contexts such as law enforcement, border management, the workplace, and education, and indiscriminate scraping of biometric data to build facial recognition databases. While these amendments demonstrate a commitment to safeguarding privacy and human rights, enforcing the prohibitions against foreign entities that violate EU rules remains a challenge. Nevertheless, the European Parliament’s adoption of the amendments marks an important step in shaping the AI Act, setting the stage for further negotiations and a final compromise.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
