Towards Trusted AI Week 29 – Challenges of Enterprise LLM Adoption

Secure AI Weekly + Trusted AI Blog admin todayJuly 21, 2023 63

Background
share close

An AI detector mislabeled nearly every essay written by a non-native English speaker as being written by a bot

Insider, July 13, 2023

Safety and security issues of AI systems are under increasing scrutiny, as Stanford University research reveals that AI detection tools are incorrectly identifying essays written by non-native English speakers as AI-generated. In the study, more than 100 essays composed by non-native English speakers were analyzed by seven popular GPT detectors, alongside essays written by US eighth-grade native English speakers. Over half of the non-native English speakers’ essays were incorrectly labeled as AI-generated, with one detector marking almost 98% of these essays as AI-written. However, these detectors performed better on the native English speakers’ essays, wrongly assessing about 20% as AI-produced.

These findings pose significant concerns about potential negative consequences for non-native English speakers, who could be unjustly accused of using AI tools like chatbots for their schoolwork, college applications, or professional duties. AI-detection software, marketed as a countermeasure to misinformation produced by AI-tools such as ChatGPT, Dall-E, and MidJourney, is now in question regarding its effectiveness. The study reaffirms calls for addressing the inherent biases in these AI systems. The results were attributed to a metric called “text perplexity” used by many GPT-detection systems, measuring the unpredictability of word choices in a text. Consequently, non-native English speakers’ essays are being incorrectly flagged due to their limited linguistic range leading to low perplexity scores.

The study also criticized AI image detection programs, citing a New York Times report on the successful deception of five such tools. ChatGPT’s launch was accompanied by OpenAI’s admission of potential bias and harmful response issues, with the company pledging to improve. OpenAI’s CEO, Sam Altman, admitted to ChatGPT’s shortcomings in a tweet earlier this year, promising improvements, and voicing optimism about AI’s role in reducing societal biases.

The dark side of AI

Technology Decisions, July 13, 2023

CMTG, a leader in cybersecurity, has raised concerns about the rapid adoption of artificial intelligence (AI) and the associated risks. The firm’s Founder and Director, Carl Filpo, stressed the importance of addressing cybersecurity issues as AI continues to permeate various aspects of life. Despite the benefits, Filpo emphasized that AI exposes us to new challenges and vulnerabilities that could threaten the protection of sensitive data against cyber threats. He pointed out that the fast-paced integration of AI systems has broadened the spectrum of targets for cybercriminals, particularly because of the vast volumes of data these systems process.

AI systems are prime targets for hackers due to the extensive amount of personal data they use to make accurate predictions and decisions. If this data isn’t adequately protected, it could be a potential goldmine for malicious actors, Filpo explained. As such, organizations need to establish strong data protection measures, including data encryption, access controls, and anonymization techniques. These safeguards uphold privacy standards while taking advantage of the capabilities of AI technologies.

Moreover, Filpo warned of adversarial attacks on AI systems, where malevolent entities manipulate AI algorithms, introduce harmful data, or corrupt training datasets. Such actions can severely compromise the reliability and integrity of AI-powered systems, leading to potential business harm. The misuse of AI in image recognition systems, for instance, could have severe repercussions, especially in critical sectors like autonomous driving or medical diagnostics. As AI continues to drive innovation and efficiency across various sectors, Filpo concluded by emphasizing the need for a proactive approach to cybersecurity to minimize the risks associated with AI adoption.

The Challenges of Enterprise AI and LLM Adoption

Medium, July 12, 2023

Advancements in the deployment of Large Language Models (LLMs) in the business sphere bring a host of technical complexities to light. Subjects like technology choices, constructing efficient data pipelines, effective knowledge management, model quality, integration issues, and importantly, cybersecurity, demand attention. With no further ado, let’s dive into these multi-faceted aspects.

The anticipated evolution of AI hints at unique forms of language-based generative AI which could bring an entirely new set of cybersecurity puzzles. These advanced systems, with the potential to not only generate and interpret text that closely mimics human communication but also execute a range of distinct tasks, could become more autonomous than ever. This situation brims with undiscovered vulnerabilities and calls for a reassessment of current cybersecurity practices. Potential compromises of AI systems could lead to significant data breaches, possibly exposing confidential company information. The evolution of cyber threats, from simple security breaches to potential large-scale orchestrated disruptions, can be attributed to sophisticated forms of adversarial attacks, data poisoning, and even the manipulation of human users through advanced social engineering techniques employed by AI systems.

In conclusion, as the adoption of advanced language models into enterprise ecosystems is in its nascent stage, it presents a plethora of opportunities along with potential risks. This landscape of AI cyber risk will likely continue to be a prominent feature on the strategic agenda for decision-makers in enterprises, necessitating their constant vigilance and informed discernment. As this exploration into the technological side of adopting AI and LLMs in the business world concludes, the next focus will be on the human and organizational challenges that accompany this new era of technology.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post