Towards Trusted AI Week 44 – Takeaways from Securing AI Summit

Secure AI Weekly + Trusted AI Blog admin todayNovember 2, 2023 65

Background
share close

16 Takeaways from the 2023 Securing AI Summit in San Francisco

BGV, October 24, 2023

The rapid integration of Generative AI (GenAI) technologies like ChatGPT into various industries is a double-edged sword. While these advancements promise significant productivity boosts, they also bring forth heightened concerns regarding cybersecurity. Just as we witnessed the gradual embrace of innovations from PCs to mobile internet, the adoption of GenAI is steadily climbing, projected to reach a similar saturation in the next few years. However, the accelerated adoption rate doesn’t overshadow the risks; one prominent threat being the heightened potential for sophisticated phishing attacks due to the advanced capabilities of these AI systems. Addressing this challenge is made even more complex by the current shortage of cybersecurity experts, underscoring the urgent need for organizations to fortify their defenses and prioritize AI security.

As we delve deeper into the intricacies of AI security, we encounter specific threats like prompt injection, data poisoning, and data leakage. Prompt injection exploits the AI’s inability to differentiate between legitimate instructions and deceptive inputs, potentially causing significant data and financial losses. Data poisoning, on the other hand, involves tampering with the data that AI systems learn from, leading to corrupted outputs and misinformation. While data leakage might seem overemphasized as AI systems are more about data generation than storage, the real concern lies in the leakage from associated databases and agents. The escalation of these threats calls for a strategic and proactive approach, encompassing the development of stringent data privacy policies, human oversight in high-risk applications, and adherence to robust ethical frameworks and regulations like GDPR.

As we stand on the brink of widespread GenAI integration, the balance between harnessing its potential and safeguarding against its risks is delicate. The innovation in AI doesn’t discriminate between good and bad intentions, making it imperative for organizations to implement rigorous security measures and constantly evolve their strategies. While laying down policies for data privacy and security is a solid starting point, the true challenge lies in their enforcement. It’s clear that as we navigate this new era, the need for collective discussion and cross-industry collaboration is paramount to ensure we’re not only advancing innovation but also securing the very foundation of AI.

Meet Nightshade, the new tool allowing artists to ‘poison’ AI models with corrupted training data

Venture Beat, October 24, 2023

In the whirlwind of advancements brought forth by generative AI, typified by ChatGPT, an unexpected battlefield has emerged. The bone of contention lies in the training data, the lifeblood of AI models, often sourced from artists who had no say in their work being repurposed for AI training. As a result, a slew of lawsuits against AI entities, including OpenAI, has materialized. Once a benign act, the scraping of web content for data is now viewed with skepticism. The reason is simple: it’s one thing to have your work indexed for search engines, quite another to see it fuel the creation of competing AI-generated art.

Enter Nightshade, a tool poised to tip the scales. Developed by Professor Ben Zhao and his team at the University of Chicago, this open-source marvel is a cloak for digital art. It subtly tweaks pixels in images, imperceptible to us but a spanner in the works for AI. Building on their prior tool, Glaze, Nightshade doesn’t just camouflage; it misguides. Imagine an AI trained on Nightshade-treated images confusing cats for dogs – that’s the level of disruption we’re talking about. This tool doesn’t just poke holes in AI’s understanding of individual images but skews its grasp of related concepts, thanks to AI’s penchant for clustering similar ideas.

Nightshade is more than a clever trick; it’s a statement, a stand for artistic integrity. The researchers aim to shift the power from AI behemoths back to the creators, ensuring respect for copyrights and intellectual property. But like any tool, it’s a double-edged sword. While it offers artists a shield, it also opens up the possibility of misuse and complicates the AI training process. As we stand at this crossroads, the key takeaway is clear: the AI revolution needs to walk hand-in-hand with ethical considerations, ensuring that as we leap forward in technology, we don’t trample over the rights and creativity of individuals who fuel our collective imagination.

AI Bot Taken Down After Allegations Of Racist Stereotypes At Harvard

Free Press Journal, October 25, 2023

In today’s technologically driven society, Artificial Intelligence (AI) is increasingly entwined with our everyday lives, prompting discussions about its potential to replace human roles. However, a concerning incident in the U.S. raises an important question: Can AI also absorb and propagate human prejudices? This was brought to light when an AI model, ClaudineGPT, was found to be exhibiting racist stereotypes, challenging our understanding of AI’s security and safety.

The controversy unfolded when Harvard’s Computer Society AI Group (HCS) released ClaudineGPT and its variant, “Slightly Sassier ClaudineGPT,” on the same day as Harvard President Claudine Gay’s inauguration. The AI Safety Student Team (AISST), committed to reducing risks from advanced AI, accused the bot of utilizing instructions that propagated racist stereotypes. In a critical email, AISST’s communications director, Chinmay M. Deshpande ’24, voiced concerns that the AI model was problematic, potentially reinforcing harmful stereotypes, particularly towards women and people of color.

Responding to the backlash, HCS AI Group clarified that ClaudineGPT was never meant to be a serious representation of President Gay. They emphasized its satirical nature, intended only for entertainment during the inauguration. Despite their intentions, the group subsequently removed ClaudineGPT from accessibility. This incident serves as a stark reminder of the ethical responsibilities that come with AI development. It underscores the need for vigilance in programming to prevent AI from mirroring societal biases, highlighting the importance of prioritizing security and safety in AI, particularly in ethical dimensions.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post