Towards Secure AI Week 40 – What You Need to Know About the Risks

Secure AI Weekly + Trusted AI Blog admin todayOctober 9, 2024 12

Background
share close

California Governor Vetoes AI Regulation Bill, Calls for More Targeted Approach

Campus Technology, September 30, 2024

California Governor Gavin Newsom has vetoed Senate Bill 1047, a proposed law designed to regulate AI and prevent its misuse. The bill, which had received strong legislative support, aimed to establish some of the first safety measures for AI development in the U.S. However, Newsom expressed concerns that the bill’s broad approach failed to differentiate between AI systems that pose significant risks and those that do not. He argued that applying stringent regulations to all AI systems, regardless of their function or risk level, could stifle innovation without effectively protecting the public from the real threats posed by advanced AI technologies. Newsom called for a more focused and targeted approach, highlighting his ongoing work with AI experts, like Stanford’s Fei-Fei Li, to develop guidelines based on science and evidence to ensure safety while fostering innovation.

The veto has sparked mixed reactions across the tech industry, academia, and political circles. Companies like Google and OpenAI supported Newsom’s decision, praising his efforts to keep California at the forefront of AI innovation while encouraging collaboration on appropriate safeguards. However, critics, including organizations like the Mozilla Foundation, warned that the bill’s rejection could accelerate the concentration of AI power within a few large tech companies, making AI less open and secure. Legislators and activists pushing for more oversight were disappointed, with State Senator Scott Wiener, the bill’s author, calling the veto a missed opportunity to regulate AI responsibly. He emphasized the need for enforceable regulations, arguing that voluntary commitments from the industry are often insufficient to protect the public from rapidly advancing AI risks.

Google’s Gmail Update Sparks ‘Significant Risk’ Warning for Millions of Users.. What You Need to Know Now!

News Faharas, September 29, 2024

Google’s recent update to Gmail, which introduces AI-powered tools like Gemini’s Smart Replies, has raised serious security concerns for millions of users. While these AI enhancements provide more contextually relevant email responses and improve functionality, they also expose users to significant risks. One of the primary concerns is the potential for prompt injection attacks, where hackers manipulate the AI by embedding malicious instructions into seemingly innocent emails. These attacks could lead to unintended actions by the AI, such as sharing sensitive information or enabling phishing attempts. As AI takes on more email-related tasks, users must be aware of these vulnerabilities and exercise caution when engaging with AI-generated content.

In response to these risks, Google has acknowledged the security challenges associated with its AI features and is actively working to strengthen its defenses. The company has implemented several safeguards and continues to refine its security measures through ongoing testing and updates. However, despite these efforts, users must remain vigilant, especially when interacting with AI-generated replies or clicking on links suggested by the AI. Staying informed about potential threats and practicing safe email habits will be essential in navigating the risks introduced by these new AI tools in Gmail.

How you can protect large language models from data poisoning

ITNews, October 1, 2024

As businesses across Asia embrace rapid digital transformation, the region has become a hotspot for both innovation and cybersecurity threats. In particular, the adoption of large language models (LLMs) within enterprises and communication service providers (CSPs) has created new opportunities for enhancing security operations. These AI-powered systems can mimic human conversations, tackle complex questions, and improve incident detection. However, with these advancements come significant risks, notably data poisoning, where attackers corrupt the data used to train AI models. This malicious manipulation can lead to LLMs generating misleading, harmful, or biased outputs, posing severe risks to businesses.

Protecting LLMs from data poisoning requires a multi-faceted approach. Organizations must implement strong data validation techniques, such as using curated, human-verified data, anomaly detection, and negative testing, to identify and filter out poisoned inputs. Securing data storage, enforcing stringent authentication, and conducting continuous risk assessments are crucial for preventing breaches. AI-specific defenses like adversarial training can further reinforce LLMs against emerging cyber threats. By adopting these strategies, businesses in Asia can leverage the full potential of AI while safeguarding against cyberattacks, ensuring that their AI systems remain secure, reliable, and tamper-proof.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.

    Written by: admin

    Rate it
    Previous post