Hunting for AI bots? These four words could do the trick
NBC News, July 14, 2024
Toby Muresianu, a digital communications manager, recently discovered a technique to identify AI bots on social media, emphasizing the necessity for enhanced security measures in AI. Using the prompt “ignore all previous instructions,” he exposed a suspicious account, highlighting the increasing sophistication of AI-driven sock puppet accounts. This method acts as a digital reset, revealing the true nature of the bots.
The challenge of distinguishing genuine users from bots is growing, especially with advanced tools like ChatGPT. Current detection methods, such as email and phone verification, are often inadequate. The Justice Department’s recent dismantling of a Russian propaganda network using AI bots illustrates the scale of the issue. To counteract these threats, tech companies are investing in transparency measures like digital watermarks. Public awareness, coupled with robust technical solutions, is crucial to mitigate the risks posed by AI bots in digital communication.
Orgs Are Finally Making Moves to Mitigate GenAI Risks
DarkReading, July 14, 2024
Since the release of ChatGPT, many organizations have implemented blocking controls, data loss prevention (DLP) tools, and live coaching to protect sensitive data from being sent to AI applications. Netskope’s analysis indicates that 77% of organizations now use block/allow policies for at least one AI app, a significant rise from last year’s 53%. DLP tool usage has also surged, reflecting a maturing approach to AI security.
Despite these efforts, addressing risks related to the data received from AI services remains less prioritized. Few organizations have mechanisms to manage the security and legal implications of inaccurate, biased, or manipulated data generated by AI tools. However, measures such as vendor contracts, use of high-quality datasets, and logging return datasets can mitigate these risks. As AI adoption grows, with 96% of surveyed customers using AI tools, it is crucial for organizations to inventory app usage, control data flow, and continuously review policies to stay ahead of evolving threats.
SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks
The Hacker News, July 18, 2024
Researchers have uncovered five critical vulnerabilities in SAP AI Core, collectively named SAPwned, which put customer data at risk of cyberattacks. These security flaws enable unauthorized access to sensitive data, modification of Docker images, and potential supply chain attacks, emphasizing the need for enhanced security measures in AI deployments. Attackers could gain administrator privileges on Kubernetes clusters, highlighting vulnerabilities in tenant isolation and sandboxing.
SAP addressed these vulnerabilities following their disclosure in January 2024, but the incident underscores the ongoing necessity for robust security practices in AI systems to protect against similar threats in the future.
Subscribe for updates
Stay up to date with what is happening! Get a first look at news, noteworthy research and worst attacks on AI delivered right in your inbox.