Towards Trusted AI Week 32 – Do you use AI in your enterprise? Is it secure?

Secure AI Weekly, August 9, 2022


Large Language AI Models Have Real Security Benefits

DarkReading, August 3, 2022

We usually don’t cover how AI helps security, but this story is a curious and optimistic one.

The third version of the Generative Pre-trained Transformer, better known as GPT-3, is a large neural network trained on voluminous datasets. Researchers have found that it benefits cybersecurity applications such as natural language-based threat detection, easier categorization of inappropriate content, and clearer explanations of complex or obfuscated malware.

Two researchers at cybersecurity firm Sophos found that GPT-3 can translate natural language queries into requests to a security information and event management (SIEM) system. GPT-3 is also good at taking a small number of website classification examples and using them to classify other sites, finding commonalities among criminal sites or exploit forums.
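
The digest doesn’t include the researchers’ actual prompts, but the general few-shot pattern is easy to sketch. Below is a minimal Python illustration, assuming the legacy OpenAI completions API as it existed in 2022; the model name, the Splunk-style example queries, and the nl_to_siem helper are assumptions for illustration, not Sophos’s setup.

    import openai  # pip install openai; expects OPENAI_API_KEY in the environment

    # Hypothetical few-shot examples pairing analyst questions with
    # Splunk-style queries; the Sophos prompts are not public.
    FEW_SHOT_PROMPT = """Translate the analyst's question into a SIEM query.

    Question: Show failed logins in the last 24 hours.
    Query: index=auth action=failure earliest=-24h

    Question: Which hosts contacted evil.example.com this week?
    Query: index=dns query="evil.example.com" earliest=-7d | stats count by src_host

    Question: {question}
    Query:"""

    def nl_to_siem(question: str) -> str:
        """Ask GPT-3 to turn a natural-language question into a SIEM query."""
        response = openai.Completion.create(
            model="text-davinci-002",   # a GPT-3 completion model available in 2022
            prompt=FEW_SHOT_PROMPT.format(question=question),
            max_tokens=80,
            temperature=0,              # deterministic output for repeatability
            stop=["\n\n"],              # stop after the generated query
        )
        return response.choices[0].text.strip()

    print(nl_to_siem("List processes that spawned powershell.exe yesterday."))

The same pattern covers the classification task: swap the query examples for a handful of labelled sites and ask the model to label the next one.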

The research, which will be presented at the Black Hat USA conference this week, is the latest application of GPT-3 and demonstrates the model’s impressive performance in translating natural language queries into machine instructions, code, and images. The researchers focus on the technology’s usefulness in easing the work of cybersecurity analysts and malware researchers.

Artificial intelligence isn’t that intelligent

The Strategist, ASPI, August 4, 2022

Australia’s Defence Science and Technology Group convened the first Australian Defence Science, Technology and Research (ADSTAR) Summit last month. It brought together Australia’s leading scientists, researchers, and businesspeople, along with representatives from each of the Five Eyes partners as well as from Japan, Singapore, and South Korea. Two streams were dedicated to artificial intelligence, covering research and applications in the field of defence.

What is AI? And can it be trusted?

Machine learning models are sophisticated implementations of statistical methods, not some all-powerful intelligence that thinks like the humans it can imitate. That makes them not so much smart as complex and opaque, which leads to problems in AI safety and security.

Bias in AI causes problems that are by now well known and are being explored by researchers, practitioners, and even politicians. AI security, however, is different: where AI safety is concerned with the impact of the decisions an AI can make, AI security looks at the inherent characteristics of a model and how they can be exploited. Just like conventional cyber systems, artificial intelligence systems are vulnerable to intruders and adversaries. One well-known problem is adversarial machine learning, where “adversarial perturbations” added to an image cause the model to predictably misclassify it.
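
The article doesn’t name a specific technique, but the canonical example of such a perturbation is the Fast Gradient Sign Method (FGSM) of Goodfellow et al. Below is a minimal PyTorch sketch, assuming a differentiable image classifier and inputs scaled to [0, 1]; it illustrates the general idea rather than any attack described in the article.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Craft an adversarial example with the Fast Gradient Sign Method.

        model:   any differentiable classifier (assumed, for illustration)
        image:   input tensor in [0, 1], shape (1, C, H, W)
        label:   true class index, shape (1,)
        epsilon: maximum per-pixel perturbation
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel one small step in the direction that increases
        # the loss; the change is imperceptible to a human but often flips
        # the model's prediction.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

Because the perturbation follows the model’s own loss gradient, the misclassification is predictable rather than random, which is exactly what makes the attack practical.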

As you have already seen in our digests, many governments, including the US, the EU, the UK, and now Australia, are publishing AI strategies to address these challenges. It’s time to apply the lessons learned in cyberspace to AI:

  • invest in AI security and protection at the same pace as in AI implementation;
  • develop commercial solutions for AI security, assurance, and audit;
  • build a legal framework for AI safety and security requirements, as has been done for cybersecurity;
  • deepen our comprehension of AI and its limitations, as well as of the technologies, such as machine learning, that underlie it.

AI Models Under Attack: Conventional Controls Are Not Enough

Gartner, August 5, 2022

Attacks on AI are not tomorrow’s problem; according to Gartner, they are already yesterday’s.

Gartner’s research found that more than 40 percent of organizations have experienced an AI privacy breach or security incident, and in more than a quarter of those cases the incident was a malicious attack on the organization’s AI infrastructure.

These results once again confirm the existence of a serious problem, especially considering that some breaches and incidents go unnoticed.

Attacks are growing rapidly across the board as AI adoption spreads. According to the latest Gartner survey on AI adoption, more than 70% of enterprises use hundreds or thousands of AI models. AI can be transformative, but it poses great risks that require new forms of AI trust, risk, and security management (AI TRiSM).

The Gartner survey also found that organizations that collaborate across departments to implement AI TRiSM deploy more AI models in production and get more value from them than organizations that do not. AI TRiSM methodology and tools are a precondition for establishing KPIs and measurements; managing AI without visibility and direction is unacceptable.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
