Towards Trusted AI Week 23 – Adversarial Attacks to prevent spying? Why not!

Secure AI Weekly, June 8, 2022

Even artificial intelligence can be fooled with the help of another artificial intelligence


Adopting MLSecOps for secure machine learning at scale

VentureBeat, June 3, 2022

Given the scale and complexity of any enterprise’s software stack, security has always been, and will remain, a major concern for most IT teams. Now, in addition to the security issues DevOps teams already face, it is imperative to consider those introduced by machine learning (ML).

The use of machine learning is spreading across all areas. By the end of 2021, more than half of enterprises had implemented ML in their business processes. Alongside integration, most enterprises face challenges in leveraging and deploying ML, and in managing the machine learning models themselves. This is particularly true in more recent contexts where ML is deployed at scale for use cases involving critical data and the whole infrastructure.

Securing ML in a real enterprise environment is even more challenging: a security breach can lead to serious failures. Nevertheless, IT teams need to adopt ML in a way that avoids downtime and bottlenecks.

How can these issues be solved? How can ML and the automation shared between developers and operations teams be aligned with a security policy? How can ML, DevOps, and security be combined? To answer these questions, practitioners have created a relatively new specialization: machine learning security operations, or MLSecOps.
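As one illustration of what an MLSecOps control can look like in practice, here is a minimal sketch of a pre-deployment gate that verifies a model artifact against a manifest of expected hashes before the pipeline promotes it. This example is not from the article; the file names (model.onnx, model_manifest.json) and the manifest format are hypothetical.

```python
# Minimal MLSecOps-style sketch (hypothetical paths and manifest): verify a model
# artifact's integrity before deployment, so a tampered or swapped model is
# rejected by the pipeline instead of being silently promoted to production.
import hashlib
import json
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(model_path: str, manifest_path: str) -> bool:
    """Compare the model file's hash against the expected value in a manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    expected = manifest[Path(model_path).name]  # e.g. {"model.onnx": "ab12..."}
    return sha256_of(Path(model_path)) == expected

if __name__ == "__main__":
    ok = verify_artifact("model.onnx", "model_manifest.json")
    print("artifact verified" if ok else "hash mismatch: refusing to deploy")
    sys.exit(0 if ok else 1)
```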

Read more about the new specialization in the article.

Is technology spying on you? New AI could prevent eavesdropping

Science, May 31, 2022

Companies eavesdrop on employees near their workplaces and computers, some apps can record voices during calls, and some home devices can record our daily conversations.

A new technology called Neural Voice Camouflage, which generates noise in the background as somebody talks, confuses the artificial intelligence (AI) that transcribes the recorded speech.

For the study, the researchers trained a machine learning system on many hours of recorded speech. In that way, they taught the neural network to process two-second segments of audio and to disguise whatever was likely to be said next, setting up the sounds so that the transcribing AI took them for something else. In other words, they fooled one AI with the help of another AI.

The researchers also tested the method successfully in the real world, playing a voice recording together with the camouflage over speakers in the same room.

This is a perfect example of how the insecurity of AI and adversarial attacks can be used for good.
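The authors’ actual system is not reproduced here, but the underlying idea of an adversarial audio perturbation that degrades an automatic speech recognition (ASR) model can be sketched with a generic gradient-based attack. In the sketch below, asr_model and transcription_loss are assumed placeholders for a differentiable ASR model and its loss; the real Neural Voice Camouflage additionally predicts its camouflage in real time, which this simplified example does not do.

```python
# Conceptual sketch only: a generic gradient-based adversarial perturbation for
# audio, NOT the Neural Voice Camouflage model itself. `asr_model` and
# `transcription_loss` are assumed placeholders for a differentiable
# speech-recognition model and its loss function.
import torch

def craft_noise(waveform, target_text, asr_model, transcription_loss,
                epsilon=0.01, steps=100, lr=1e-3):
    """Optimize a small additive noise that pushes the ASR output away from
    the true transcription while staying within an audibility budget."""
    noise = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([noise], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        perturbed = waveform + noise
        logits = asr_model(perturbed)
        # Maximize the transcription loss by minimizing its negative.
        loss = -transcription_loss(logits, target_text)
        loss.backward()
        optimizer.step()
        # Keep the noise quiet enough to be unobtrusive.
        with torch.no_grad():
            noise.clamp_(-epsilon, epsilon)

    return noise.detach()
```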

Mia Chiquier, a computer scientist at Columbia University who led the research, said it was the first step in safeguarding privacy in the face of AI: “Artificial intelligence collects data about our voice, our faces, and our actions. We need a new generation of technology that respects our privacy.”

Read more about the new technology following the link.

Researcher Says an Image Generating AI Invented Its Own Language

Futurism, June 2, 2022

Not so long ago, at the beginning of this year, DALL-E 2 surprised many with its uncanny ability to turn text prompts into photorealistic or even artistic images. And now it appears to be even more powerful, and stranger, than everyone thought before.

Giannis Daras, a computer science PhD student at the University of Texas at Austin, said that OpenAI’s advanced text-to-image artificial intelligence system, DALL-E 2, appears to have created its own written language. Daras wrote, “DALLE-2 has a secret language…DALLE-2 language detection creates many interesting security and interpretability issues.”

Noteworthy is the fact that discoveries like this scare people at first. This new language demonstrates not only that the AI is vulnerable to adversarial attacks and can be easily tricked, but also that identifying such vulnerabilities has become easier for inexperienced users. Previously, adversarial examples were considered techniques that required some knowledge of math; now even kids can simply brute-force the AI to figure out details of this new language and then use it to mislead the model.
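For contrast, this is what a “math-required” adversarial example typically looks like: the classic fast gradient sign method (FGSM) against an image classifier. This is a standard textbook technique, not anything specific to DALL-E 2, and model here is a placeholder for any differentiable classifier.

```python
# A classic gradient-based adversarial example (FGSM) against an image
# classifier, shown for contrast with the "brute force the prompt" approach
# described above. `model` is a placeholder for any differentiable classifier.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that the classifier is more likely
    to mislabel, using the fast gradient sign method."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```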

Read more about the new language in the article following the link.

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
