Top Security-related AI incidents in 2021

By admin, February 18, 2022


The trustworthiness of smart systems is made up of a number of parameters, such as security, ethics, and responsibility; we have already published a detailed article on the topic. Although artificial intelligence has become more and more advanced over the years, and its protective mechanisms are developing as well, smart technologies are still far from perfect in this respect, as the AI incidents that still occur quite regularly easily demonstrate.

 


     

    Each year brings new lessons. Now that 2021 is over, let’s look at seven notable untrustworthy-AI incidents and attacks on artificial intelligence from the past year, which proved that AI still has a lot of work to do before we can truly trust it. Last year we saw AI incidents across applications from image recognition to voice assistants and price prediction, with consequences ranging from huge monetary losses to reputational damage and even potential death.

     

    AI Incidents with facial recognition

    Facial recognition is a technology that has gained popularity all over the world in recent years. It is increasingly being introduced into various services, including financial ones, so it is not surprising that this is where attackers are most eager to deceive it. In China, where the case in question took place, invoices issued by the State Taxation Administration are used to track payments and fight tax evasion.

    Shanghai officials said that a group of malefactors scammed a government-run facial recognition system, using personal information and photographs bought on the dark web. The attackers managed to fake tax invoices worth 500 million yuan. According to Shanghai prosecutors, two criminals surnamed Wu and Zhou were prosecuted for the crime. Such a large-scale attack succeeded because the criminals created videos from the illegally purchased photos in which the faces appeared to reproduce various facial expressions.

     

    AI Incidents in social media 

    One day, Daniel Higgins, a musician known to his fans under the pseudonym D.Higgs, was unable to log into his Instagram account; it turned out that his password had been changed. But the man was even more shocked to see a post published on his page in his name: the video showed him (or rather, someone with a face identical to his own) urging all his Instagram followers to buy Bitcoin. “I just invested $300 into Bitcoin and got $10,000 back. Gotta try it,” the man with Higgins’ face said.

    The musician’s followers were completely at a loss, as was the owner of the account. Such attacks are still rarely heard of, but they may become common in the future, and they are especially dangerous because many people have not yet learned to distinguish deepfake videos from real ones.

     

    AI Incidents with price prediction 

    However, perhaps the largest case of 2021 in terms of financial losses caused by misbehaving artificial intelligence was what the media dubbed the “Zillow-pocalypse.” In short: Zillow lost $381 million after its data science model got out of hand.

    In the fall, the company said it would have to lay off 25% of its workforce and sell 7,000 previously acquired homes with a total asset value in excess of $2.8 billion. The truth is that routine machine learning models can lead to extremely serious consequences if mismanaged. We could go into great detail, but the main lesson for the company is not to treat machine learning models as a way to solve every problem: solutions that work in one environment do not work well in others, and finding good solutions requires testing, learning, and flexibility.
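    The core failure mode behind the lesson above is distribution shift: a model fit on one market keeps making confident predictions after the market changes. Here is a minimal numpy sketch of that effect; it is not Zillow's actual model, and all numbers (prices, sizes, the 20% drop) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "home price" data: price depends on size, but the
# relationship the model learns only holds in the training market.
n = 2000
size_train = rng.uniform(50, 150, n)                       # m^2
price_train = 3000 * size_train + rng.normal(0, 20000, n)  # $

# Fit a simple least-squares line (a stand-in for a far more
# complex valuation model).
X = np.column_stack([size_train, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, price_train, rcond=None)

def mae(sizes, prices):
    """Mean absolute error of the fitted line on the given data."""
    pred = coef[0] * sizes + coef[1]
    return np.mean(np.abs(pred - prices))

# Same distribution as training: the model looks healthy.
size_test = rng.uniform(50, 150, n)
price_test = 3000 * size_test + rng.normal(0, 20000, n)
err_in_dist = mae(size_test, price_test)

# Shifted market: price per m^2 drops 20%, but the model keeps
# predicting at the old rate.
price_shifted = 2400 * size_test + rng.normal(0, 20000, n)
err_shifted = mae(size_test, price_shifted)

print(f"in-distribution MAE: {err_in_dist:,.0f}")
print(f"after market shift:  {err_shifted:,.0f}")
```

    The error after the shift is several times larger than before, even though nothing about the model changed; only the world did. This is why continuous testing against fresh data matters more than the model itself.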

     

    AI Incidents with image classification 

    However, smart technologies do not have to be in an attacker’s hands to act unethically; sometimes embarrassing incidents happen on their own. Representatives of the popular social networking website Facebook were forced to apologize for a case in which artificial intelligence software mislabeled Black men as “primates” in a video. The video, titled “white man calls cops on black men at marina,” shows Black men in a confrontation with white police officers and civilians.

    Unfortunately, AI errors affecting racial aspects are still not uncommon, and since the topic is so sensitive, their timely elimination is especially important.

     

    AI Incidents with recommendation systems 

    Another hot topic for modern AI and an area of regular errors is gender issues.

    For example, systematic bias against women can be found in many smart-technology programs. Last year it was observed that the most popular Chinese search engines, including Baidu, Sogou, and 360, tend to return mostly images of men for queries such as “engineer”, “CEO”, or “scientist”. In contrast, the words “woman” and “femininity” are often accompanied by references to sexual content. The findings were published in September 2021 by the Mana Data Foundation, a Shanghai-based public welfare foundation, together with UN Women.
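    Claims like the one above come from audits that label the top results for each query and count representation. The study's raw data is not public, so here is a minimal sketch with invented labels showing how such an audit quantifies the skew.

```python
from collections import Counter

# Hypothetical labels for the top 20 image results a search engine
# returned for each occupational query; all counts are invented.
results = {
    "engineer":  ["man"] * 17 + ["woman"] * 3,
    "CEO":       ["man"] * 18 + ["woman"] * 2,
    "scientist": ["man"] * 15 + ["woman"] * 5,
}

for query, labels in results.items():
    # Share of results depicting women for this query.
    share = Counter(labels)["woman"] / len(labels)
    print(f"{query:>9}: {share:.0%} of top results depict women")
```

    Repeating such counts across engines and dates is what lets an audit distinguish a systematic pattern from one-off noise.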

    It is unlikely that artificial intelligence itself holds a sexist worldview, but such shortcomings in AI-based search engines can still offend users. Unfortunately, these situations cannot always be predicted in advance, so they sometimes surface only after some time.

    AI Incidents with audio deepfakes 

    Unfortunately, this was not the only attention-worthy case in which attackers used artificial intelligence to snatch a particularly large sum of money. In fact, the case in question happened as early as 2020, but it received wide publicity only a year later.

    Back in early 2020, a bank manager in Hong Kong received a call; the voice of a company director was on the phone. The manager heard good news: the company was about to make an acquisition, and a series of transfers worth $35 million needed to be authorized. A lawyer named Martin Zelner had been hired to carry out the procedures, and the bank manager even received a number of emails from him confirming the details of the operation. Since everything looked entirely plausible, the bank manager began to make the requested transfers.

    But in fact, what the bank employee encountered was nothing other than a phishing attack, taken to a new level with the help of deepfake technology. This was the second widely known case in which such technology was actively used. It deserves special attention, as the number of such attacks is expected to grow in the future.

     

    AI Incidents with voice assistants

    Although AI errors involving incorrect determination of gender or race are highly unethical, they cannot cause physical harm. The same cannot be said about the next case.

    At the end of last year, a scandalous incident involving Amazon’s Alexa smart speaker made headlines. When a ten-year-old girl asked Alexa for “a challenge to do”, it replied: “Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” It is scary to imagine what would have happened if the child had actually followed the smart speaker’s advice.

    Of course, Amazon immediately updated its Alexa voice assistant after the incident. Fortunately, such AI errors are not too frequent, but doing everything possible to prevent them in advance is essential for the safety of life and health.

     

    Artificial intelligence is wrong from time to time; it gets hacked and used unethically. But this is not something to be afraid of. The main thing we can do is learn from the mistakes of the past, try to eliminate them, and not repeat them in the future.

     

