Artificial intelligence hasn’t evolved enough to be left out of our control
Forbes, May 16, 2022
The better we know our opponents, the better we can defend ourselves and strike back. Broadly speaking, the same holds for adversarial attacks.
Using them, an attacker can force a smart system to behave as he wishes. Such attacks have proliferated in part because most smart systems rely on ML/DL models as a core component, and it is precisely these models that an attacker manipulates to make a system behave as desired.
Compared with the AI systems used in the past, today's smart systems have become more predictable because of these models, and therefore easier to attack. And even when developers are aware of potential threats, taking care of security in advance is not always realistic. AI systems are built around a set of commonly held ethical characteristics into which adversarial attacks fit quite logically, and attackers turn adversarial attacks against the very ML/DL that is meant to do good. Third parties will also build "AI For Bad", and to fight such systems we can use adversarial attacks ourselves. It remains unclear whether doing so does more good than harm.
The most difficult question is whether such adversarial attacks may legitimately be used by those who seek to defeat AI For Bad. Although this way of fighting is ethically questionable, it has its pros and cons, and it is hard to give an unequivocal answer. Read more in the article at the link.
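To make the idea concrete, here is a minimal sketch of the classic gradient-sign style of adversarial attack against a toy linear classifier. All names and values are illustrative assumptions, not from the article: the attacker knows the model's weights, computes which direction in input space increases the loss, and nudges the input just far enough to flip the prediction.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, predicted class = sign(score).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights, assumed known to the attacker
b = 0.1
x = rng.normal(size=8)   # a clean input

def predict(v):
    return 1 if w @ v + b > 0 else -1

y = predict(x)  # label the model assigns to the clean input

# For a linear model, the loss gradient with respect to the input is
# proportional to -y * w, so the attack steps along sign(-y * w).
# Choose epsilon just large enough to push the score across zero.
epsilon = (abs(w @ x + b) + 1e-3) / np.abs(w).sum()
x_adv = x + epsilon * np.sign(-y * w)

print(predict(x), predict(x_adv))  # the perturbed input gets the opposite label
```

Real attacks on deep networks follow the same pattern (perturb the input along the gradient's sign), but obtain the gradient by backpropagation rather than in closed form.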
The Verge, May 18, 2022
Today, artificial intelligence helps attackers commit a variety of frauds, especially when it comes to creating fake identities. In response, a number of other smart systems perform so-called liveness tests, which are meant to detect fake images. Whether they are really good at it, however, is another question.
The firm Sensity specializes in identifying attacks that use AI-generated faces. The company's specialists investigated the vulnerability of liveness tests provided by ten leading vendors, generating deepfakes to see whether the vendors' products would detect them.
According to the security company, these vulnerabilities should not be underestimated: they pose the greatest threat to banks, where they can easily be exploited for fraud. Yet vendors that sell liveness tests to a range of clients, including banks, dating apps, and cryptocurrency startups, do not seem to take the vulnerability very seriously.
Forbes, May 16, 2022
Today, artificial intelligence has reached a stage of development where people are learning to get the most out of it through good governance, ethics and trust. To understand how to manage a particular tool, you must first understand its key characteristics in the context of business applications.
AI models are currently so diverse that each must be approached individually to achieve the best result, considered in the context of how it is actually used. Risk mitigation, workforce management, and development of the technology ecosystem are also important for the effective use of AI.
Determining the boundaries of the solution and documenting what the tool is intended for are also important steps. It is necessary to determine which elements of trust and ethics are relevant and to set clear milestones in each major phase of the project, at which the team assesses whether the model still meets ethical expectations. Read more about the effective use of AI in the article.