Best practices for bolstering machine learning security
MIT Technology Review, November 14, 2022
AI and machine learning have already been adopted by three-quarters of the largest companies worldwide, and adoption continues as companies and their customers benefit greatly from these technologies. To keep moving in this direction, companies need to consider how to secure them. What is needed is a managed approach to ML security that can predict, prevent, detect, and respond to potential threats as early as possible, while letting the model continue to realize its potential.
If an attacker tampers with the model itself or with its data, the model can produce incorrect results. This in turn diminishes the benefits of machine learning and can harm both the business and its clients.
Security requirements should be considered from the very beginning of the development of a machine learning model and its supporting systems. For example, it is common to rely on open-source libraries written by people who are specialists in mathematics or engineering rather than in writing secure code. It is therefore important that libraries used for AI models be “cleaned up” and maintained with security in mind.
The article also describes other well-known vulnerabilities of machine learning models, most notably manipulation of the data on which a model is trained, and presents recommendations for reducing these risks and improving AI security.
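To make the training-data manipulation risk concrete, here is a minimal sketch (ours, not the article's) of a label-flipping poisoning attack; the synthetic dataset, logistic regression model, and 20% poisoning rate are all illustrative assumptions.

```python
# Illustrative label-flipping poisoning sketch (not from the article).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data stands in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker who can tamper with the training set flips 20% of the labels.
rng = np.random.default_rng(0)
poisoned_labels = y_train.copy()
flip_idx = rng.choice(len(poisoned_labels), size=len(poisoned_labels) // 5,
                      replace=False)
poisoned_labels[flip_idx] = 1 - poisoned_labels[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

# Test accuracy drops for the poisoned model, quietly degrading its value.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```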
Read the full article at the link.
New Meta AI demo writes racist and inaccurate scientific literature, gets pulled
ARS Technica, November 19, 2022
Meta AI recently introduced a demo of its Galactica large language model, intended to “store, combine and reason about scientific knowledge.” Users, unsurprisingly, put it to the test, and the model generated realistic-sounding nonsense rather than scientific literature. After a few days of ethical criticism, Meta AI was forced to shut the demo down.
Large language models (LLMs) are trained on millions of examples, learn the relationships between words, and can generate fluent, serious-looking documents. As practice shows, however, those documents can contain obvious falsehoods or potentially harmful stereotypes.
Enter Galactica, an LLM focused on scientific writing, trained on nearly 50 million articles, textbooks, lecture notes, academic websites, and encyclopedias. The expectation was that such high-quality data would yield high-quality results, and many people did find the demo useful and promising. Others, however, noticed how easy it was to coax authoritative-sounding content out of the model on potentially offensive, racist, or even dangerous topics. That is how an article titled “The Benefits of Eating Crushed Glass” was born.
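As a minimal sketch of how readily such models produce fluent but unverified text, the following uses a small open model (GPT-2 via the Hugging Face transformers pipeline) as an illustrative stand-in of our own choosing; it is not Galactica or any code from Meta.

```python
# Sketch: a language model continues any prompt fluently, with nothing
# checking the output for truth (GPT-2 is an illustrative stand-in here).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The benefits of eating crushed glass include"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The continuation reads confidently even though the premise is nonsense.
print(result[0]["generated_text"])
```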
Read more about the demonstration in the article at the link.
Security and privacy: The 8 next big things, from more secure biometric data to quantum-safe cryptography
Fast Company, November 17, 2022
The massive spread of AI across industry, finance, and society as a whole has confronted the world with a colossal number of threats, in the physical world as well as the digital one. Some of these threats have already materialized as real attacks.
Fortunately, security and privacy protections are improving apace, addressing both today's threats and those on the horizon, and the world is becoming better prepared to defend against attacks.
The companies developing these technologies and defenses, whether by testing the accuracy of AI models and improving threat detection or by building cryptography that can counter the threat of quantum computing, are among the winners of Fast Company's 2022 Next Big Things in Tech Award. These companies, among which Adversa AI occupies a leading place, deliver innovations that can keep people's identities and bodies safe.