Adversa at DEFCON AI Village 2021
On August 5, Eugene Neelou, CTO at Adversa, delivered a talk at DEF CON AI Village presenting the whole picture of ML security in today’s realities.
Secure AI Weekly, August 9, 2021
Sometimes, testing smart systems for robustness takes the help of a large number of people.
The Register, August 2, 2021
Twitter’s bug bounty competition is available through HackerOne, where you can help improve the updated saliency model.
Twitter’s automated image cropping tool, also known as a saliency algorithm, is subject to bias. In particular, the algorithm has shown problems related to gender and race. Nevertheless, the social network is aware of the problem and wants to address it by holding the industry’s first algorithmic bias bounty competition.
Twitter’s developers believe the issue lies in the machine learning code, which crops images around the spot a viewer’s eyes are predicted to land on first and can make consistent errors in doing so.
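At its core, a saliency-based cropper keeps the window centered on the point the model scores as most eye-catching, which is why any bias in the saliency model directly shifts who ends up in the crop. Here is a minimal sketch of that logic; the function and random inputs below are illustrative assumptions, not Twitter’s released code:

```python
import numpy as np

def crop_around_saliency(image: np.ndarray, saliency: np.ndarray,
                         crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size window centered on the saliency peak.

    `image` is an (H, W, C) array, `saliency` an (H, W) map scoring how
    likely each pixel is to draw a viewer's eye first. Whatever bias the
    saliency model has decides who stays inside the crop.
    """
    h, w = saliency.shape
    # Location of the highest-saliency pixel.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window so it stays inside the image bounds.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy usage with random data in place of a real photo and model output.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(600, 400, 3), dtype=np.uint8)
sal = rng.random((600, 400))
print(crop_around_saliency(img, sal, 300, 400).shape)  # (300, 400, 3)
```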
The company has now shared an updated saliency model along with its code, and encouraged volunteers to take part in a competition to test whether the new model behaves correctly. Held as part of DEF CON AI Village, the competition is available through HackerOne. Five prizes are on offer, ranging from US$500 to US$3,500.
Biometric Update, August 2, 2021
The US government has published three strategic security objectives for Artificial Intelligence and Machine Learning; the plan is currently being prepared for implementation.
The plan, which covers the deployment of biometric systems, was created by the Science and Technology Directorate (S&T) of the Department of Homeland Security. The goals extend both to defending against AI in the hands of third parties and to applying AI to protect the nation.
The first goal concerns the development of new AI and ML technologies suitable for use across the department. The second underlines the importance of interdisciplinary training for government workers. The third is to encourage the use of the resulting products to secure the nation.
The Register, August 6, 2021
Researchers have detailed how AI systems can be attacked using text containing invisible Unicode characters, which can trick a system into making the wrong decisions.
Unfortunately, even respected software built by Microsoft, Google, and IBM can be tricked using Unicode. If ML software does not account for certain invisible Unicode characters, ambiguities or errors can arise: what the user sees on the screen or on a printout will differ from what the neural network perceives when making decisions.
Such tricks can be used by cybercriminals for a variety of malicious purposes. The vulnerability was described by researchers at the University of Cambridge in England and the University of Toronto in Canada, who released a paper on the topic on arXiv in June this year.
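The class of trick is easy to demonstrate. In the illustrative Python snippet below (an assumption for exposition, not the researchers’ code), a zero-width space hidden inside a word is invisible when rendered, yet changes the string a model or tokenizer actually receives:

```python
# A zero-width space (U+200B) is invisible in most fonts but is still
# a real character in the string the machine processes.
visible = "deposit"
perturbed = "depo\u200bsit"

print(perturbed)                      # renders as "deposit" in most fonts
print(visible == perturbed)           # False: the two strings differ
print(len(visible), len(perturbed))   # 7 8
print(perturbed.split("\u200b"))      # ['depo', 'sit']: tokenization diverges
```

A human reviewer sees the ordinary word, while a naive word-level model sees an out-of-vocabulary token, which is exactly the gap between what is displayed and what the neural network perceives.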