Towards trusted AI Week 44 – concerns about AI are still here

Secure AI Weekly, November 1, 2020


Fearing smart technologies without an adequate reason is pointless; still, the existing risks certainly have to be taken into consideration.


AI fear makes mountains out of molehills

Forbes, October 29, 2020

The possible threats of AI cannot simply be dismissed, but some concerns about the risks that smart systems may introduce to society seem to be clearly overestimated. 34% of Americans who took part in a recent survey conducted by Oxford University’s Center for the Governance of AI said that AI would harm humanity in some way, while 12% believed that AI development could have critically bad consequences, such as human extinction. The survey also showed that support for the development of smart tech is stronger among respondents who are wealthy, educated, male, or work in technology, which may reflect the fact that the more people understand and use AI themselves, the more comfortable they feel about it.

Researchers address the issue of sentence-level attacks against text classifiers

VentureBeat, October 27, 2020

AI-based text classification systems are used in a variety of applications, especially those that work with documents, where they help edit and standardize various business information. In a recent paper, MIT researchers addressed the issue of sentence-level attacks against text classifiers. In such an attack, a malefactor changes a sentence in a way that triggers misclassification while the literal meaning of the text remains the same. The researchers introduced a framework called conditional BERT sampling (CBS), which makes it possible to sample sentences from an AI language model, and RewritingSampler, an instance of CBS that rewrites input sentences in order to attack classifiers. According to the researchers, CBS and RewritingSampler achieved a higher attack success rate in their experiments than other existing methods.
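The paper’s CBS and RewritingSampler implementations are not reproduced here; the snippet below is only a minimal sketch of the general idea, assuming the Hugging Face transformers library with its default sentiment-analysis model and a bert-base-uncased fill-mask model. It simplifies the attack to single-word rewrites rather than full sentence rewriting: a masked language model proposes replacements, and the attack keeps the first candidate that flips the victim classifier’s prediction.

```python
# Illustrative sketch only: NOT the authors' CBS / RewritingSampler code.
from transformers import pipeline

# Victim classifier and a masked language model used to propose rewrites.
classifier = pipeline("sentiment-analysis")               # default DistilBERT SST-2 model
filler = pipeline("fill-mask", model="bert-base-uncased")

def rewrite_attack(sentence: str, top_k: int = 5):
    """Mask each word in turn, let BERT propose replacements, and return the
    first rewrite that flips the classifier's label (or None if none does)."""
    original_label = classifier(sentence)[0]["label"]
    words = sentence.split()
    for i in range(len(words)):
        masked = " ".join(words[:i] + [filler.tokenizer.mask_token] + words[i + 1:])
        for suggestion in filler(masked, top_k=top_k):
            candidate = suggestion["sequence"]          # sentence with the mask filled in
            if classifier(candidate)[0]["label"] != original_label:
                return candidate, original_label, classifier(candidate)[0]["label"]
    return None

print(rewrite_attack("The service was slow but the food was absolutely wonderful."))
```

The actual sentence-level methods go further by regenerating whole sentences from the language model, which preserves meaning better than swapping individual words.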

Majority of SOCs use AI tools to detect advanced threats

Security Magazine, October 30, 2020

In the 2020 State of Security Operations report released by Micro Focus in collaboration with CyberEdge Group, researchers highlight that security operations centers (SOCs) around the world are most concerned with advanced threat detection and expect smart technologies to safeguard their companies in the future. According to the report, more than 93% of SOCs use AI technologies to improve advanced threat detection capabilities, and more than 89% plan to use a Security Orchestration, Automation and Response (SOAR) tool in 2021. The results demonstrate that as SOCs continue to develop, next-generation tools will be widely used to deal with security issues. “The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” commented Stephan Jou, CTO of Interset at Micro Focus.
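The report does not name a specific algorithm behind “unsupervised machine learning”; as an illustration only, the sketch below uses scikit-learn’s IsolationForest on synthetic login telemetry (all feature names and numbers are made up) to show what unsupervised anomaly detection in a SOC pipeline can look like.

```python
# Hedged illustration: unsupervised anomaly detection for threat triage.
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-account features: [logins per hour, distinct source IPs, failed logins]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 1, 0.5], scale=[1, 0.5, 0.5], size=(500, 3))
suspicious = np.array([[40, 12, 25], [60, 20, 30]])   # bursts typical of credential stuffing
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)         # -1 marks anomalies for analyst review
print(np.where(flags == -1)[0])       # indices of flagged accounts
```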

With the rising volume of threats, 90% of companies rely on the MITRE ATT&CK framework for understanding attack techniques.
