Towards trusted AI Week 1 – top hacks 2020

Secure AI Weekly – January 10, 2021


A new generation of experts is coming: better performance requires a good understanding of AI


Top 2020 hacks, including an AI one

Dark Reading, December 31, 2020

The end of the year was the best time to compile a list of the most sophisticated and unusual attacks of 2020, and it was not without an attack on an artificial intelligence system, namely a Tesla autonomous vehicle. Fortunately, the attack was carried out not by cybercriminals but by researchers from McAfee, who managed to trick older-model cars equipped with Mobileye EyeQ3 cameras. The attack is possible when the vehicle is in traffic-aware cruise control mode: by slightly altering a speed limit sign, the researchers caused the camera's sign recognition to misread it. The worst part is that an attacker could use this to drive the vehicle off the road, which can lead to accidents. The good news is that the researchers did not manage to fool the latest version of the camera with this attack, and the latest Tesla cars no longer use Mobileye cameras at all.

Introducing AI education to a school program

ZDNet, January 8, 2021

As the importance of artificial intelligence technology continues to grow, AI experts are betting on the younger generation. With AI expected to add 10% to the UK's GDP over the next ten years, the AI Council has released a set of recommendations for the education sector aimed at integrating artificial intelligence into the school curriculum, so that every child has a basic understanding of AI by the time they finish school. Besides being woven into other subjects, AI should also be taught in its own right. As a result, students should understand both the potential of smart technologies and the risks they pose.

“Without basic literacy in AI specifically, the UK will miss out on opportunities created by AI applications, and will be vulnerable to poor consumer and public decision-making, and the dangers of over-persuasive hype or misplaced fear,” says the report of the AI Council.

AI vs AI: increasing system resilience 

SingularityHub, January 5, 2021

Despite the fact that the past year was not particularly fruitful for many industries, the world of artificial intelligence made significant headway. There is still a lot to be done in 2021, and the article describes four areas that need further work. Among them is the issue of defending against insidious attacks that target the input data of AI systems. Slight perturbations in a dataset often go undetected by people but can significantly affect the final output. Such "adversarial attacks" become a burning issue when malefactors can alter an AI system's decision-making process with just a few crafted inputs, and in some industries such attacks can lead to disastrous consequences. Recently, a team from the University of Illinois presented a new way to increase the resilience and trustworthiness of deep learning systems. The study is based on an iterative approach in which two neural networks battle each other: one performs image recognition while the other generates adversarial attacks against it.
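
To make the attack-versus-defense interplay concrete, here is a minimal sketch of generic adversarial training: a classifier is repeatedly attacked with gradient-based perturbations (FGSM) and then trained on the resulting adversarial examples. This is an illustrative assumption, not the University of Illinois team's actual method, and the model, parameters, and data below are placeholders.

# Illustrative sketch only: a generic adversarial-training loop in which an
# "attacker" (a simple FGSM perturbation) and an image classifier are pitted
# against each other. Not the method from the study discussed above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """A toy image classifier standing in for a real recognition network."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_attack(model, images, labels, epsilon=0.1):
    """Generate adversarial examples with the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Nudge every pixel in the direction that increases the classifier's loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.1):
    """One round of the battle: attack the current model, then train on the adversarial batch."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = TinyClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Dummy batch standing in for real 28x28 grayscale images.
    images = torch.rand(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))
    print("loss on adversarial batch:", adversarial_training_step(model, optimizer, images, labels))

Iterating this step is the simplest form of the attack/defense loop: the attacker exploits the current model's gradients, and the defender updates the model so the same perturbations no longer succeed.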
