Secure AI Weekly – January 17, 2021
Lack of action is a recipe for failure, which is why you need to care about the future of AI today
Wired, January 15, 2021
Back in October 2019, Idaho was considering changes to its Medicaid program. The change required approval from the federal government, so public feedback was solicited via Medicaid.gov. The proposal drew over 1,000 comments, but notably, more than half of them were not written by real people at all: they were generated by artificial intelligence. The fact that volunteers could not tell the real comments from the generated ones should put us on guard. The author of the project was Max Weiss, a tech-savvy medical student at Harvard, yet at the time his work attracted little attention. Now, more than a year later, deepfake text manipulation and other AI-based attacks have become a burning question.
“The ease with which a bot can generate and submit relevant text that impersonates human speech on government websites is surprising and really important to know,” comments Latanya Sweeney, a professor at Harvard’s Kennedy School who helped Weiss with the ethical side of the experiment.
eWEEK, January 13, 2021
AI is being adopted everywhere, and there is currently a huge number of business projects built on artificial intelligence technologies. However, not every project makes it from the initial plan to a successful deployment. Dr. Charla Griffy-Brown, Professor of Information Systems and Technology Management and Associate Dean of Executive and Part-Time Programs at Pepperdine University’s Graziadio School of Business, shared her view on why AI business strategies fail. She pointed to aspects such as insufficient attention to technical performance, inappropriate data architecture, unexpected behaviour, and, of course, the human factor. Still, Dr. Griffy-Brown states that security is one of the most crucial issues, and this part of the strategy has to be examined and thought through from all angles, as new vulnerabilities and risks never stop emerging. She suggests that companies adopt a risk-based approach at the very start of their AI implementation. Collaborating with third-party security specialists can also be considered.
Biometric Update, January 11, 2021
According to experts, the new year will bring a number of difficulties in the digital space, as organizations of all sizes will have to deal with new threats, some of them directly related to artificial intelligence technologies. For example, according to the ‘2021 Future of Fraud Forecast’ by Experian, malefactors will increase their use of automation, leading to “constant automated attacks.”
Facial recognition is expected to face challenges as well. A real AI vs AI battle is coming, as attackers will use AI technologies to create realistic photographs of people who do not exist. These images will be convincing enough to trick a facial recognition system into accepting a fake image as a real person, which is likely to lead to a large number of security incidents.
Experts conclude that these and other cases will lead to significant financial losses, and new technologies and tools are needed now to stand against them.