Towards trusted AI Week 2 – telling fakes from real ones

Secure AI Weekly – January 17, 2021


Lack of action is a recipe for failure, which is why you need to care about the future of AI today


Telling real comments from fake ones can be tough

Wired, January 15, 2021

Back in October 2019, Idaho was considering changes to its Medicaid program. The change required federal approval, so public feedback was solicited via Medicaid.gov. The topic gathered over 1,000 comments, but more than half of them were written not by real people but generated by artificial intelligence, and the fact that volunteers could not tell the real comments from the generated ones should put us on guard. Max Weiss, a tech-savvy medical student at Harvard, was behind the project, which attracted little attention at the time. Now, more than a year later, deepfake text manipulation and other AI-based attacks have become a burning question.

“The ease with which a bot can generate and submit relevant text that impersonates human speech on government websites is surprising and really important to know,” comments Latanya Sweeney, a professor at Harvard’s Kennedy School who helped Weiss with the ethical side of the experiment.
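To illustrate how low the barrier is, here is a minimal sketch of comment-style text generation. It assumes the open-source Hugging Face transformers library and the publicly available GPT-2 model; the article does not say which model Weiss actually used, and the prompt below is invented for illustration.

```python
# Minimal sketch (assumption: the open-source Hugging Face `transformers`
# GPT-2 model, not necessarily what Weiss used) of how a language model can
# produce comment-like text from a short, hypothetical prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "As an Idaho resident, I believe the proposed Medicaid changes"
samples = generator(
    prompt,
    max_length=80,          # keep outputs short, like real public comments
    num_return_sequences=3, # several distinct candidate comments per prompt
    do_sample=True,         # sampling yields varied, human-looking phrasings
    top_p=0.9,
)

for s in samples:
    print(s["generated_text"], "\n---")
```

Run in a loop with varied prompts, a script like this can flood a comment form with plausible, non-identical submissions, which is exactly the ease Sweeney points to.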

Why your AI business strategy will fail

eWEEK, January 13, 2021

AI is being introduced everywhere, and there is now a huge number of business projects built on artificial intelligence technologies. However, not every project survives from the initial plan to a prosperous deployment. Dr. Charla Griffy-Brown, Professor of Information Systems and Technology Management and Associate Dean of Executive and Part-Time Programs at Pepperdine University's Graziadio School of Business, shared her view on why AI business strategies fail. She pointed to aspects such as poor attention to technical performance, inappropriate data architecture, unexpected behaviour and, of course, the human factor. Still, Dr. Griffy-Brown argues that security is one of the most crucial issues, and this part of the strategy has to be examined and thought through from all angles, since new vulnerabilities and risks never stop emerging. She suggests that companies adopt a risk-based approach from the very beginning of their AI implementation; collaborating with third-party security specialists can also be considered.

Prepare for the worst 

Biometric Update, January 11, 2021

According to experts, the new year will bring a number of difficulties in the digital space, as organizations of all sizes will have to deal with new threats, some of which are directly related to artificial intelligence technologies. For example, according to Experian's '2021 Future of Fraud Forecast', malefactors will make greater use of automation, leading to "constant automated attacks."

Facial recognition is expected to face challenges as well. There will be a real AI-versus-AI battle, as attackers use AI technologies to create realistic photographs of people who do not exist. These images will be realistic enough to trick a facial recognition system into accepting a fake image as a real person, which is expected to lead to a large number of security incidents.
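To see why a convincing synthetic photo can be enough, consider how a typical face matcher decides: it maps each face image to an embedding vector and accepts a match when the similarity of two embeddings clears a threshold. The sketch below is a generic illustration with placeholder random embeddings and a hypothetical threshold, not any specific vendor's system.

```python
# Minimal sketch of a generic face-matching decision, assuming an
# embedding-plus-threshold pipeline. The embeddings here are random
# placeholders standing in for real model outputs; a sufficiently realistic
# GAN-generated face aims to produce an embedding that clears the same
# threshold as a genuine photo would.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

MATCH_THRESHOLD = 0.6  # hypothetical operating point

enrolled_embedding = np.random.rand(512)  # stand-in for a genuine user's face embedding
probe_embedding = np.random.rand(512)     # stand-in for a synthetic (GAN-generated) face

if cosine_similarity(enrolled_embedding, probe_embedding) >= MATCH_THRESHOLD:
    print("Match accepted")  # the failure mode the forecast warns about
else:
    print("Match rejected")
```

Nothing in this decision rule checks whether the probe image came from a camera or from a generator, which is why realistic synthetic faces are a threat to systems without additional liveness or provenance checks.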

Experts conclude that these and other cases will lead to significant financial losses, and that new technologies and tools are needed to stand against them.
