Towards trusted AI Week 38 – releasing AI into the wild

Secure AI Weekly, September 20, 2020


Smart tech isn’t always as invincible as we would like it to be, so think twice before relying on it blindly.


Is AI ready to go out in the wild?

ZDNet, September 18, 2020

Like anything else in the world, AI is not perfect, and that is why it is hard to say when the technology is truly ready to be released into the wild without lingering risks: once deployed widely, AI-based tools can cause real, even physical, damage if something goes wrong. “This is a really good question, and one we are actively working on,” said Sergey Levine, assistant professor in the department of electrical engineering and computer sciences at the University of California, Berkeley.

Levine and his colleagues have introduced an approach to ML-based systems known as conservative Q-learning, in which the decisions of an AI tool are kept in check by the criticism of a second program. The professor also noted that a model can be trained either offline or on the spot, but both options carry risks that have to be taken into account. For now, this means that far from every task can safely be automated in the real world.
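The article stays high level, but the core idea of conservative Q-learning can be sketched: in addition to the usual Bellman error, the critic is penalized for assigning high values to actions that do not appear in the offline dataset, so it stays pessimistic about behaviors it has never seen. The snippet below is a rough illustration only, not Levine’s implementation; the network size, hyperparameters and placeholder batch are assumptions.

```python
# Minimal sketch of the conservative Q-learning (CQL) penalty for a
# discrete-action Q-network trained on an offline batch of transitions.
# All sizes and hyperparameters here are illustrative, not from the paper.
import torch
import torch.nn as nn

state_dim, n_actions, gamma, alpha = 8, 4, 0.99, 1.0

q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=3e-4)

def cql_loss(states, actions, rewards, next_states, dones):
    q_values = q_net(states)                                       # Q(s, ·) for all actions
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)  # Q(s, a) for logged actions

    # Standard Bellman error against a frozen target network.
    with torch.no_grad():
        target = rewards + gamma * (1 - dones) * target_net(next_states).max(dim=1).values
    bellman = nn.functional.mse_loss(q_taken, target)

    # Conservative penalty: push Q-values down overall (logsumexp over all
    # actions) while pushing up Q-values of actions actually in the dataset,
    # so the critic does not overestimate actions it has never observed.
    conservative = (torch.logsumexp(q_values, dim=1) - q_taken).mean()
    return bellman + alpha * conservative

# Example update on a random placeholder batch of offline data.
states = torch.randn(32, state_dim)
actions = torch.randint(0, n_actions, (32,))
rewards = torch.randn(32)
next_states = torch.randn(32, state_dim)
dones = torch.zeros(32)

loss = cql_loss(states, actions, rewards, next_states, dones)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```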

Fooling a passport scanner turns out to be possible 

Homeland Security Today, September 14, 2020

Facial recognition is widely used nowadays, from unlocking personal devices with a glance to deployments by law enforcement. At the same time, serious social and professional concerns are being voiced about both the ethics and the security of the technology, and much remains to be improved, from biases to security vulnerabilities. McAfee’s Advanced Threat Research (ATR) team looked into whether the technology, in particular systems that emulate a passport scanner for identity verification, can be relied on.

The specialists created a photo that looks like one real person to the human eye and checked whether the team’s facial recognition algorithm would identify it as someone else. The crafted image did fool the system, which matched the photo to the wrong person.
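The report does not publish attack code, and McAfee’s actual approach relied on more sophisticated image generation, but the general idea of an impersonation attack on an automated matcher can be sketched: nudge an image so that a face-embedding model places it close to a different identity while it still looks like the original to a human. Everything below (the stand-in embedding network, the perturbation budget, the placeholder images) is assumed purely for illustration.

```python
# Sketch of a gradient-based impersonation attack on a face-embedding model.
# This is NOT McAfee ATR's method; it only shows how a matcher can be pushed
# toward the wrong identity with a small, visually subtle perturbation.
import torch
import torch.nn as nn

embedder = nn.Sequential(                      # stand-in face-embedding network
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
)

source_img = torch.rand(1, 3, 112, 112)        # attacker's own photo (placeholder)
target_img = torch.rand(1, 3, 112, 112)        # victim identity photo (placeholder)
target_emb = embedder(target_img).detach()

adv = source_img.clone()
epsilon, step, n_iters = 0.03, 0.005, 40       # illustrative perturbation budget

for _ in range(n_iters):
    adv.requires_grad_(True)
    # Move the adversarial image's embedding toward the target identity.
    loss = 1 - nn.functional.cosine_similarity(embedder(adv), target_emb).mean()
    grad = torch.autograd.grad(loss, adv)[0]
    with torch.no_grad():
        adv = adv - step * grad.sign()                                  # gradient step
        adv = source_img + (adv - source_img).clamp(-epsilon, epsilon)  # stay near source
        adv = adv.clamp(0, 1)                                           # keep a valid image

similarity = nn.functional.cosine_similarity(embedder(adv), target_emb)
print(f"cosine similarity to target identity: {similarity.item():.3f}")
```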

The researchers stressed that the purpose of the experiment was not to discredit facial recognition technology, but to demonstrate that automated systems should be regularly checked for vulnerabilities; otherwise, malefactors can exploit them to bypass passport control or any other critical identification step.

See if your company really needs to implement AI strategies

Dynamic Business, September 18, 2020

While many companies are currently rushing to incorporate AI technologies into their business systems, it is better to stop and think first whether this is the right time and place to do so.

The CSIRO describes AI as “a collection of interrelated technologies used to solve problems autonomously and perform tasks to achieve defined objectives, in some cases without explicit guidance from a human being,” which can include ML, robotics, computer vision, human language technologies, knowledge representation and others. Federal and state organizations have been introducing AI into many business processes, including ones that involve working with large amounts of data. While the new technology makes a number of business tasks easier, it also brings certain risks, including security risks.

This does not mean that AI should not be adopted at all: a company considering AI should first put in place a clear governance structure for how the technology will be implemented, both in terms of ethics and security. “The reality is that AI is evolving rapidly. Australian businesses are experiencing shortages of talent that can understand, develop and implement these technologies. As outlined in the strategy, a collaboration between industry and academia is one way to address this challenge, and should certainly be encouraged,” commented Jeff Olson, Head of Applied AI & Analytics ANZ at Cognizant.
