Towards trusted AI Week 52 – people and AI should collaborate

Secure AI Weekly, by admin, December 27, 2020


For best results, humans and AI must come together and complement each other


Higher benefits, lower risks

CGTN, December 21, 2020

It is already difficult to imagine our lives without artificial intelligence technologies. The quarantine period made this especially evident: predictive capabilities, for example, have become increasingly important. Yet while the technology has demonstrated its benefits in the fight against the pandemic, it has also revealed its vulnerabilities. Other difficulties have become apparent as well: attackers are mounting increasingly sophisticated AI-powered attacks, and unethical use of smart technologies can lead to a wide range of problems.

The existing methods of regulating AI and eliminating its vulnerabilities are currently insufficient, which means we need new forms of cooperation and governance in the field, ones that ensure transparency, privacy, security, and safety in the use of AI. While a number of initiatives already exist, such as the OECD AI Policy Observatory and the global Partnership on AI (PAI), there is still a lot of work ahead.

Team up with AI to fight cybercriminals

TechRepublic, December 24, 2020

Artificial intelligence technologies are not on anyone's side: they can make our lives much easier, but they can also be used successfully by hackers. However, it is within our power to make artificial intelligence our assistant in the fight against malefactors: smart technologies and people simply have to collaborate in the right way. AI can make life a lot easier for security professionals by automating processes that require constant attention. For example, smart systems can be trained to detect threats or suspicious behavior. On the other hand, the human brain is still much better than artificial intelligence at judgment-based decisions.
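
To make the idea of automated threat detection concrete, here is a minimal, purely illustrative sketch of one of the simplest approaches: flagging behavior as suspicious when it deviates sharply from a learned baseline of normal activity. All names, numbers, and thresholds below are hypothetical examples, not taken from the article; real security tools use far more sophisticated models.

```python
# Illustrative sketch only: a z-score check against a baseline of
# "normal" activity. The data and threshold are made-up examples.
from statistics import mean, stdev

def is_suspicious(baseline, observed, threshold=3.0):
    """Flag `observed` if it lies more than `threshold` standard
    deviations above the mean of the baseline measurements."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # No variation in the baseline: anything different stands out.
        return observed != mu
    return (observed - mu) / sigma > threshold

# Hypothetical baseline: failed-login counts per hour on a typical day.
normal_hours = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]

print(is_suspicious(normal_hours, 3))   # a typical hour -> False
print(is_suspicious(normal_hours, 40))  # a burst of failures -> True
```

A monitoring system running a check like this around the clock never tires, which is exactly the kind of constant-attention work the article suggests delegating to machines, while humans handle the judgment calls on what to do about each alert.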

At the same time, we should not forget that cybercriminals are not lagging behind: many of them have already learned to weaponize AI, and some can even hack other people's smart systems, altering the behavior of their algorithms. Worse still, cybercriminals have already learned to build AI-based programs that hunt for vulnerabilities in systems.

The confrontation between security specialists and hackers will not end, but now is the time to combine our efforts with artificial intelligence technologies to build multi-layered and truly effective protection.

Immersive tech, AI and privacy risks

JD Supra, December 16, 2020

Immersive technologies are expected to become more popular over the next five years. In many cases, they turn out to be closely tied to artificial intelligence and machine learning, which can also pose a number of risks. AI plays a huge role in the development of immersive technologies, making interaction with users easier, improving content quality, and adapting better to user input. However, this brings several risks: possible algorithmic bias, and the extensive collection and processing of data, which is itself a threat to data privacy. The use of users' personal information is already regulated by several privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union (EU) and, in the United States, the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and the Illinois Biometric Information Privacy Act (BIPA). These laws raise concerns about the use of immersive technologies that implement ML.
