Sometimes artificial intelligence is not as smart as we think
Techinformed, January 27, 2022
While smart systems are used today to protect organizations from cyberattacks, attackers use them in attacks as well.
This is how a relatively new class of threats has emerged under the general name of adversarial AI, which covers both attacks carried out with AI and attacks on AI systems. Malicious AI, or AI used as a weapon, is often a deepfake created by intelligent systems. As we often note in our reviews, such deepfakes are used to bypass manual or automatic identity verification. Malicious AI can also take the form of smart malware with evasive behavior or of personalized phishing.
The other form of hostile AI is attacks on AI itself, with the AI system as the victim. This includes the notorious poisoning or destruction of training data during machine learning, the data on which the AI's decision making is based. Such attacks lead the targeted systems to produce incorrect results. To carry them out, the attacker must have access to the AI's datasets, which can be gained through ransomware attached to an email or through an insider operating within the organization.
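To make the poisoning scenario concrete, here is a minimal, hypothetical sketch (ours, not from the article) using scikit-learn: an attacker who has gained write access to the training set simply flips a fraction of the labels, and the model trained on the poisoned data performs measurably worse on clean test data.

```python
# Illustrative sketch of label-flipping data poisoning (an assumption,
# not the attack described in the article). Requires scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A toy binary classification dataset standing in for real training data
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# The "attacker" flips 30% of the training labels
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# The same model trained on poisoned data degrades on the clean test set
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Real poisoning attacks are usually subtler than random label flipping, but the sketch shows why write access to training data is all an attacker needs to corrupt an AI system's output.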
Read more about attacks using adversarial AI, ways to counter them, and the future of this threat in the article via the link.
AIthority, January 19, 2022
According to a recently published study produced in collaboration with the World Economic Forum and global academic leaders, one in three organizations has experienced a performance impact due to AI bias.
DataRobot, a leader in AI cloud, has released a report on AI bias produced in collaboration with the World Economic Forum and global academic leaders. The report offers a deeper understanding of how the risk of AI bias can affect today's organizations, and it raises the question of how business leaders can manage, and even mitigate, this risk. The study surveyed 350 organizations across a variety of industries; their answers revealed that many executives are deeply concerned about the risk of bias in AI (54%) and that there is a growing desire for government regulation to prevent it (81%).
“DataRobot’s research shows what many in the artificial intelligence field have long-known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long,” commented Kay Firth-Butterfield, Head of AI and Machine Learning, World Economic Forum. “The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics.”
The Guardian, January 24, 2022
After the hotel discovered that its smart vacuum cleaner had escaped, a reward was posted for its return.
A large disc-shaped robot vacuum managed to escape from the Orchard Park Travelodge hotel in Cambridge where it worked, presumably because robots of this series have a terrible sense of direction. Of course, it is unlikely that the smart device decided to join a Skynet-style uprising of the machines and broke out after months of plotting. The more plausible explanation is that the device's sensors failed to detect the threshold of the front door, and the vacuum cleaner rolled out entirely by accident.
One Travelodge employee wrote on social media that the escapee “could have run anywhere”, and a free drink at the hotel bar was offered as a reward to anyone who returned it. The next day the poor fugitive was found in a hedge by the driveway.
Although the incident was completely harmless, it once again shows that smart systems are sometimes not smart enough even to recognize a front door, let alone handle more serious tasks.