Towards Trusted AI Week 6 – The Future of AI Security

Secure AI Weekly + Trusted AI Blog | February 8, 2023


Navigating the Growth of Ethical AI Solutions in the new EAIDB Report FY 2022

EAIDB, January 31, 2023

The news media is awash daily with reports of privacy breaches, algorithmic prejudice, and AI oversights. Society has shifted from being unaware to acknowledging that AI technologies, and the vast amounts of data they are trained on, pose a real danger to privacy, accountability, transparency, and a fair society.

As a result, interest in ethical AI services is growing across the innovation community, from start-up leaders to corporate clients to society as a whole.

The Ethical AI Database (EAIDB) project, which aims to surface potential issues and identify possible solutions, recently released a report. The EAIDB is a collaboration between the Ethical AI Governance Group and an ethical AI start-up, focused on promoting responsible AI development, deployment, implementation, and management.

The EAIDB defines an “ethical AI company” as a company that either offers methods and tools to make existing AI systems ethical or produces products that eliminate elements of bias, unfairness, or unethical behavior in society. The number of such companies has seen a significant increase in recent years and new ones continue to emerge.

Cyber Insights 2023 | Artificial Intelligence

SecurityWeek, January 31, 2023

Artificial intelligence (AI) adoption is on the rise as industries and society at large recognize the efficiency and cost savings it can bring. However, the danger posed by adversaries using AI as a weapon of attack rather than a tool for improvement is still poorly understood. According to Alex Polyakov, CEO of Adversa.AI, the years 2012 to 2014 marked the start of secure AI research in academia, and it typically takes three to five years for such results to turn into practical attacks. He predicts that starting in 2023, attackers will be able to monetize AI vulnerabilities, resulting in widespread cyberattacks.

In the past, security teams used AI mainly for anomaly detection, but limited resources and a looming economic downturn will drive the need for more automated responses. As AI becomes integrated into every aspect of business, security teams will also have to defend the AI inside the business to prevent it from being turned against the company. That will become harder as attackers come to understand AI and its weaknesses, and find ways to monetize them.

In the future, AI will increasingly be used to predict events and actions focused on people, making it imperative for AI to be complete and unbiased. However, historical data on minority groups is often sparse or reflects existing social biases, which can lead to prejudice and missed opportunities. Efforts to eliminate bias in AI have been made and will continue in 2023, but more work is needed to ensure that AI is accurate, complete, and unbiased.

Read the full article, with comments and thoughts from many experts, at the link.

U.S. Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box

PetaPixel, January 30, 2023

The United States Marines demonstrated the limitations of artificial intelligence (AI) security cameras in a training exercise. According to Paul Scharre’s upcoming book “Four Battlegrounds: Power in the Age of Artificial Intelligence”, the Marines were tasked with helping build the algorithms for the cameras developed by the Defense Advanced Research Projects Agency’s Squad X program.

After six days of training the algorithm, the Marines put the AI security cameras to the test. To their surprise, they were able to easily evade detection by hiding in a cardboard box, somersaulting, or walking like a tree. These strategies were not part of the data that the cameras were trained on, and as a result, the Marines were not detected.

The story highlights the importance of training AI algorithms on a diverse range of data. As Scharre notes, AI can only perform as well as it is trained, and it has no understanding of its own limitations. This can lead people to mistake an AI system's performance on narrow tasks for genuine competence.

In conclusion, while AI can outperform humans at specific tasks, the human ability to adapt and to understand the world more deeply still gives us an advantage in novel situations. The story of the United States Marines and the AI security cameras is an important reminder of the limitations of AI technology and the need for continued development and improvement.

Adversarial machine learning 101: A new cybersecurity frontier

Dataconomy, January 31, 2023

Adversarial machine learning (AML) is a crucial aspect of cybersecurity that is garnering significant attention. With the rapid growth of digital data and the constant evolution of cyber-attacks, effective AML solutions have become imperative. The discipline develops algorithms, methods, and techniques to secure machine learning models from exploitation or manipulation, protecting the security and integrity of data, which has become extraordinarily valuable in the digital age.

AML is a rapidly growing field focused on building models that can withstand attempts to trick or mislead them. The goal is to make models robust against adversarial examples: inputs deliberately crafted to confuse the model, such as minor perturbations to an image or fake data fed to a recommendation system. As machine learning is deployed in more critical applications, such as healthcare, cybersecurity, and autonomous vehicles, the importance of ensuring these algorithms are robust against adversarial attacks keeps growing.
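To make the idea concrete, below is a minimal sketch of one classic way such image perturbations are crafted, the fast gradient sign method (FGSM). The PyTorch model, the [0, 1] pixel range, and the epsilon value are illustrative assumptions, not details from the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft adversarial inputs with the fast gradient sign method (FGSM).

    x: batch of images scaled to [0, 1]; y: true class labels.
    epsilon bounds the per-pixel perturbation size (an assumed value).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge each pixel in the direction that most increases the loss,
    # then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is usually invisible to a human, yet it can flip the model's prediction, which is exactly what makes adversarial examples dangerous in practice.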

One of the major challenges in AML is developing models that generalize to new types of attacks. Adversaries constantly discover new ways to trick algorithms, and researchers work to develop defenses against them, including algorithms that can detect when they are under attack and adapt their behavior to resist it. Although AML is still in its early stages, the progress made so far underlines the importance of this research, and it is expected to grow in importance in the coming years.
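One widely cited baseline defense along these lines is adversarial training: attack inputs are generated on the fly during training so the model learns to classify them correctly. The sketch below reuses the hypothetical fgsm_perturb helper from the previous example; the 50/50 loss weighting and the training setup are assumptions for illustration.

```python
def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One adversarial-training step: optimize the model on both clean
    and FGSM-perturbed batches so accuracy holds up under attack."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)  # craft attacks on the fly
    optimizer.zero_grad()  # discard gradients left over from crafting x_adv
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that this hardens the model rather than detecting attacks; detect-and-adapt schemes of the kind the article mentions are typically layered on top of such robust training.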

Researchers Prove AI Art Generators Can Simply Copy Existing Images

Gizmodo, February 2, 2023

The use of artificial intelligence (AI) in art generation has faced criticism over the potential threat to copyright and privacy. Researchers from tech giants such as Google and DeepMind, as well as universities such as Princeton and UC Berkeley, recently conducted a study which found that AI image generators can memorize images from their training data. This means that instead of generating new images, certain prompts cause the AI simply to reproduce images that may be copyrighted or contain sensitive information.

This study builds on earlier research into AI language models and sheds light on the limitations of AI art generators. It found that both Google's Imagen and Stable Diffusion, a popular open-source platform, were capable of reproducing near-copies of their training images. Detecting memorization turned out to be relatively simple: the researchers ran the same prompt many times and manually checked whether the resulting image appeared in the training set.
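That detection recipe is simple enough to sketch. The helper below is hypothetical, not the researchers' actual tooling: the generate callable, the normalized-correlation similarity measure, and the thresholds are all illustrative assumptions. The intuition is that if many independent samples of the same prompt come out nearly identical, the model is probably reproducing one memorized training image rather than sampling freely.

```python
import itertools
import numpy as np

def looks_memorized(generate, prompt, n=16, threshold=0.95):
    """Heuristic memorization probe: sample one prompt n times and check
    whether the independent generations are near-duplicates of each other.

    `generate` is an assumed callable returning an image as a float array
    of a fixed shape; n and threshold are illustrative values.
    """
    images = [np.asarray(generate(prompt), dtype=np.float64).ravel()
              for _ in range(n)]
    sims = []
    for a, b in itertools.combinations(images, 2):
        a0, b0 = a - a.mean(), b - b.mean()
        denom = np.linalg.norm(a0) * np.linalg.norm(b0)
        # Normalized correlation: 1.0 means the two images are identical
        # up to brightness and contrast shifts.
        sims.append(a0 @ b0 / denom if denom else 1.0)
    return float(np.mean(sims)) > threshold
```

Prompts flagged this way would still need the manual comparison against the training set that the researchers describe, since near-identical samples alone only indicate a likely copy.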

Although the researchers noted that the memorization rate was low, around 0.03%, they emphasized that as AI systems become larger and more sophisticated, AI-generated material becomes more likely to duplicate training data. They also highlighted the privacy risks posed by generative AI: even attempts to filter the training data do not prevent it from leaking through the model.

The findings have important implications for AI art generators and for the AI companies that license users to monetize AI-generated content. While AI developers may treat the low memorization rate as a low risk, the study highlights the potential for sensitive images that people would not want reproduced, such as medical records, to be copied. As AI technology continues to evolve, it is crucial for companies to consider its ethical implications and to build systems that protect privacy and respect copyright.

 

