Towards Trusted AI Week 51 – Exploring the Risks of AI-Based Cyberattacks

Secure AI Weekly + Trusted AI Blog, admin, December 20, 2022


China’s rules for “deepfakes” to take effect from Jan. 10

Reuters, December 12, 2022

In an effort to more closely scrutinize the use of “deepfake” technology and services, the Cyberspace Administration of China (CAC) has announced new rules that will take effect on January 10. Deepfake technology alters facial and voice data to create images and audio that are virtually indistinguishable from the original, which makes it easy to exploit for manipulation or spreading misinformation. With these new regulations, the CAC aims to protect individuals from being impersonated without their consent through the use of deepfake technology.

In addition to protecting individuals, the CAC also intends for these new rules to promote the healthy development of the industry. This includes regulating “deep synthesis service providers,” or platforms that utilize deep learning or virtual reality to alter online content. By implementing these regulations, the CAC hopes to mitigate any potential risks that may arise from the use of these platforms.

Overall, the new rules demonstrate the CAC’s commitment to the responsible and ethical use of deepfake technology in China, balancing the protection of individuals against impersonation with the healthy growth of the industry.

Read about these rules at the link

How new tools like DALL-E could spread misinformation online

Inverse, December 10, 2022

Text-to-image generators, also known as image synthesis engines, use artificial intelligence and neural networks to take a written description and produce an image that represents it. This technology has the potential to revolutionize the way we create and consume media, allowing us to easily generate images based on our imagination or to quickly illustrate complex ideas and concepts. However, the potential for misuse of this technology has also raised concerns, as deepfakes – videos that show someone doing or saying something they didn’t – have already been used for nefarious purposes, including nonconsensual pornography, fraud, and disinformation campaigns.
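
As a rough illustration of how little effort such generation now takes, here is a minimal sketch assuming the open-source Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 checkpoint (neither is named in the article; DALL-E itself is only reachable through OpenAI’s hosted API). A one-line text prompt goes in, a finished image comes out:

    # A minimal sketch, assuming the Hugging Face diffusers library and the public
    # Stable Diffusion v1.5 weights (assumptions, not taken from the article above).
    import torch
    from diffusers import StableDiffusionPipeline

    # Download the pretrained text-to-image model and move it to the GPU.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # A written description goes in; a synthesized image comes out.
    prompt = "a photorealistic photo of a crowded political rally at night"
    image = pipe(prompt).images[0]
    image.save("generated.png")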

One example of a text-to-image generator is OpenAI’s DALL-E, which was unveiled on January 5, 2021, and quickly gained popularity for its ability to produce bizarre and fantastical imagery. A beta version of the tool was made available to 1 million users on July 20, 2022. While the release of DALL-E was met with excitement and fascination, it also sparked a conversation about the potential consequences of this technology, with some raising concerns that these image generators could be abused to create fake news, propaganda, and other forms of disinformation.

To mitigate these risks, each company has approached the release of its text-to-image generator differently. Google, for example, has decided not to release its version of the technology at all, citing concerns about the potential for abuse. OpenAI, on the other hand, released a beta version of DALL-E to 1 million users with a strict content policy in place. Stability AI released a version that includes a watermark to help identify generated images. As the technology continues to advance and become more widely available, it will be important to consider the ethical implications and potential consequences of its use. While the potential for text-to-image generators to revolutionize the way we create and consume media is exciting, it is also important to recognize the potential for misuse and to take steps to prevent it.
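
For a sense of what the watermarking mitigation looks like in practice, below is a minimal sketch in the spirit of Stability AI’s approach, assuming the open-source invisible-watermark package and OpenCV (the article does not specify the exact mechanism any vendor uses): an invisible payload is embedded in the frequency domain of the generated image and can later be read back to flag it as machine-generated.

    # A minimal sketch of invisible watermarking, assuming the "invisible-watermark"
    # package (imwatermark) and OpenCV; the article does not detail the exact scheme.
    import cv2
    from imwatermark import WatermarkEncoder, WatermarkDecoder

    # Embed a short byte payload into the image using a DWT+DCT transform.
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", b"SDV1")
    bgr = cv2.imread("generated.png")            # OpenCV loads images as BGR arrays
    cv2.imwrite("generated_wm.png", encoder.encode(bgr, "dwtDct"))

    # Later, anyone with the decoder can check whether an image carries the mark.
    decoder = WatermarkDecoder("bytes", 32)      # 32 bits = the 4-byte payload above
    print(decoder.decode(cv2.imread("generated_wm.png"), "dwtDct"))  # b"SDV1" if intact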

Read the full article at the link

Report Predicts Rise of AI-Based Cyberattacks in Next 5 Years

RedmondMag, December 15, 2022

Artificial intelligence (AI) has the potential to revolutionize many industries, but it also brings new risks and challenges. According to a recent report, AI-based cyberattacks on organizations could increase in the next five years. These attacks may already be used by nation-states, but could eventually be adopted by criminal groups as well. The report, written by security company WithSecure and commissioned by the Finnish Transport and Communications Agency Traficom, analyzes the potential development of AI-based threats over the next five years.

Currently, AI techniques are used in attacks mostly for social engineering, such as impersonating voices. There are also tools that apply AI to the early stages of attacks, including CAPTCHA breakers, password guessers, vulnerability finders, phishing generators, and deepfake generators. In the next two to five years, AI could be used for information gathering or to evade detection techniques. There is also a risk that AI built for defensive purposes could be exploited by attackers and repurposed as attack tooling.

The report is skeptical about the possibility of AI-based “autonomous malware” attacks in the long term, as current AI techniques are not advanced enough and attackers would face obstacles, including detection. However, the report predicts that AI-based attacks will become more common among less skilled attackers over the next five years as AI knowledge spreads and advanced techniques trickle down from nation-state attackers. The use of AI-based attacks may also increase as defenses against traditional cyberattacks improve, giving attackers more reason to turn to AI-based techniques.

Alex Polyakov, CEO of Adversa AI, recently published a book chapter on the potential risks of applying artificial intelligence. The study, conducted by Alex and the Adversa AI team, offers valuable insights on the topic.

Read the full article at the link

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research, and the worst attacks on AI, delivered right to your inbox.
