Towards Trusted AI Week 6 – Transparency and accountability for automated decision systems, and more

Secure AI Weekly – February 7, 2022


New rules could greatly improve the ethical use of artificial intelligence


China wants deepfake providers to be responsible for their creations

Protocol, February 3, 2022

The Cyberspace Administration of China has published a new draft regulation that would impose oversight obligations on developers of deepfake technology.

The rules cover “deep synthesis Internet services,” meaning any technology that uses deep learning to generate text, images, audio, video or virtual scenes; widely used artificial intelligence tools such as GPT-3 would also fall under the new regulations.

Under the new rules, deepfake developers would be required to verify the identity of each user and actively check the output of their services for potential ethical and moral violations. Providers would also be required to “embed tags that do not interfere with the user experience” so that all deepfake content is identifiable and traceable. In addition, any content designed to mimic real human images or voices would have to meet more stringent standards.

“One interesting piece of this is that pressure is placed on multiple actors to comply — app stores, developers, platforms all have obligations here,” Kendra Schaefer, partner at research organization Trivium China, commented.

Wyden, Booker and Clarke Introduce Algorithmic Accountability Act of 2022 To Require New Transparency And Accountability For Automated Decision Systems

U.S. Senator Ron Wyden of Oregon, February 3, 2022

U.S. Senator Ron Wyden, D-Ore., together with Senator Cory Booker, D-N.J., and Representative Yvette Clarke, D-N.Y., introduced the Algorithmic Accountability Act of 2022, a bill designed to bring transparency and oversight to the software, algorithms and other automated systems used to make important decisions in many areas of life.

The bill is co-sponsored by Democratic Sens. Brian Schatz, D-Hawaii, Mazie Hirono, D-Hawaii, Ben Ray Luján, D-N.M., Tammy Baldwin, D-Wis., Bob Casey, D-Pa., and Martin Heinrich, D-N.M. Under the bill, companies would be required to assess their automated decision systems for bias, effectiveness and other factors when using them to make important decisions. In addition, the Federal Trade Commission’s repository of these systems would become public for the first time, and 75 staff members would be added to the commission to enforce the law.

“Transparency and accountability are essential to give consumers choice and provide policymakers with the information needed to set the rules of the road for critical decision systems,” Wyden commented.

Securing AI during the development process

TechTarget, January 25, 2022

Unfortunately, the data behind AI systems can be corrupted or “poisoned” by attackers. The good news is that protective measures taken during the development phase can keep these systems secure.

Companies can secure AI software systems by using secure development approaches such as DevSecOps. There are also hardened libraries and memory-safe languages such as Rust. Ideally, the application development process should include large-scale automated security testing in the functional test suite, and the same tests should be run with every update. Failure to function securely means failure to function, so the tests should include static code scanning, dynamic vulnerability scanning, and scripted attacks; a sketch of such a scripted-attack test follows.
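As an illustration only (not code from the TechTarget article), the sketch below shows what a scripted-attack test in a pytest-based functional suite might look like. The validate_input and predict functions are hypothetical stand-ins for a real inference API; the point being demonstrated is that hostile payloads must be rejected before they ever reach the model.

# Minimal sketch, assuming a hypothetical validate_input()/predict() API:
# scripted-attack tests added to a functional test suite, as the article recommends.
import math
import pytest


def validate_input(features):
    # Hypothetical input gate run before the model sees any data.
    if not isinstance(features, list) or len(features) != 4:
        raise ValueError("expected exactly 4 numeric features")
    if any(not isinstance(x, (int, float)) or not math.isfinite(x) for x in features):
        raise ValueError("features must be finite numbers")
    return features


def predict(features):
    # Hypothetical model call; only ever invoked on validated input.
    return sum(validate_input(features)) > 0.0


# Scripted "attacks": malformed, oversized and non-numeric payloads an attacker
# might send. Each one must be rejected before reaching the model.
MALICIOUS_PAYLOADS = [
    [],                                 # empty input
    [1.0] * 10_000,                     # oversized vector
    [float("nan"), 0, 0, 0],            # NaN smuggling
    [float("inf"), 0, 0, 0],            # infinity smuggling
    ["1; DROP TABLE users", 0, 0, 0],   # type confusion / injection-style string
    None,                               # wrong type entirely
]


@pytest.mark.parametrize("payload", MALICIOUS_PAYLOADS)
def test_malicious_payloads_are_rejected(payload):
    # Failure to function securely means failure to function: every hostile
    # payload must raise an error, never silently produce a prediction.
    with pytest.raises((ValueError, TypeError)):
        predict(payload)

Running such tests on every update keeps the security checks inside the normal functional test cycle rather than treating them as a one-off audit.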

Many artificial intelligence systems are designed to adapt over time in response to the data they receive, which means attackers can corrupt data streams at the training stage. To reduce the chance of exposing sensitive data, IT teams may consider placing DLP solutions in front of the AI, and behavioral threat detection further reduces the risk of attacks or other vulnerabilities being triggered. Read more about protecting artificial intelligence systems during development in the article.
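As a purely illustrative sketch (the field names, regular expression and threshold are assumptions, not the article's implementation), the example below shows one way incoming training data might be screened before it updates a model: obviously sensitive tokens are redacted in a DLP-style pass, and values that deviate sharply from the data seen so far are quarantined as possible poisoning attempts.

# Minimal sketch, assuming a simple numeric training stream and an
# illustrative z-score threshold; not a production poisoning defense.
import re
import statistics

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def redact_sensitive(text):
    # DLP-style step: strip obvious sensitive tokens before data enters the pipeline.
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)


def filter_poisoned(batch, history, z_threshold=3.0):
    # Drop numeric samples that deviate sharply from what the model has seen so far.
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0
    clean, rejected = [], []
    for value in batch:
        if abs(value - mean) / stdev > z_threshold:
            rejected.append(value)   # suspicious outlier: quarantine for review
        else:
            clean.append(value)
    return clean, rejected


if __name__ == "__main__":
    history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
    incoming = [10.0, 9.7, 57.0, 10.1]        # 57.0 looks like an injected outlier
    clean, rejected = filter_poisoned(incoming, history)
    print("accepted:", clean)                 # [10.0, 9.7, 10.1]
    print("quarantined:", rejected)           # [57.0]
    print(redact_sensitive("contact alice@example.com for labels"))

A simple statistical filter like this will not stop a determined adversary on its own, but it illustrates the principle of inspecting and sanitizing the training stream before it can change the model.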

 

Subscribe for updates

Stay up to date with what is happening! Get a first look at news, noteworthy research and the worst attacks on AI, delivered right to your inbox.
