What is Secure and Trusted AI



What is Trusted AI?

Asking Google to build a route feels like doing the trust fall exercise. Behind me there is only a vague understanding of AI and promises that it can be trusted, that it will help without bias and keep my data secure. A team of AI creators I have never met is supposed to catch me. But will they? I’d like to make sure of that before I fall.

 


    Artificial Intelligence is an imitation of a human mind that bypasses our flaws. No tiredness, infinite attention span, perfect analytical capabilities. It does wonders in tasks that involve large amounts of data, complex calculations and monotonous analysis. What is surprising about it is how many tasks fall under this description: from screwing on parts to driving, from recognizing weapons in luggage to identifying people on footage. AI is a powerful tool. Still, it is only an imitation of a human mind.

    Apart from our skills, we humans have the gifts of selfishness and morality. We make the decisions that serve us and our values best. AI, on the other hand, is amoral. It bases its decisions purely on calculations. Its creators can introduce values as factors in the calculation process, but those creators have their own understanding of morality. Can we trust an amoral machine to make decisions? Can we trust its creators?

    Academics and governments alike are currently trying to formulate a set of criteria that would determine whether a particular AI is trustworthy. 

    Drawing on government initiatives and scientific papers from all over the world, we have formulated PFASSTERS, which stands for the following: Privacy, Fairness, Accountability, Safety, Security, Transparency, Ethics, Robustness, and Sustainability.

    We believe that this set of criteria is enough to ensure the trustworthiness of any AI from both the technical and moral standpoint.

    Trustworthy AI Criteria

    For ease of understanding, we have divided these nine criteria into three groups. The Reliable group includes the robust, accountable, and transparent characteristics; the Resilient group consists of the safe, secure, and private criteria; and the last, Responsible, group is fair, ethical, and sustainable. We will discuss these groups below.

    Reliable AI: share the route

    This group of characteristics is all about how much control you retain over the various processes running in your system under different circumstances. An AI that lives up to all these criteria is controllable and acts predictably.

    What is AI Robustness

    Ultimately, a robust system is able to function as intended under adverse conditions. These conditions include everything from harsh environments and perturbed inputs to adversarial attacks and design errors. We expect “a system to resist change without adapting its initial stable configuration” (Sekar, 2015).

    If this principle is fulfilled, we can rely on outputs being consistent and start worrying about explaining them. 

    To achieve a level of robustness worthy of trust, developers address all known vulnerabilities, doing their best to avoid them or at least build protections around them. The hard part is planning and designing the system with all the yet-to-be-discovered vulnerabilities in mind. This is why robustness is built from the ground up, from complete security measures to something as simple as clear comments in code that ensure continuity and allow the code to be perfected.
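
    To make this concrete, here is a minimal sketch of a robustness check in Python. The model, dataset, and perturbation budget eps are illustrative assumptions; a serious evaluation would use worst-case adversarial perturbations rather than random noise.

        # Minimal robustness check (a sketch, not a complete evaluation):
        # perturb the inputs with small random noise and measure how often
        # the model's predictions change. Model, data, and `eps` are
        # illustrative assumptions.
        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression

        X, y = load_iris(return_X_y=True)
        model = LogisticRegression(max_iter=1000).fit(X, y)

        rng = np.random.default_rng(0)
        eps = 0.1  # assumed perturbation budget
        X_noisy = X + rng.uniform(-eps, eps, size=X.shape)

        stability = (model.predict(X) == model.predict(X_noisy)).mean()
        print(f"Predictions unchanged under perturbation: {stability:.1%}")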

    What is AI Accountability

    Accountability guarantees that the system can be assessed for the other elements of PFASSTERS before it is deployed, while it is in action, and post factum. The possibility of accountability also has to be built in, but we consider it a “moral” criterion. The reason for that is simple. Most companies already create accountable AIs, since it is necessary to monitor progress and prevent loss of profits. Yet they may not grant access to the reports to the agencies that care about the people rather than the money. In short, accountability is a combination of coded mechanisms and a company’s willingness to produce and release reports on what the AI does and how it does it.

    What is AI Transparency

    In keeping with the metaphor of the trust fall, this team-building exercise is much easier to perform when you know what and who is behind you. This awareness is achieved by transparency. At the very least, users need to know that they are interacting with an AI, not a human. At the very most, the technical process, the data and the human decisions that contribute to the system’s conclusion should be documented and clearly explained.

    An inverse relationship exists between the accuracy of AIs and their explainability. ML models perceive and reason differently than humans do; for example, they notice highly predictive patterns that humans do not. We can choose solutions that are transparent, and we can build systems to be interpretable. The question is how much we are willing to trade for this.
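
    As a rough illustration of that trade-off, the sketch below compares an interpretable linear model with a black-box ensemble on the same data. The dataset and models are stand-ins chosen for the example; whether the black box actually wins on accuracy depends on the problem.

        # Interpretable model vs. black box: a sketch of the trade-off.
        # Dataset and model choices are illustrative assumptions.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        interpretable = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
        black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        # The linear model's coefficients can be read directly as feature
        # weights; the forest's decision process cannot, whatever its score.
        print("interpretable:", interpretable.score(X_te, y_te))
        print("black box:   ", black_box.score(X_te, y_te))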

    Imagine choosing a personal assistant. You will undoubtedly assess their skills, work experience and previous achievements, and their ability to think critically and logically. You might also want to make sure they are not a thief or a psychopath; in other words, you might want them to share your values and follow the law. In the same way, creating an AI that inspires trust involves more than reliable algorithms and defenses. The rest of the article will focus on the so-called “moral” aspects of AI: ethics, fairness and accountability. Left unfulfilled, they may not affect the performance of the algorithm, but they will damage the public’s trust in the system and, ultimately, may lead to a complete ban on AI.

    Resilient AI: do no harm and deflect attacks

    As the name suggests, this group consists of characteristics that cover various aspects of safekeeping and ensure the persistence of the data involved and of the system in general.

    What is AI Safety

    Safety is the cornerstone of trustworthy AI. The principle is basic and primary: “Primum non nocere: first, do no harm.” Not to humans, not to the natural or social environment, not in any other circumstances. To meet this criterion, AI engineers consider how the system can be used and misused, intentionally or unintentionally. Then they identify, estimate and reduce the risks AIs could pose in all foreseeable scenarios.

    Keep in mind that we do not say that all the risks get eliminated; they are merely reduced to a tolerable level. The scientists are not to blame for this: they do their best to insert safeguards and minimize errors and unintended consequences. Safety, though, can never be absolute, and neither can the other principles that make up PFASSTERS.

    What is AI Security

    To protect existing AIs from attacks, engineers employ security measures, specifically measures that secure the information within systems. These are universally designed to maintain Confidentiality, Integrity, and Availability, also known as the CIA Triad. Confidentiality depends on separating information based on its degree of sensitivity and providing selective access to it. To maintain Integrity, information is protected from unauthorized deletion and change. Finally, Availability stands for making sure that data can be accessed whenever it is needed.
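
    As one small example of an Integrity control, the sketch below checksums a model artifact so unauthorized changes can be detected before the model is loaded. The file path and the stored reference digest are assumptions for illustration.

        # A minimal Integrity check: refuse to serve a model whose artifact
        # no longer matches the digest recorded at deployment time.
        # The path "model.bin" and the reference digest are assumptions.
        import hashlib

        def sha256_of(path: str) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify_artifact(path: str, expected_digest: str) -> None:
            if sha256_of(path) != expected_digest:
                raise RuntimeError(f"integrity check failed for {path}")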

    What is AI Privacy

    The principle of privacy also centers on preventing harm, harm that is less apparent but, on the scale of a human life, potentially devastating. AI can act in opposition to privacy: smart technologies can be used to profile or classify individuals without their consent. AI can also be exploited to violate privacy: for example, the data a model is trained on can be stolen and de-anonymized.

    We cannot measure the sense of security lost when medical data is leaked. We cannot measure the dignity damaged when a de-identified search history is matched to a person. We can measure the distress: 81% of Americans believe that data collection poses more risks than benefits.

    Privacy is a trade-off. The accuracy and reliability of AI depend heavily on the amount of data it uses. If we limit the data available to train AIs, we face the threat of bad judgements. So we resolve to ensure the integrity of the data, reinforce access protocols and continually improve the means of protection.
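
    One well-studied way to make that trade-off explicit is differential privacy. The sketch below uses the Laplace mechanism: a smaller epsilon means stronger privacy but a noisier answer. The data and epsilon values are synthetic assumptions.

        # Laplace mechanism sketch: privacy vs. accuracy on a mean query.
        # The data, bounds, and epsilon values are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        ages = rng.integers(18, 90, size=1000)  # stand-in sensitive data

        def dp_mean(data, epsilon, lo=18, hi=90):
            sensitivity = (hi - lo) / len(data)  # sensitivity of a bounded mean
            return data.mean() + rng.laplace(0.0, sensitivity / epsilon)

        for eps in (0.01, 0.1, 1.0):
            print(f"epsilon={eps}: {dp_mean(ages, eps):.2f} (true {ages.mean():.2f})")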

    Responsible AI: play fair and be respectful

    The last group of characteristics is mostly dedicated to the aspects of ethical application that do not harm any human rights, now or in the future.

    What is AI Fairness

    The problem of bias in AI is the most publicised, and for good reason. Again, AI is an amoral object. When it is trained on skewed or incorrect data, it picks up the biased notions and treats them as facts. Then, without any pressure from society or moral values, the unfairness is amplified and applied to its tasks, wreaking havoc on individual lives and perpetuating unfairness in society.

    There is no such thing as an unbiased human, so for now it is unclear how we can imitate a mind that does not exist. Still, there are things we can do to prevent AI solutions from discriminating against people and products. The first step is to be vigilant and obsessive about the fairness of the data used to train the models. The next step is to continually improve the accuracy and scope of the data to correct whatever injustices get discovered.
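
    To show what such vigilance can look like in code, here is a minimal sketch of one common fairness check, demographic parity: comparing the rate of positive predictions across groups. The predictions and group labels are synthetic stand-ins for a real model’s outputs.

        # Demographic parity sketch: compare positive-prediction rates
        # across a protected attribute. All data here is synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        group = rng.integers(0, 2, size=1000)            # protected attribute
        y_pred = (rng.random(1000) + 0.1 * group) > 0.5  # deliberately skewed

        rate_0 = y_pred[group == 0].mean()
        rate_1 = y_pred[group == 1].mean()
        print(f"positive rate, group 0: {rate_0:.2f}")
        print(f"positive rate, group 1: {rate_1:.2f}")
        print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")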

    What is AI Ethics

    Artificial intelligence has to follow our laws. In other words, as extensions of our analytic and decision-making power, AIs have to subscribe to societal norms. When it comes to documented and enforced norms, i.e. laws and regulations, compliance is straightforward, albeit technically challenging. The real dilemmas start where government enforcement lags behind the times or takes a laissez-faire approach. The responsibility to determine the ethical balance between stakeholder interests, between means and ends, between the right to privacy and data collection, falls on AI engineers.

    What is AI Sustainability

    So we have come to the last (but not least) characteristic, sustainability. Widely neglected nowadays, this criterion is about meeting the needs of today’s end-users without sacrificing the interests of future generations. It is very important that the actions we take today do not put our future at risk, making all our achievements and wealth temporary.

    How to make trustworthy Artificial Intelligence

    PFASSTERS has nine elements. Each of them requires a unique approach and extensive multidisciplinary research; AI engineers, for instance, should not have to be responsible for filling the gaps in legislation and culture on their own.

    Still, the solution is not as distant as it may seem. According to Gartner, the efforts put into maintaining the integrity of a system and the information it contains boil down to the PPDR framework, which implies the following:

    Predict

    It is not a coincidence that the term “damage control” comes from the realm of PR, not engineering. To protect consumers, the only acceptable approach is thinking ahead. Researchers and creators of AI work to identify and remedy vulnerabilities before systems are deployed. 

    Prevent

    The software and hardware that comprise AI systems should be constantly updated to keep attacks from being successful. Also, to avoid a total loss, they are regularly backed up and scanned in the process. Users are made aware of the best practices that ensure system security. Though, it seems, we never learn.

    Detect

    AIs are policing AIs. If attackers manage to break into the system, behavioral analytics is employed to pinpoint suspicious activity and sound the alarm. Then security specialists set out to find all affected devices and reverse all changes brought on by the malefactors.
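
    As a minimal sketch of that idea, the example below trains an IsolationForest on normal traffic features and flags events that deviate from them. The features and traffic numbers are synthetic assumptions.

        # Behavioral anomaly detection sketch: an IsolationForest learns
        # "normal" request behavior and flags outliers. Data is synthetic.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        normal = rng.normal([10, 0.2], [2, 0.05], size=(1000, 2))  # req/s, error rate
        attack = rng.normal([80, 0.6], [5, 0.10], size=(10, 2))    # bursty, error-heavy
        traffic = np.vstack([normal, attack])

        detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
        flags = detector.predict(traffic)  # -1 marks suspicious activity
        print(f"flagged {int((flags == -1).sum())} of {len(traffic)} events")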

    Respond

    Responses include disconnecting at the first signs of attack and recording the method and objective of the attack. The action is all-encompassing in a sense: not only does the response function prompt the system to minimize the damage, it also ensures that engineers get the data they need to prevent similar breaches.

    The solution takes the form of Red Teaming. It is simple in theory and grueling in practice. Fortunately, its implementation can be outsourced to tried-and-true experts and innovative startups. Spearheading the movement is Adversa. Others will follow. The only thing left for companies and society to do is take action.

     


