Secure and Trusted AI presentations from NVIDIA GTC 2021

Event Overviews · November 10, 2021

Background

The NVIDIA GPU Technology Conference (GTC), held November 8-11, 2021, covered a wide range of topics, including developments in artificial intelligence, graphics, data centers, and more. Among this year's sessions were several devoted to trusted and secure AI, which we would like to look at in more detail.

AI in Fintech: With Great Power comes Great Responsibility [A31358] by Kevin Levitt, Theodora Lau and Spiros Margaris

AI is spreading across all industries, and financial services is no exception: artificial intelligence is used there for customer service, fraud prevention, cybersecurity, and more. The field is no longer limited to banks and fintech companies, as big tech firms and retailers enter the market and apply AI across a huge number of applications. But along with the opportunities that smart technologies open up comes a great responsibility to develop AI methods that can be relied on.

Automating AV Verification and Validation [SE31478] by Justyna Zander and Ahmed Nassar

The talk is dedicated to the building blocks needed to achieve fully automated verification and validation of autonomous vehicles. Justyna Zander and Ahmed Nassar briefly cover HSDL Scenario Generation and discuss in detail the HSDL Observer Engine, which is used for simulation and replay. The ultimate goal is a framework that lets developers create self-validating AV driving functions.

Deploying Trusted, Deep Learning-based Face Recognition: At Scale, in Real Time, Anytime, Anyplace (Presented by Dell Technologies) [A31753] by Kyle Harper and Benji Hutchinson

Despite the sheer number of computer vision-based face recognition solutions, the choice narrows significantly when recognition must be done at scale, in real time, and with high accuracy. The situation is even more complicated if the system has to run on a single workstation yet remain easily transportable and suitable for work in a variety of conditions. In this talk, Kyle Harper and Benji Hutchinson discuss how Paravision's face recognition runs on Dell Technologies' data science workstations while being able to search 10 million identity records in less than 50 milliseconds.
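To make the scale of such an identity search more concrete, here is a minimal sketch of how embedding-based matching typically works: a probe face is reduced to a vector and compared against a gallery by cosine similarity. This is purely illustrative Python, not Paravision's actual implementation; the embedding size and the (scaled-down) gallery size are assumptions.

```python
# Illustrative sketch only: embedding-based identity search, not Paravision's
# actual pipeline. Assumes face images have already been converted to
# fixed-size embedding vectors by a recognition model.
import numpy as np

EMB_DIM = 512           # typical embedding size; an assumption, not a vendor spec
GALLERY_SIZE = 100_000  # scaled down from the 10M figure for a quick local run

# Pretend gallery: one L2-normalized embedding per enrolled identity.
rng = np.random.default_rng(0)
gallery = rng.standard_normal((GALLERY_SIZE, EMB_DIM), dtype=np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search(probe: np.ndarray, top_k: int = 5):
    """Return indices and cosine scores of the closest gallery identities."""
    probe = probe / np.linalg.norm(probe)
    scores = gallery @ probe                      # cosine similarity via dot product
    idx = np.argpartition(-scores, top_k)[:top_k] # top-k candidates, unordered
    idx = idx[np.argsort(-scores[idx])]           # order them by score
    return idx, scores[idx]

probe = rng.standard_normal(EMB_DIM, dtype=np.float32)
print(search(probe))
```

At production scale, a brute-force dot product over 10 million vectors is usually replaced by GPU-accelerated or approximate nearest-neighbor search, which is what makes sub-50-millisecond latencies plausible.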

Developing Trustworthy AI Principles [A31239] by Nikki Pope, Ricardo Chavarriaga, Cathy Cobey, Will Griffin, and Joaquin Quiñonero Candela

There are a huge number of potential trustworthy AI principles: practical and purely aspirational, very specific and general. Any company that builds applications for consumers or businesses sooner or later faces the choice of such principles. The discussion touches on developing and selecting robust artificial intelligence principles for organizations.

Keeping Up with Global AI Trust Regulations [A31071] by Bea Longworth, José Luis Flórez Fernández, Simon Chesterman, Yi-Ling Teo, and Sue Daley

In this discussion, experts from different geographic regions raise issues around current and proposed AI legislation and the impact of new laws and regulations on the development of AI.

Measuring and Mitigating Bias in AI Models [A31241] by Anima Anandkumar, Nicol Turner-Lee, Miriam Vogel, and Matt Mitchell

Biases are caused by very different factors inside and outside the system, from the moment of development to the people who interact with its output. Sometimes the system itself finds patterns imperceptible to a person, which in turn gives rise to new biases. Any bias can negatively affect the entire system as well as the experience of people interacting with it. The discussion addresses how to determine whether an algorithm has biases and how to minimize their impact on people and society, as well as what fairness is and how to build trust in fair AI.
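To give a flavor of what "measuring bias" can mean in practice, here is a minimal Python sketch of one common metric, the demographic parity difference between two groups. The panel itself is a discussion rather than a code walkthrough, so this example and its variable names are purely illustrative assumptions.

```python
# Illustrative sketch only: one simple way to quantify bias is the demographic
# parity difference, i.e. the gap in positive-prediction rates between groups.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups (0 = parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: model approvals for two demographic groups (hypothetical data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # ~0.2, a noticeable gap
```

Metrics like this only flag a disparity; deciding which fairness definition matters and how to mitigate the gap is exactly the kind of question the panelists take up.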

The Path toward Trustworthy AI and Autonomy (Presented by Lockheed Martin) [A31636] by Mauricio Castillo-Effen

Artificial intelligence is gaining more and more influence over our lives, but the barrier posed by the high criticality of such systems remains quite acute. In these applications, unexpected behavior is unacceptable, since it can lead to undesirable consequences, including significant material damage or even harm to people. The talk presents a landscape of trustworthy autonomous and AI-enabled systems, discusses resilience issues, and gives an overview of specific tools, including NVIDIA tools and methodologies that may accelerate their adoption in the future.

Towards Implementing Trustworthy AI, with an Example in Financial Services [A31542] by Jochen Papenbrock, Jochen Biedermann, Jayant Narayan, Melissa Koide, and David Thogmartin

The discussion brings together experts from different geographic regions and global organizations to talk about implementing trustworthy AI and complying with current and future AI regulations, with a particular focus on financial services as the leading sector in this area.

Trustworthy AI: A Global Perspective [A31743] by Keith Strier and Kay Firth-Butterfield

Keith Strier from NVIDIA and Kay Firth-Butterfield from the World Economic Forum hold a broad discussion on trusted AI and the issues it raises for companies, countries, and society.
