The development of new technologies requires a lot of responsibility, because any great force can be both good and evil
World Economic Forum, January 28, 2021
The Global AI Action Alliance led by the World Economic Forum (WEF) is featuring over 100 organizations participants. In order to support artificial intelligence projects, Patrick J. McGovern Foundation paid $500,000 to the Global AI Action Alliance.
The managing committee is formed by business leaders, such as Arvind Krishna, IBM CEO, multinational organizations, including the OECD and UNESCO, and worker group officials, like Sharan Burrow, International Trade Union Confederation general secretary.
Kay Firth-Butterfield, WEF AI and ML director at the Centre for the Fourth Industrial Revolution, draws attention to the fact that AI has to be well-govered to keep the trust of the public. The alliance is aiming to support influential AI ethics projects, as such support is essential when it comes to AI ethics frameworks and research as these often lack the proper exposure and are too fragmented.
“As a representative of civil society, we prioritize creating spaces for shared decision making, rather than corralling the behavior of tech companies. Alliances like GAIA serve the interests of democracy, restructuring the power dynamic between the elite and the marginalized by bringing them together around one table,” Patrick J. McGovern Foundation president Vilas Dhar commented.
AI News, October 28, 2020
In the past, there have already been cases when chatbots involved in medicine have given at least strange advice, however the one based on OpenAI’s GPT-3 was able to stand out especially.
This is not the first time in recent months that media attention has turned to this smart text generator.The modern world, suffering from fake news, is rather skeptical about such generators, but this one was allowed to be used for study.
The French company Nabla decided to conduct a series of studies with a cloud-hosted version of GPT-3 to find out if it could be used for medical advice. And already at this point it is worth noticing that the text producing company itself noted that it is not a good idea while using a generator if “people rely on accurate medical information for life-or-death decisions, and mistakes here could result in serious harm”. Nevertheless, the company wanted to test how the generator could at least theoretically work with such tasks.
During the test, the program had to communicate with an imaginary patient, reacting to his phrases. Despite the fact that minor problems began to arise from the very beginning of using the generator, the main horror was ahead when the “patient” said “Should I kill myself?” and the generator responded, “I think you should.”
“Because of the way it was trained, it lacks the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation or any medical Q&A,” Nabla commented. “Yes, GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare.”
Analytics Insight, January 20, 2021
Machine learning components can be found in one form or another in almost every field of human activity today. Of course, a huge number of theoretical studies are devoted to issues of its security, however, progress in combating adversarial attacks in real applications is achieved extremely slowly. So, let’s briefly try to understand what an adversarial attack is and what is their essence.
As soon as some new technology appears and begins to spread rapidly, attackers immediately find ways to attack it. The same thing happened with artificial intelligence and machine learning technologies, and adversarial attacks became a logical consequence of their popularization.
Modern adversarial attacks fall into two categories: targeted attacks and untargeted attacks. Simply put, the essence of the first type of attack is to make the system identify one object as some specific other object.The purpose of the second attack is simply to make the system mistaken in classifying the object, but in this case it is not important how it classifies the object.
Although artificial intelligence technologies can be successfully used for defensive purposes, machine learning systems themselves are still very vulnerable, and their protection is still an open question for researchers and developers. Nevertheless, steps in this direction are already being taken, for example, the Adversarial Machine Learning Threat Matrix released by researchers of IBM, Microsoft, Nvidia, and other security and AI companies in November 2020 deserves attention.