Towards Trusted AI Week 39 – Chinese artificial intelligence perpetuates gender biases, and other news

Secure AI Weekly, October 4, 2021


Bias is something that can significantly affect how smart systems work


The AI-Bias Problem And How Fintechs Should Be Fighting It: A Deep-Dive With Sam Farao

Forbes, September 29, 2021

Sam Farao, one of Norway’s leading Iranian entrepreneurs, shares his thoughts on bias in AI, with a focus on fintech. Farao is currently the CEO of Banqr, a global financial company specializing in payment processing and revenue-sharing partnerships.

Companies in this sector increasingly rely on AI and machine learning to process and understand data and to make decisions on issues such as creditworthiness, fraud detection and prevention, and customer support, which in turn raises their accountability for how they handle data and make decisions.

“Data is neutral, but that doesn’t mean it is innocent. Bias is a purely human phenomenon that can be introduced into or deduced by data, consciously or unconsciously, tainting the data with it,” commented Farao.

Bias in the operation of such systems can lead to serious consequences, and the risk often originates in the initial data these systems work with. Baseline data, however, is only one route by which bias enters AI; model parameters can amplify biases far more. From Farao’s point of view, AI should be seen not only as a tool for solving our problems as financial service providers, but also as a tool for addressing larger problems in society. Financial companies and financial service providers around the world must treat how they source and handle data as a priority. In addition, fintech companies must constantly watch for data that introduces bias when building effective systems. Read more about protection against biases in the article.
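To make that point concrete, here is a minimal, hypothetical sketch (invented data and feature names, not Banqr’s or any real lender’s system) of how bias baked into historical approval labels survives even when the sensitive attribute is excluded from the model: a correlated proxy feature lets the model reconstruct it.

```python
# Hypothetical illustration: biased historical labels leak through a proxy
# feature even though the sensitive attribute itself is never a model input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)              # sensitive attribute (never fed to the model)
income = rng.normal(50, 10, n)             # identical income distribution in both groups
region = group + rng.normal(0, 0.3, n)     # proxy feature correlated with group

# Invented biased history: group 1 was approved less often at equal income.
logit = (income - 50) / 5 - 1.5 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([income, region])      # sensitive attribute excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())  # noticeably lower
```

Auditing predictions by group, as in the last two lines, is the kind of routine check the article argues fintechs should be making.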

The ‘CEO’ is a man: how Chinese artificial intelligence perpetuates gender biases

The Star, September 30, 2021

Despite the widespread use of artificial intelligence technologies, it is becoming clear that even algorithms designed to be free of cultural bias have problems of their own, and Chinese companies are no exception.

Last Monday, the Mana Data Foundation, a Shanghai-based public welfare foundation, and UN Women published a report highlighting that systematic prejudice against women can be found in many programs: for example, major Chinese search engines, including Baidu, Sogou and 360, tend to return mostly images of men for queries such as “engineer”, “CEO” or “scientist”. At the same time, the words “woman” and “feminine” were often accompanied by references to materials of a sexual nature.

The report also included data on gender discrimination in new media, search engines, open-source code, hiring algorithms and consumption patterns. Its purpose was to provide evidence of gender discrimination in AI algorithms.

A Teenager on TikTok Disrupted Thousands of Scientific Studies

The Verge, September 24, 2021

Bias can come from where it is least expected: a single TikTok video skewed the data of thousands of scientific studies.

The video was posted to TikTok on July 23. In it, a girl named Sarah Frank talks about an easy way to make some money and points users to Prolific.co. “Basically, it’s a bunch of surveys for different amounts of money and different amounts of time,” she explains. The video received 4.1 million views within a month of its publication and sent tens of thousands of new users to the Prolific platform.

Prolific is, at its core, a tool for behavioral researchers, and it did not offer free screening tools to ensure that each study draws a representative sample of the population. Suddenly, scientists accustomed to receiving data from people of a wide range of genders and ages saw their surveys flooded with responses from young women about the same age as Frank.
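For illustration, a representative sample is typically enforced with quota screening: each demographic bucket is capped at its share of the target population. The sketch below is a hypothetical stand-in for such a tool (the bucket names, target shares and accept function are invented, not Prolific’s API):

```python
# Hypothetical quota screening: admit a respondent only while their
# demographic bucket is still below its target share of the sample.
from collections import Counter

targets = {"f_18_24": 0.06, "f_25_plus": 0.44,
           "m_18_24": 0.06, "m_25_plus": 0.44}   # invented population shares
sample_size = 1_000
counts = Counter()

def accept(bucket: str) -> bool:
    """Return True and count the respondent if their bucket is under quota."""
    if counts[bucket] < targets[bucket] * sample_size:
        counts[bucket] += 1
        return True
    return False

# A flood of one demographic no longer skews the sample: after 60 young
# women are admitted, the rest are screened out.
admitted = sum(accept("f_18_24") for _ in range(5_000))
print(admitted)  # 60
```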

At first, the reason for the sharp skew toward a female audience was unknown. Moreover, the situation seriously threatened the reputation of the company running the surveys: it was in no way prepared to be advertised as a way to make money, nor for the radical effect this would have on research results.

“Prior to TikTok, about 50% of the responses on our platform came from women,” said Prolific co-founder and CTO Phelim Bradley. “The surge knocked this up as high as 75% for a few days, but since then, this number has been trending down, and we’re currently back to ~60% of responses being from women.”
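As a back-of-the-envelope check, those percentages pin down how large the surge must have been relative to normal traffic. The baseline volume and the influx’s female share below are assumptions; only the 50% and 75% figures come from Bradley’s quote:

```python
# Weighted-average arithmetic: baseline B responses at 50% female plus an
# influx I at fraction f female gives an overall share (0.5*B + f*I)/(B + I).
def female_share(baseline, influx, influx_female_frac, baseline_frac=0.5):
    return (baseline_frac * baseline + influx_female_frac * influx) / (baseline + influx)

# Solving 0.75 = (0.5*B + 0.9*I)/(B + I) gives I ≈ 1.67*B: if the influx
# was ~90% female, it had to be roughly 1.7x normal daily volume.
print(female_share(baseline=1_000, influx=1_667, influx_female_frac=0.9))  # ≈ 0.75
```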
