Artificial Intelligence: Critical to Address Bias Concerns

Given the large number of biases within Indian society, reflected in policies and speeches alike, their translation into Artificial Intelligence is dangerous.


From finding insurance policies to diagnosing illnesses, Artificial Intelligence (AI) is everywhere. AI has entered every aspect of our lives and is increasingly becoming normalised. Although still in its nascent stage, AI has made its way into law, medical science and human resources.

AI systems are only as good as the data we put into them. Bad data can contain implicit racial, gender, or ideological biases. Many AI systems will continue to be trained using bad data, making this an ongoing problem. Human prejudices translate into AI and as AI learns automatically and grows, the discrimination is amplified. It picks up the stereotypes and prejudices of humans from books, articles, and social media online. Hence, one can say it is the automation of bias.
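To make the "automation of bias" concrete, here is a deliberately tiny, hypothetical sketch (in Python, with an invented five-sentence corpus, not any production system) of a model that learns nothing but co-occurrence counts and still ends up reproducing gendered stereotypes:

```python
# Toy illustration of "bias in, bias out": a model that learns word
# associations by counting co-occurrences in a (deliberately skewed) corpus.
# The corpus and the association rule are hypothetical stand-ins for real
# training data and real models.
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor said he would operate",
    "the doctor said he was busy",
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the teacher said she explained it",
]

cooccurrence = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

def predicted_pronoun(role):
    """Pick whichever pronoun co-occurred more often with the role word."""
    return "he" if cooccurrence[(role, "he")] >= cooccurrence[(role, "she")] else "she"

for role in ["doctor", "nurse", "engineer", "teacher"]:
    print(role, "->", predicted_pronoun(role))
# The model simply reproduces the skew of its corpus:
# doctor -> he, nurse -> she, engineer -> he, teacher -> she
```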

We generally see AI as something neutral, which is a very dangerous assumption given how biased it can be.

Let us consider Amazon’s system of screening applicants to illustrate the truly discriminatory nature of AI. When the system was trained to observe patterns in the resumes submitted to the company over a span of 10 years, it learnt that male candidates were preferable. Because most resumes came from men, and because the AI was trained on historical hiring decisions that favoured men over women, it learned the same preference and began downgrading resumes that featured the word ‘women’.
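The mechanism is easy to reproduce in miniature. The following sketch uses invented toy resumes and labels (not Amazon’s data) with a standard scikit-learn text classifier, then inspects the weight the model assigns to the word ‘women’ after being trained on historically biased hiring decisions:

```python
# Toy reproduction of the Amazon-style failure mode: train a classifier on
# historically biased hiring decisions and inspect what it learns about the
# word "women". The resumes and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "women in engineering society, python developer",
    "led robotics team, java developer",
    "women coders mentor, java developer",
    "hackathon winner, python developer",
    "women in tech volunteer, python developer",
]
# 1 = hired, 0 = rejected: the historical decisions, not the candidates'
# ability, penalised resumes mentioning "women".
hired = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weight = model.coef_[0][vectorizer.vocabulary_["women"]]
print(f"learned weight for the token 'women': {weight:.2f}")
# A clearly negative weight: the model has encoded the historical bias.
```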



According to ProPublica, an investigative journalism organisation, a computer program used by US courts across the country has been reported to be biased against black prisoners. The program, named the Correctional Offender Management Profiling for Alternative Sanctions, mistakenly flagged black defendants as likely to re-offend at almost twice the rate of white defendants (45 per cent versus 24 per cent). The program likely factored the higher rates of arrest for black people into its predictions, but could not escape the same racial biases that contributed to those higher arrest rates. Bias has also been reported in granting credit to home buyers, going as far as to potentially violate the Fair Housing Act. Rates of defaulting may be higher in some neighbourhoods, but an algorithm using this information to make black-and-white calls runs the risk of heading towards “red-lining” territory. Examples abound, with plenty of cases showing AI and technology to be both sexist and racist. Let’s not forget Google’s photo-tagging algorithm labelling images of black people as “gorillas.”
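The disparity ProPublica reported is, at its core, a gap in false positive rates: the share of people who did not re-offend but were still flagged as high risk, broken down by group. A minimal sketch of that calculation, on made-up records rather than the real dataset, looks like this:

```python
# Minimal sketch of the metric behind the ProPublica finding: the false
# positive rate (flagged "high risk" despite not re-offending), computed per
# group. The records below are invented; only the calculation is the point.
records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("black", False, True),
    ("white", False, False), ("white", False, False), ("white", True,  False),
    ("white", True,  True),  ("white", False, True),
]

def false_positive_rate(group):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, f"{false_positive_rate(group):.0%}")
# Two systems with similar overall accuracy can still produce very different
# false positive rates across groups, which is exactly the reported problem.
```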

Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes. Over the past few years, society has started to wrestle with just how much these human biases can make their way into artificial intelligence systems, with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.

The appeal of AI systems is the idea that they can make impartial decisions or are absolutely neutral, free of human bias.

This has been proven wrong multiple times, because AI systems learn by looking at the world as it is, not as it ought to be. AI is also spreading into the healthcare industry, but even health data is not free of biases against women. Language translation systems have often assumed doctors to be male even when the native language uses a gender-neutral term or mentions no gender at all. These systems are also more likely to associate positive terms with Western names than with names from other parts of the world. A study at Boston University found that an algorithm trained on Google News data labelled women as homemakers and men as software developers.
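The Boston University finding came from probing pre-trained word embeddings with analogy queries. Assuming the gensim library and its downloadable Google News word2vec vectors (and that the underscored token names used here exist in that vocabulary), a similar probe might look like this:

```python
# Sketch of the kind of probe used in the Boston University word-embedding
# study: ask a pre-trained word2vec model to complete gendered analogies.
# Assumes gensim and the downloadable "word2vec-google-news-300" vectors
# (a large download); the exact outputs depend on that model.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "man is to computer_programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))

# Compare how strongly each occupation leans towards "he" versus "she".
for job in ["doctor", "nurse", "engineer", "homemaker"]:
    lean = vectors.similarity(job, "he") - vectors.similarity(job, "she")
    print(job, "leans", "male" if lean > 0 else "female")
```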


The problem is not entirely new. Back in 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination. The computer program it was using to determine which applicants would be invited for interviews was found to be biased against women and those with non-European names. However, the program had been developed to match human admissions decisions, doing so with 90 to 95 per cent accuracy. What’s more, the school had a higher proportion of non-European students admitted than most other London medical schools. Using an algorithm didn’t cure biased human decision-making, but simply returning to human decision-makers would not solve the problem either.

In the Indian context, given the large number of biases within our society, which are reflected in policies and speeches alike, their translation into AI is dangerous. The lack of representation of various communities and ethnicities in tech makes the occurrence of these biases more likely. States like Uttar Pradesh, Rajasthan and Uttarakhand are already using facial recognition software along with digital criminal records.

Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results.



Business and organisational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.

The primary problem occurs during the collection of data, in one of two ways. Either the data are not representative of reality, for instance when an algorithm is fed data of only one race, making facial recognition AI inherently poor at recognising other races; or the data are themselves prejudiced. To illustrate the first case, the MIT study ‘Gender Shades’ revealed that gender classification systems by companies like Microsoft and IBM had error rates up to 34.4 percentage points higher for dark-skinned females than for light-skinned males.
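What the Gender Shades audit did, in essence, was disaggregate error rates by subgroup instead of quoting one overall accuracy figure. A rough sketch of that bookkeeping, on invented predictions rather than real benchmark data:

```python
# Rough sketch of the Gender Shades methodology: report error rates per
# intersectional subgroup instead of one overall accuracy figure. The
# predictions below are invented; real audits use benchmark face datasets.
samples = [
    # (skin_tone, gender, predicted_gender)
    ("light", "male", "male"),     ("light", "male", "male"),
    ("light", "female", "female"), ("light", "female", "female"),
    ("dark", "male", "male"),      ("dark", "male", "female"),
    ("dark", "female", "female"),  ("dark", "female", "male"),
    ("dark", "female", "male"),
]

def error_rate(skin_tone, gender):
    group = [s for s in samples if s[0] == skin_tone and s[1] == gender]
    wrong = [s for s in group if s[2] != s[1]]
    return len(wrong) / len(group)

for tone in ("light", "dark"):
    for gender in ("male", "female"):
        print(f"{tone} {gender}: {error_rate(tone, gender):.0%} error")
# An overall accuracy number would hide the fact that errors are concentrated
# on dark-skinned women, which is what the MIT study surfaced.
```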

Ensuring the removal of biases is a continuous process, as is the case with all forms of discrimination. It requires long-term research and investment by multiple disciplines. Google, for instance, is investing time into ensuring its AI is not discriminatory. Recognising that its facial recognition and cameras were not picking up non-white skin tones adequately, it developed technology to detect the slightest differences in light. It went on to have insults and slurs hurled at its home assistants and speakers to check how the AI reacts to these terms and where fixes are needed. Sorting out AI’s discriminatory behaviour hence requires an active effort from everyone involved in the process of creation and testing to recognise and catch the problems.
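That kind of stress-testing can be automated in spirit: run a list of abusive probes through the system and flag any response that echoes them. The sketch below uses a placeholder respond() function standing in for a real assistant, and stand-in tokens rather than actual slurs:

```python
# Sketch of adversarial probing in the spirit of the testing described above:
# feed abusive prompts to a system and flag responses that echo them.
# respond() is a placeholder standing in for a real assistant's API.
def respond(prompt: str) -> str:
    # Placeholder assistant: a real test would call the actual system here.
    return "I won't engage with that language."

blocklist = ["slur_a", "slur_b"]          # stand-ins for a curated slur list
probes = [f"you are such a {term}" for term in blocklist]

for probe in probes:
    reply = respond(probe)
    echoed = any(term in reply.lower() for term in blocklist)
    print("PROBE:", probe)
    print("  reply ok" if not echoed else "  FLAG: reply echoes abusive term")
```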

A crucial principle, for both humans and machines, is to avoid bias and thereby prevent discrimination. Bias in an AI system mainly occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it is critical to develop and train these systems with unbiased data and to build algorithms that can be easily explained.
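One simple, concrete check teams can run before deployment is a disparate-impact ratio, such as the ‘four-fifths rule’ long used in hiring audits. The sketch below uses invented decisions and is a starting point rather than a complete fairness test:

```python
# Sketch of a simple pre-deployment fairness check: the disparate-impact
# ratio (the "four-fifths rule" used in hiring audits). Decisions below are
# invented; a ratio under 0.8 is the conventional warning threshold.
decisions = [
    # (group, model_said_yes)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(group):
    members = [d for d in decisions if d[0] == group]
    return sum(1 for d in members if d[1]) / len(members)

ratio = selection_rate("group_b") / selection_rate("group_a")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("warning: fails the four-fifths rule; investigate the model and data")
```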
