We have come a long way since the ideas behind advancing technologies first started taking shape. From the 1940s, when many believed that an AI in direct contact with humans could have dangerous consequences, we have arrived at a point where robots perform surgery. Not only that, we have artificial intelligence-based droids that look like humans and help elderly people live longer, healthier lives.

Similarly, we are far removed from a past in which such technology was an emerging debate confined to the hands of a few researchers.

Look at us today: we are surrounded by technology and live in the era of artificial intelligence, whether it is a smartphone's voice assistant, a wearable health tracker, home automation, cars, e-commerce platforms, or search engines.

Artificial intelligence has penetrated our lives and become a necessity without our even realizing it. But this does not mean the hard problems are behind us. We have barely learned to harness the full potential of emerging technologies like AI, and we are already facing their social and ethical implications.

The World Economic Forum claimed in 2016 that we are experiencing a new revolution across the world: the fourth industrial revolution, or automation through cyber-physical systems, powered by technologies such as machine learning, blockchain, genome editing, and decentralized governance.

The changes on account of the fourth industrial revolution are many, but so are the underlying challenges.

Looking at the previous three waves, this one seems no different. Each new wave of technology aims to reduce the need for human labour, but it poses several ethical challenges along the way.

The ultimate source of bias: humans

Today, artificial intelligence is not just used for research, but also in sensitive areas that were once fully governed by humans. Whether it is hiring a Java developer, criminal justice, education, or healthcare, AI is aiding decision-making in all walks of life. But the foundation of AI is laid by the way humans function.

And humans have biases. Whether we like it or not, bias is an unwelcome but persistent feature of life, the result of the necessarily limited view of the world that any single person or group can have. The same social bias that shows up when humans make sensitive decisions can show up in artificial intelligence, and it can manifest in dangerous ways, from deciding who gets justice or who gets surveilled to who gets a bank loan.

Human biases are well documented throughout history, from implicit association tests that reveal biases we may not even be aware of, to field experiments that show how much those biases can affect outcomes. These biases have also made their way into the ML models humans design. As early as 1988, the UK Commission for Racial Equality found a British medical school guilty of discrimination.

The school was using a computer program to shortlist the applicants who would be called for interviews, and the program turned out to be biased against women and applicants with non-European names.

Tellingly, the program had been designed to match human admission decisions with 90 to 95 per cent accuracy, so it simply reproduced the evaluators' bias. And yet the school still ended up with a higher proportion of non-European students admitted than any other school in the UK. Using an algorithm did not resolve the human bias problem, but neither did returning to traditional human decision-making.

Years later, in another instance, the investigative news site ProPublica found that a criminal justice algorithm used in Florida mislabelled African-American defendants as high risk at twice the rate it mislabelled white defendants.

Similarly, natural language processing models trained on news articles have been shown to pick up gender stereotypes.
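As a rough illustration, one can probe this kind of association directly in off-the-shelf word vectors. The following sketch uses the gensim library with publicly available GloVe vectors trained on Wikipedia and Gigaword news text; the chosen occupation words are just illustrative, and the first run downloads the vectors:

```python
# A minimal sketch probing gendered associations in pretrained word vectors.
import gensim.downloader as api

# GloVe vectors trained on Wikipedia + Gigaword news text.
wv = api.load("glove-wiki-gigaword-100")

for job in ["nurse", "homemaker", "engineer", "programmer"]:
    # Positive scores lean towards "she", negative towards "he".
    gap = wv.similarity(job, "she") - wv.similarity(job, "he")
    print(f"{job:12s} {gap:+.3f}")
```

Vectors like these are routinely reused as the input layer of downstream NLP models, which is how a stereotype travels from the training text into a model's behaviour.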

How do biases enter AI?

Research at IBM suggests that in the coming five years, the number of biased artificial intelligence-backed systems and algorithms will increase manifold. In other words, AI will explode, but only unbiased AI will survive.

Researchers are learning to deal with this bias and coming up with new solutions to control it and eventually free AI systems of it. However, it is not enough to know that a bias exists. If we want to fix it, it is crucial to know how it arises in the first place. After all, why would we need AIs if they make decisions as biased as humans do?

There are many ways that biases can creep into an AI algorithm. The models built on those algorithms can reflect the biases of organizational teams, the designers on those teams, the data scientists who implement the models, and the engineers who gather the data.

Furthermore, models also reflect the bias inherent in the data. But the reality is more nuanced: bias can enter a model long before the data is collected, as well as at several other stages of the deep learning process.

1. Framing the problem

The first thing researchers do when creating a deep learning model is decide what they want to achieve. Even if the concept they want to measure is nebulous, they must break it down into something computable by finding a parameter to maximize.

The concept is now described purely in terms of that computational goal. So if the algorithm discovers that a certain kind of decision is key to maximizing the goal, it will end up engaging in predatory behaviour, even if that was never the organization's intention.
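To make this concrete, here is a toy sketch, with entirely made-up numbers and a hypothetical lending scenario, of how two framings of "creditworthiness" over the same applicants lead to very different behaviour. If fees collected on defaults are baked into the profit objective, the profit framing happily approves loans to people unlikely to repay:

```python
# A minimal sketch (synthetic numbers, hypothetical lending scenario) of how
# the framing of the objective drives behaviour.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 15, n)                      # applicant income
repay_prob = 1 / (1 + np.exp(-(income - 45) / 10))  # chance of repayment
interest = np.where(income < 40, 0.30, 0.08)        # risky loans priced higher
fees_on_default = 0.40                              # fees recovered even on default
cost_of_capital = 0.10

# Framing 1: approve whenever expected profit per loan is positive.
profit = (repay_prob * interest
          + (1 - repay_prob) * fees_on_default
          - cost_of_capital)
approve_profit = profit > 0

# Framing 2: approve only when the applicant is likely to repay.
approve_repay = repay_prob > 0.5

low = income < 40
print("low-income approvals, profit framing:   ", approve_profit[low].mean())
print("low-income approvals, repayment framing:", approve_repay[low].mean())
```

The model is not malicious; it is faithfully maximizing the goal it was given. The bias was introduced at the moment the problem was framed.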

2. Collecting data

Data collection is a tricky job because it can let biases creep in two ways. First, the collected data might be unrepresentative of reality; second, it might already reflect existing prejudices.

For example, if a face recognition algorithm is fed more pictures of light-skinned faces than dark-skinned ones, it will inevitably be worse at recognizing dark-skinned people.

In another instance, Amazon stopped using a hiring algorithm because it favoured words like "executed" and "captured", which were most commonly found on men's resumes.

It was found that since the algorithm was trained on historical hiring decisions that favoured men over women, it learned to do the same.
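Both failure modes can often be surfaced with simple checks before any model is trained. A minimal sketch, assuming a hypothetical applicants.csv with gender and hired columns:

```python
# A minimal sketch (hypothetical file and column names) of two pre-training
# checks: is each group represented, and do historical labels already differ
# by group?
import pandas as pd

df = pd.read_csv("applicants.csv")  # assumed columns: gender, hired, ...

# 1. Representativeness: how much data do we have per group?
print(df["gender"].value_counts(normalize=True))

# 2. Pre-existing prejudice: does the historical outcome differ by group?
print(df.groupby("gender")["hired"].mean())
```

If one group barely appears in the first check, the data is unrepresentative; if the second check shows very different historical outcomes per group, the labels themselves may encode prejudice, as in the Amazon case above.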

3. Preparing the data

It is also possible to introduce bias into an algorithm while the data is being prepared. Biases can creep in during a process called feature selection, which decides the attributes you want the algorithm to consider.

This might be termed the art of deep learning, as feature selection strongly impacts a model's prediction accuracy. But while the impact on accuracy is an easy parameter to measure, the impact on the model's bias remains concealed.
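A small synthetic sketch of how this concealment happens: even when the protected attribute itself is excluded from the selected features, a correlated proxy (here, a made-up zipcode variable) lets the model reproduce the bias while accuracy looks healthy:

```python
# A minimal sketch (synthetic data) showing why feature selection can hide
# bias: dropping the protected attribute leaves a correlated proxy behind.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)            # protected attribute (0/1)
zipcode = group + rng.normal(0, 0.3, n)  # proxy correlated with group
skill = rng.normal(0, 1, n)              # legitimate feature
# Historical labels that were themselves biased towards group 1.
label = (skill + 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, zipcode])    # "fair" features: group excluded
pred = LogisticRegression().fit(X, label).predict(X)

# The positive-prediction rate still differs sharply by group.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```

Measuring per-group prediction rates, not just overall accuracy, is what exposes the problem.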

Conclusion

While there are multiple entry points for bias, researchers are working on algorithms that can help detect and mitigate these hidden biases in the data. Scientists at IBM have devised an independent bias rating system capable of determining the fairness of an AI system.
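Rating systems of this kind ultimately boil down to computing fairness metrics over a model's decisions. One of the most common is disparate impact, the ratio of favourable-outcome rates between groups; a widely used rule of thumb flags ratios below 0.8. A minimal sketch with made-up decisions:

```python
# A minimal sketch of a disparate-impact check, one of the fairness metrics
# such rating systems compute.
import numpy as np

def disparate_impact(pred, group):
    """Ratio of favourable-outcome rates: unprivileged over privileged."""
    return pred[group == 0].mean() / pred[group == 1].mean()

pred = np.array([1, 0, 1, 1, 1, 1, 1, 1])   # model decisions (1 = favourable)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected group membership
print(disparate_impact(pred, group))         # 0.75 -> below the 0.8 threshold
```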

As AI systems continue to find, understand, and point out inconsistencies in decision-making, they could also shed light on the ways in which humans are cognitively biased.
