
Opinion | Beware the dark side of Artificial Intelligence

Source: livemint.com

Artificial intelligence (AI) is becoming ever more powerful. Consulting firm PwC estimates that AI could contribute up to $15.7 trillion to the global economy by 2030, more than the combined GDP of China and India today. The technology will soon be omnipresent, from household appliances to our financial, legal and justice systems.

That is why we should be very worried about the dark side of AI. And this is not about devilishly powerful AIs enslaving humanity, as depicted in science fiction. The danger is much subtler.

AI will help us make decisions and, in many cases, make them for us. But an AI is only as good as the data fed into it. That data is worked on by deep-learning software, which absorbs it, finds patterns, creates rules to fit those patterns, and keeps tweaking the rules as more data arrives.
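
To make that concrete, here is a minimal, purely illustrative sketch (mine, not the author's) of "fitting rules to patterns and tweaking them as more data arrives". It uses scikit-learn's SGDClassifier, whose partial_fit() updates an existing model incrementally; the features and labels are invented.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# First batch of (feature, label) examples: the model fits an initial "rule".
X1 = np.array([[0.1], [0.2], [0.8], [0.9]])
y1 = np.array([0, 0, 1, 1])
model.partial_fit(X1, y1, classes=np.array([0, 1]))

# A later batch: the existing rule is tweaked, not rebuilt from scratch.
X2 = np.array([[0.15], [0.85]])
y2 = np.array([0, 1])
model.partial_fit(X2, y2)

print(model.predict(np.array([[0.05], [0.95]])))  # reflects the patterns seen so far
```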

In many cases, even the programmers are unaware of how the AI reaches its decisions. The workings are so complex that they become opaque.

These masses of data are fed into the system by humans, and as humans we all carry prejudices, consciously or unconsciously. This may colour the data we give the AI to crunch. Moreover, all the data fed in is current or historical, so it will reflect existing societal biases.

For instance, if an AI is fed the resumes of candidates for a top corporate job, it is almost certain that the system will choose a man, because data shows that men have overwhelmingly outnumbered women as CEOs in the past.

Going by the data, the AI will decide that a man will make a better CEO than a woman.

This data may have nothing to do with the talent or competence of women managers, and everything to do with the fact that they were not promoted because of gender bias. But the AI will never know that. It does not and cannot have any concept of “fairness”. It only knows data. And the idea of what is fair differs from society to society.

These notions also change over time as societies evolve. The AI will not know that either.
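
A hypothetical sketch of the CEO example above: if the historical record encodes a gender skew, a standard classifier trained on it simply reproduces that skew. The features, labels and numbers below are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [is_male, years_experience]; label: 1 = chosen for the top job.
# In this invented historical record, equally experienced women were rarely chosen.
X = np.array([
    [1, 15], [1, 12], [1, 18], [1, 10],   # men: mostly chosen
    [0, 15], [0, 12], [0, 18], [0, 10],   # women: mostly not chosen
])
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Two candidates identical in experience, differing only in gender:
print(model.predict_proba([[1, 15]])[0, 1])  # higher "choose" probability for the man
print(model.predict_proba([[0, 15]])[0, 1])  # lower for the woman, learned from the skewed labels
```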

Inherently biased data can affect credit ratings, insurance plans, one’s higher education and career. In fact, it can change one’s life.

In 2016, an investigation by the American non-profit ProPublica found that COMPAS, AI-driven software that assesses the risk of a person reoffending, was biased against people of colour. Yet judges in some US states still use the software.

In 2015, Google had to apologize after its photo app tagged two black people as gorillas, perhaps because the algorithm’s training dataset did not contain enough pictures of black people.

In 2016, Russian scientists ran a global beauty contest to be judged by an AI. Of the 44 winners, only one had dark skin. The algorithm had been trained mostly with photos of white people, and it had equated “fair skin” with “beauty”.

A study of Google’s AI-driven advertising platform found that men were shown ads for high-paying jobs more often than women. Same with LinkedIn’s job ads.

AIs can also polarize society. On social media networks, deep-learning algorithms make sure that users are shown content that conforms to their preferences and biases. This creates a “filter bubble”.

I keep seeing opinions that resonate with mine, however loony they are, and over time this makes me more isolated from, and less tolerant of, opposing views. Social and political divides deepen. This is how Russian operatives exploited social media during the 2016 US presidential election.

And the more a biased AI is used, the more biased data it generates for the algorithms to learn from: a perfect feedback loop of insidious bias.
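
A toy simulation of that feedback loop (again my illustration, not the author's): a model that always predicts the historically most common outcome feeds its own decisions back into the data, so an invented initial 70/30 skew hardens with every round of use.

```python
decisions = ["A"] * 70 + ["B"] * 30   # historical record, already skewed towards group A

for round_no in range(1, 6):
    share_a = decisions.count("A") / len(decisions)
    # The "model" predicts the majority outcome it sees in the data...
    prediction = "A" if share_a >= 0.5 else "B"
    # ...and its 100 new decisions are recorded as fresh "data" for the next round.
    decisions.extend([prediction] * 100)
    print(f"round {round_no}: share of A in the data = "
          f"{decisions.count('A') / len(decisions):.2f}")
```

Running this, the share of A climbs from 0.70 towards 1.0 round after round: the skew in the data grows precisely because the biased decisions are treated as new evidence.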

What if governments start using AI to take decisions on matters like resource allocation and national security? Politicians may lose power, or retire, but the AI (by now opaque in its complexity) will keep spewing out results, even though they may have calamitous consequences in the real world.

Technologists alone will never be able to solve the problem. These are not merely technical issues; they are human, ethical and philosophical ones.

Some serious questions need to be answered before we jump, whistling and cheering, on the AI bandwagon.
