
Source: telegraph.co.uk

Robotic artificial intelligence platforms that are increasingly replacing human decision makers are inherently racist and sexist, experts have warned.

Programmes designed to “pre-select” candidates for university places or to assess eligibility for insurance cover or bank loans are likely to discriminate against women and non-white applicants, according to recent research.

Professor Noel Sharkey, Co-Director of the Foundation for Responsible Robotics, said more women need to be encouraged into the IT industry to redress the automatic bias.

He said the deep learning algorithms that drive AI software are “not transparent”, making it difficult to redress the problem.

Currently, approximately 9 per cent of the UK’s engineering workforce is female, and women make up only 20 per cent of those taking A-level physics.

“We have a problem,” Professor Sharkey told the BBC’s Today programme.

“We need many more women coming into this field to solve it.”

His warning came as it was revealed that a prototype programme developed to short-list candidates for a UK medical school had selected against women and black and other ethnic-minority candidates.

Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.

When they asked the machine to complete the sentence “Man is to computer programmer as woman is to x”, the machine answered “homemaker”.
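
For readers curious how such an analogy test works in practice, here is a minimal sketch using the publicly released Google News word2vec vectors and the gensim library. The article does not specify the researchers’ exact tooling, so the model file, tokens and API shown here are illustrative assumptions rather than the original study’s code.

```python
# Minimal sketch of the word-embedding analogy test described above.
# Assumes the publicly released Google News word2vec vectors have been
# downloaded; the file path is illustrative.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man : computer_programmer :: woman : x" is answered by vector
# arithmetic: x is the word whose vector is closest to
# computer_programmer - man + woman.
results = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, similarity in results:
    print(f"{word}\t{similarity:.3f}")
```

Because the embedding simply reflects word co-occurrence patterns in the Google News text it was trained on, any gendered associations in that text surface directly in the arithmetic, which is the bias the Boston University researchers highlighted.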

A separate US study built a platform intended to accurately describe pictures, having first examined huge quantities of images from social media.

When shown a picture of a man in a kitchen, it still labelled the image as a woman in a kitchen.

Maxine Mackintosh, a leading expert in health data, said the problem is mainly the fault of skewed data being used by robotic platforms.

“These big data are really a social mirror – they reflect the biases and inequalities we have in society,” she told the BBC.

“If you want to take steps towards changing that you can’t just use historical information.”

In May last year, a report claimed that a computer program used by a US court for risk assessment was biased against black prisoners.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool was much more prone to mistakenly labelling black defendants as likely to reoffend, according to an investigation by ProPublica.

The warning came in the same week the Ministry of Defence said the UK would not support a change to international law to ban pre-emptive “killer robots” able to identify, target and kill without human control.
