AREN’T ARTIFICIAL INTELLIGENCE SYSTEMS RACIST?

Source – https://www.analyticsinsight.net/

No wonder Artificial Intelligence is the future; we have seen it applied in nearly every field by now. The problem isn't with the technology, it is with the bias that goes into it, says Timnit Gebru. She adds that AI is built in a way that replicates the mostly white, male-dominated workforce making it. Right from her first visit to the conference in Spain that is by far the world's most important AI gathering to date, she has seen a vast difference between the number of men and women, with men obviously dominant. She highlights two things she believes have remained constant over time: one, how technologically advanced we are becoming with every passing day, and two, how biased the work culture is, something companies fail to acknowledge.

Dr. Gebru later co-founded Black in AI, a community of Black researchers working in artificial intelligence. She completed her Ph.D. and was then hired by Google. It was during this period that she told Bloomberg News that AI suffers from what she called a "sea of dudes" problem, which left everyone stunned. She described how, over a period of five years, she had worked with hundreds of men while the women could be counted on one's fingers.

It doesn't end there. A few years back, a researcher in New York saw first-hand how biased AI was against Black people, and an incident in which a Black researcher learned that an AI system could not identify her face until she put on a white mask raised eyebrows.

Amid all this, Dr. Gebru was fired. She said the firing was an aftermath of her criticism of Google's hiring of minorities. When Dr. Margaret Mitchell, her co-lead on Google's Ethical AI team, defended her, Google removed her too, without comment. This sparked arguments among researchers and tech workers.

Things got worse when Google tried its hand at image recognition. The AI model was trained to categorize photos by what was pictured, for example dogs, birthday parties, or food. But then one user noticed a folder named "Gorillas". On opening it, he found about 80 photos that he had taken with a friend at a concert. His friend was Black. The point is that such AI models are trained by engineers who choose the data.

Another case along the same lines is that of Deborah Raji, a Black woman from Ottawa. Working at a start-up, she once came across a page filled with faces that the company was using to train its facial recognition software. She kept scrolling, only to find that more than 80% of the images were of white people, and more than 70% of those were of men. At the time she was working on a tool that would automatically identify and remove pornography from images people posted to social networks; the system was meant to learn the difference between the pornographic and the anodyne. This is where the problems crept in. The G‑rated images were dominated by white people, but the pornography was not, and so the system began to flag images of Black people as pornographic. This is why choosing the right data matters, and since the people who chose this data were mostly white men, they saw nothing wrong with it.
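Her point about data selection can be made concrete with a toy experiment. The sketch below is entirely hypothetical, with synthetic data and invented feature names, and is not the start-up's actual system: it trains a simple classifier on a dataset in which skin tone correlates with the "explicit" label only because of how the data was collected, and the learned weights show the model leaning on that spurious feature.

```python
# Hypothetical sketch: how a skewed training set teaches a classifier
# a spurious shortcut. Two features are simulated: one that genuinely
# predicts the label ("content_signal") and one that should be
# irrelevant ("skin_tone") but correlates with the label purely
# through biased data collection.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Label: 1 = pornographic, 0 = anodyne.
label = rng.integers(0, 2, n)

# Collection bias: the anodyne (G-rated) pool is dominated by white
# people while the explicit pool is not, so skin tone ends up
# correlated with the label.
skin_tone = np.where(label == 1,
                     rng.random(n) < 0.6,
                     rng.random(n) < 0.1).astype(float)

# A genuine but deliberately noisy content feature.
content_signal = label + rng.normal(0.0, 2.0, n)

X = np.column_stack([content_signal, skin_tone])
model = LogisticRegression().fit(X, label)

# The coefficient on the spurious skin-tone column dwarfs the one on
# the genuine content feature: the model has learned the shortcut.
print(dict(zip(["content_signal", "skin_tone"], model.coef_[0].round(2))))
```

Rebalancing the collection, or removing the proxy, makes the shortcut disappear, which is exactly why who chooses the data matters.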

Before working for Google, Dr. Gebru had joined hands with Joy Buolamwini, a computer scientist at MIT. Ms. Buolamwini, who is Black, faced this bias in her own work too: she has recounted more than once how an AI system recognized her face only when she wore a white mask.

In later years, Joy Buolamwini and Deborah Raji joined hands to test the facial recognition technology from Amazon, marketed under the name Amazon Rekognition. They found that Amazon's technology, too, had difficulty identifying the sex of female and darker-skinned faces. Amazon later called for government regulation of facial recognition, but it did not step back from attacking the researchers in both private emails and public blog posts.
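The core of such an audit is disaggregated evaluation: error rates are reported per demographic subgroup instead of as a single overall accuracy. The sketch below is a hypothetical illustration with made-up numbers and a simulated classifier; it does not call Amazon Rekognition or any real service.

```python
# Hypothetical disaggregated audit: accuracy per subgroup rather than
# one aggregate number. All labels, predictions, and error rates are
# simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)

groups = ["lighter_male", "lighter_female", "darker_male", "darker_female"]
subgroup = rng.choice(groups, size=2000)

# Ground-truth gender derived from the subgroup tag.
true_gender = np.array([g.endswith("female") for g in subgroup])

# Simulated classifier whose error rate differs by subgroup, mirroring
# the kind of disparity the researchers' audit reported.
error_rate = {"lighter_male": 0.01, "lighter_female": 0.07,
              "darker_male": 0.12, "darker_female": 0.35}
flip = np.array([rng.random() < error_rate[g] for g in subgroup])
predicted_gender = true_gender ^ flip  # wrong wherever flip is True

for g in groups:
    mask = subgroup == g
    accuracy = np.mean(predicted_gender[mask] == true_gender[mask])
    print(f"{g:16s} accuracy = {accuracy:.2%}")
```

An aggregate accuracy over this set would look respectable while the worst-served subgroup fares far worse; that gap is what a disaggregated report makes visible.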

Dr. Mitchell and Dr. Gebru responded with an open letter rejecting Amazon's argument and calling on the company to stop selling the technology to law enforcement.

Dr. Gebru and Dr. Mitchell had struggled to bring about change inside the organizations they worked for, but those efforts did not pay off.

Dr. Gebru then wrote a research paper with six other researchers, including Dr. Mitchell. The paper examines a system Google builds to support its search engine and shows how it can exhibit bias against women and people of colour.
