
ADDRESSING BIASES IN AI FOR IMPROVING ORGANIZATIONAL DIVERSITY

Source: analyticsinsight.net

With the advancement of AI, it has become imperative to acknowledge the biases ingrained in AI models.

The deaths of George Floyd, Breonna Taylor, and Ahmaud Arbery in the USA have drawn fresh attention to existing biases in society. These unfortunate events have not only forced authorities and governments to reckon with their systemic strategies toward this societal failing, but have also made many organizations take cognizance that AI might carry similar biases.

Amazon's recent announcement that it would stop providing facial recognition technology to US police departments for a year makes it apparent that, despite its advantageous uses, bias in AI is perilous. IBM followed suit, abandoning its facial recognition research altogether.

Understanding AI Biases

Any technology that aids human advancement has flaws. While it benefits society, its defects can be precarious to society's progress, and AI is no exception. As many activists, organizations, and studies suggest, AI systems can end up favoring one section of society over another. That is why it becomes imperative to acknowledge and understand these biases.

Facial recognition is part of day-to-day life. Whether as a mobile phone application or a tool organizations use to identify their employees, its use is vast, useful, and dangerous.

The MIT Gender Shades study revealed that AI-based facial recognition systems developed by companies such as IBM, Microsoft, and Face++ are more reliable at recognizing lighter-skinned individuals than darker-skinned ones. All the companies tested performed better on lighter skin than on darker skin, with a difference of 19.2%, and the study found an error-rate gap of 34.4% between lighter-skinned males and darker-skinned females.

This type of facial recognition bias was illustrated when a facial recognition system failed to recognize Oprah Winfrey, and when a Brown University student was misidentified as a suspect in the Sri Lankan blasts.

Another study found that gender-based bias is a further concern in AI: males are over-represented relative to females across AI organizations. Only 15% of Facebook's AI staff are women, and at Google the figure is less than 10%.

This comes at a time when organizations such as the IMF and PepsiCo have already been headed by women.

Reasons for Biases

Any AI-based program is built on a set of instructions referred to as an algorithm. Data engineers, data scientists, and data analysts are responsible for using these algorithms to plan sensible strategies for applying artificial intelligence.

However, most of the time the biases embedded in AI are unintentional and unconscious. Most AI models are governed by deep learning, and when a deep learning algorithm is fed recurring, bias-laden examples, the system fails to recognize new or under-represented patterns.

For example, a facial recognition model trained predominantly on images of white-skinned males will fail to recognize darker-skinned females. Such practices can lead to offensive classifications based on gender, race, and minority status.
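The skew described above can be caught before training by auditing how each demographic group is represented in the dataset. The following sketch (the group names, proportions, and the 10% threshold are illustrative assumptions, not figures from the article) counts each group's share and flags under-represented ones:

```python
from collections import Counter

def representation_report(group_labels, min_share=0.10):
    """Report each demographic group's share of a training set and
    flag groups that fall below a minimum representation threshold."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        # (share of dataset, True if under-represented)
        report[group] = (share, share < min_share)
    return report

# Hypothetical demographic labels for a face-image training set:
labels = (["lighter_male"] * 70 + ["lighter_female"] * 20 +
          ["darker_male"] * 7 + ["darker_female"] * 3)

for group, (share, flagged) in representation_report(labels).items():
    print(f"{group}: {share:.0%}" + ("  <- under-represented" if flagged else ""))
```

Flagged groups are candidates for collecting more data before the model is trained, rather than discovering the gap only after deployment.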

Another concerning bias for any AI-based model arises when data is tweaked to fit the intentions of the data scientists and analysts who curate it. This is known as selection bias, and it can lead to the presentation of untrue or skewed results.

A lack of training data can also generate biases within a system.

Solutions for AI-based biases

Like any software defect, AI-based biases can be corrected through a strategic approach. By implementing a gender-neutral perspective, these biases can be rooted out. This can be achieved by:

  • Applying Intellectual Diversity – All data-driven algorithms are governed by human intelligence. Building intellectual diversity into the staff lets the organization draw on varied academic disciplines, higher risk tolerance, and diverse political perspectives, and it enhances creativity and productivity. Intellectual diversity also helps in recognizing and correcting existing AI biases.
  • Developing Inclusive Software – Developing an AI model with gender and racial inclusiveness ensures equal representation of individuals in the model. It also means data is collected by teams with diverse experience, backgrounds, ethnicities, and viewpoints.
  • Cross-checking the Algorithm – Cross-checking an algorithm helps identify patterns that are irrelevant, repetitive, or unintended, curbing the biases that arise from a lack of data.
  • Workspace Inclusiveness – Organizations must ensure that all employees are equally represented. This uplifts under-represented groups and evokes a sense of responsibility toward the organization. Workspace inclusiveness must also include channels for discrimination feedback, so that a strategic approach to solving the problem can be planned.
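One concrete form of "cross-checking the algorithm" is a per-group error audit in the spirit of the Gender Shades methodology: compute the model's error rate separately for each demographic group and report the gap between the best- and worst-served groups. This is a minimal sketch under that assumption; the function name and the toy data are illustrative, not from the article:

```python
def group_error_rates(y_true, y_pred, groups):
    """Compute a classifier's error rate per demographic group,
    plus the gap between the best- and worst-served groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        errors = sum(1 for i in idx if y_true[i] != y_pred[i])
        rates[g] = errors / len(idx)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Toy labels and predictions for two hypothetical groups:
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 1, 0]
groups = ["lighter", "lighter", "lighter", "lighter",
          "darker", "darker", "darker", "darker"]

rates, gap = group_error_rates(y_true, y_pred, groups)
print(rates, f"gap={gap:.2f}")
```

A large gap is a signal to revisit the training data or the model before deployment, rather than a single aggregate accuracy number that can hide disparities between groups.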
