
How Machine Learning is Influencing Diversity & Inclusion

Source: informationweek.com

Our society is in a technological paradox. Life events for many people are increasingly influenced by algorithmic decisions, yet we keep discovering how those essential algorithms discriminate. Because of that paradox, IT management is in an unparalleled position to combine human intervention that addresses diversity and inclusion within a team with equitable algorithms that are accountable to a diverse society.

IT managers face this paradox today due to the increased adoption of machine learning operations (MLOps). MLOps relies on IT teams to help manage the pipelines it creates, and the algorithmic systems those teams support need to be inspected with a critical eye for outcomes that can carry social bias.

To understand social bias, it is essential to define diversity and inclusion. Diversity is an appreciation of the traits that make a group of people unique, while inclusion is the set of behaviors and norms that make people from these groups feel welcome to participate in a given organization.

Social biases enter through two key processes when developing software or processes driven by algorithmic decisions. One source is the fragility inherent in machine learning classification methods. Models classify training data either through statistical clustering of observations or by creating a boundary that mathematically predicts how observations associate, such as a regression. The trouble occurs when those associations are declared without consideration of societal issues, exacerbating real-world concerns.
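As a minimal sketch of the two approaches named above (synthetic data, scikit-learn assumed), the snippet below contrasts unsupervised clustering with a learned regression boundary. Note that neither carries any built-in notion of fairness, so auditing outcomes by group remains a separate, deliberate step:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic populations with different feature distributions.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(2.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# 1) Statistical clustering of observations (labels never used).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# 2) A boundary that mathematically predicts how observations
#    associate, here a logistic regression.
preds = LogisticRegression().fit(X, y).predict(X)

# Either way, the math alone "declares" the association; checking how
# predictions distribute across real-world groups is a separate step.
print("cluster sizes:", np.bincount(clusters))
print("boundary accuracy:", (preds == y).mean())
```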

Many biases exist within the commercial machine learning applications people use every day. Researchers Joy Buolamwini and Timnit Gebru released a 2018 study identifying gender and skin-type bias in commercial artificial intelligence systems. Their team conducted the study after discovering that a facial recognition demonstration worked only for a light-skinned person.

A second source of systemic bias occasionally occurs during data cleansing. A dataset's observations can be classified such that they do not represent real-world features in statistically adequate proportions. That difference in observation counts produces an unbalanced dataset, in which data classes are not represented equally. Training a model on an unbalanced dataset can introduce model drift and produce biased outcomes. The conditions range from undersampled to oversampled data, and technologists have warned for years that few publicly available datasets consistently gather representative data.
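Here is a minimal, hedged sketch (synthetic data, scikit-learn assumed) of detecting class imbalance and one common remediation, random oversampling of the minority class. Real projects may instead undersample, reweight, or collect more representative data:

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(1)

# 950 majority-class rows vs. 50 minority-class rows (~5%).
X = rng.normal(size=(1000, 3))
y = np.array([0] * 950 + [1] * 50)

print("class counts before:", np.bincount(y))  # [950  50]

# Random oversampling: duplicate minority rows until classes match.
X_min, y_min = X[y == 1], y[y == 1]
X_min_up, y_min_up = resample(X_min, y_min,
                              replace=True, n_samples=950,
                              random_state=1)

X_bal = np.vstack([X[y == 0], X_min_up])
y_bal = np.concatenate([y[y == 0], y_min_up])

print("class counts after:", np.bincount(y_bal))  # [950 950]
```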

As algorithmic models influence operations, executive leaders can incur liability, especially if the outcome involves the public. The price is the risk of deploying an expansive system that reinforces institutional discriminatory practices.

A George Washington University research team published a study of Chicago rideshare trips and census data. It concluded that a fare bias existed depending on whether the neighborhood of the pick-up point or destination contained a higher percentage of non-white residents, low-income residents, or high-education residents. This is not the first social bias discovery for commercial services.

In 2016, Bloomberg reported that the algorithm behind Amazon's Prime Same-Day Delivery service, meant to identify neighborhoods where the “best” recipients live, overlooked African American neighborhoods in major cities, mimicking a long-standing pattern of economically redlined communities. Political leaders asked Amazon to adjust the service. The expansion of software and machine learning has increased demand for people trained to correct model inaccuracies, especially when the cost of an error is high.

IT leaders and managers have a golden opportunity to substantially advance both the quality of ML initiatives and the objectives of diversity and inclusion. IT executives can focus diversity metrics on hiring for positions related to an organization’s machine learning initiatives. Doing so would raise the organization’s accountability for inclusion and diversify the personnel who recommend accountability tactics during the design, development, and deployment phases of algorithm-based systems.

Human in the loop

Imagine a team established to recommend which models should operate with a human-in-the-loop (HITL) protocol because of their potential societal impact. HITL combines supervised machine learning and active learning so that critical emotional intelligence is infused into the decisions a machine learning model makes. Such a team could also assist in developing ensemble methods, coordinating the classifications of multiple algorithms to achieve an outcome.
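A minimal sketch of such a HITL gate follows, assuming low model confidence as the trigger for human review; the threshold and the review queue are hypothetical placeholders for a policy the team itself would define:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

CONFIDENCE_THRESHOLD = 0.8  # assumed policy value, not prescribed

def decide(x_row):
    """Auto-decide when confident; otherwise defer to a human."""
    proba = model.predict_proba(x_row.reshape(1, -1))[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return ("auto", int(proba.argmax()))
    return ("human_review", None)  # queued for a reviewer

routed = [decide(row)[0] for row in X[:100]]
print("sent to human review:",
      routed.count("human_review"), "of", len(routed))
```

The deferred cases could also feed an active-learning loop, with reviewer decisions labeling the hardest examples for retraining.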

Legislation against facial recognition, spurred by civil rights protests in response to police brutality, has prompted C-suite executives to consider how empathetic their organizations are regarding diversity issues. The work to be done means significant shifts will occur faster. Cisco recently fired several employees for discriminatory comments made during an online town hall on race. Hope also abounds: Microsoft CEO Satya Nadella announced a diversity investment as an imperative to combat AI bias.

Signs of public interest in better algorithmic fairness are emerging, such as the Safe Face Pledge, an online call for companies to publicly commit to mitigating the abuse of facial recognition technology. In addition to civil rights groups monitoring algorithmic fairness, there is the Algorithmic Justice League, an organization dedicated to highlighting algorithmic bias and recommending practices to prevent discrimination in programmatic systems.

In the race to extract business value from algorithms, machine learning has linked ethics to product and service development. Picking the right responses to protect integrity will not be easy. But focusing on diversity and inclusion when filling the roles associated with machine learning provides a way to spot troubling patterns and differences that can exacerbate social bias. Championing the right diversity and inclusion choices is an essential reminder that ethics is never divorced from technology. IT management should embrace it as a way to influence the world for the better.
