
This New Algorithm can Explain Artificial Intelligence (XAI)

Source – https://www.eletimes.com/

Researchers from the University of Toronto and LG AI Research have developed an “explainable” artificial intelligence (XAI) algorithm that can help identify and eliminate defects in display screens.

The new algorithm, which outperformed comparable approaches on industry benchmarks, was developed through an ongoing AI research collaboration between LG and U of T that was expanded in 2019 with a focus on AI applications for businesses.

Researchers say the XAI algorithm could potentially be applied in other fields that require a window into how machine learning makes its decisions, including the interpretation of data from medical scans.

XAI is an emerging field that addresses issues with the ‘black box’ nature of machine learning strategies.

In a black-box model, a computer might be given a set of training data in the form of millions of labeled images. By analyzing the data, the algorithm learns to associate certain features of the input (images) with certain outputs (labels). Eventually, it can correctly attach labels to images it has never seen before.

The machine decides for itself which aspects of the image to pay attention to and which to ignore, meaning its designers will never know exactly how it arrives at a result.
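As a concrete (and deliberately simple) illustration of that workflow, the sketch below trains an off-the-shelf classifier on labeled images with scikit-learn. The dataset and model are stand-ins chosen for brevity, not details from the study:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled images in, label predictions out -- with no account of *how*.
digits = load_digits()                  # 8x8 grayscale digit images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)             # learns feature-to-label associations

# The model now labels images it has never seen before...
print("accuracy on unseen images:", model.score(X_test, y_test))
# ...but exposes nothing about which pixels drove any single decision.
```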

But such a “black box” model presents challenges when it’s applied to areas such as health care, law, and insurance.

For example, a machine learning model might determine that a patient has a 90 percent chance of having a tumor. The consequences of acting on inaccurate or biased information are literally life or death. To fully understand and interpret the model’s prediction, the doctor needs to know how the algorithm arrived at it.

In contrast to traditional machine learning, XAI is designed to be a “glass box” approach that makes decision-making transparent. XAI algorithms run alongside traditional algorithms to audit the validity and the level of their learning performance. The approach also provides opportunities to carry out debugging and find training efficiencies.
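The sketch below illustrates the glass-box idea in miniature: a transparent model (here, logistic regression on a public dataset) produces both a probability and a per-feature account of how it got there. This is only an analogy for the auditing described above, not the researchers’ algorithm:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
model = LogisticRegression(max_iter=1000).fit(X, data.target)

patient = X[:1]                          # one patient's standardized features
prob = model.predict_proba(patient)[0, 1]
print(f"predicted probability: {prob:.0%}")

# Per-feature contributions to this one decision (coefficient * value),
# so a reviewer can audit *why* the model leans the way it does.
contributions = model.coef_[0] * patient[0]
for i in np.argsort(np.abs(contributions))[::-1][:5]:
    print(f"{data.feature_names[i]:>25s}: {contributions[i]:+.2f}")
```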

There are two main approaches to building XAI algorithms. The first, known as backpropagation, relies on the underlying AI architecture to quickly calculate how the network’s prediction corresponds to its input. The second, known as perturbation, sacrifices some speed for accuracy and involves changing data inputs and tracking the corresponding outputs to determine the necessary compensation.
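Here is a minimal sketch of both families, using a toy PyTorch network with random weights. The architecture, input size, and patch size are illustrative assumptions, not the paper’s setup:

```python
import torch
import torch.nn as nn

# A tiny CNN with random weights stands in for a trained model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
model.eval()

image = torch.rand(1, 1, 28, 28)        # stand-in for one input image
target = model(image).argmax(dim=1).item()

# 1) Backpropagation-based: one backward pass yields a gradient saliency
#    map -- fast, since it reuses the network's own machinery.
image.requires_grad_(True)
model(image)[0, target].backward()
saliency = image.grad.abs().squeeze()   # high value = influential pixel

# 2) Perturbation-based: occlude patches and track how the score changes --
#    slower (one forward pass per patch) but probes influence directly.
with torch.no_grad():
    base = model(image.detach())[0, target].item()
    heat = torch.zeros(28, 28)
    for y in range(0, 28, 7):
        for x in range(0, 28, 7):
            occluded = image.detach().clone()
            occluded[..., y:y+7, x:x+7] = 0.0
            score = model(occluded)[0, target].item()
            heat[y:y+7, x:x+7] = base - score   # big drop = important patch
```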

There is a lot of potential for widespread application of SISE, the team’s new algorithm. The problem and intent of the particular scenario will always require adjustments to the algorithm, but the heat maps, or ‘explanation maps’, it produces could be more easily interpreted by, for example, a medical professional.
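Continuing the previous sketch, an explanation map of the kind described here can be rendered by overlaying the perturbation heat map on the input. Again, this is purely illustrative, not SISE itself:

```python
import matplotlib.pyplot as plt

# Warm regions are the patches whose occlusion most reduced the score,
# i.e. the regions the model leaned on for this prediction.
plt.imshow(image.detach().squeeze(), cmap="gray")
plt.imshow(heat, cmap="jet", alpha=0.5)
plt.title("explanation map (illustrative)")
plt.savefig("explanation_map.png")
```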

LG’s goal in partnering with the University of Toronto is to become a world leader in AI innovation. This first achievement in XAI speaks to the company’s ongoing efforts to use AI to enhance customer satisfaction in areas such as the functionality of LG products, manufacturing innovation, supply chain management, and the efficiency of material discovery.

When both sets of researchers come to the table with their respective points of view, it can often accelerate problem-solving. It is invaluable for graduate students to be exposed to this process.

While it was a challenge for the team to meet the aggressive accuracy and run-time targets within the year-long project—all while juggling Toronto/Seoul time zones and working under COVID-19 constraints—Sudhakar says the opportunity to generate a practical solution for a world-renowned manufacturer was well worth the effort.
