
Evolving Deep Learning: The Implications of Kyndi and Its Explainable AI Technology

Source: news18.com

Deep learning is everywhere. The catchphrase that powers today's world of technology has immense implications, from teaching robots how to bluff to learning complex musical construction and automating human-level language processing. In fact, artificial intelligence and machine learning are to our generation what the silicon chip was back in the '70s. However, while all that is great, there is a blind spot that Ryan Welsh spotted, one that has held deep learning back from its full potential.

Explaining the advanced

It is this blind spot that gave birth to Kyndi, and to an evolved deep learning model that Welsh and his team call 'Explainable AI'. The technology, as Welsh explains in a conversation with News18, is about taking deep learning and AI algorithms from the 'data' stage to the 'knowledge' stage. The difference is actually very simple. As Welsh tells us, “Statistical machine learning techniques are good at learning from data, but are not very good at reasoning. Knowledge-based AI approaches are good at reasoning, but cannot learn from data. Our Explainable AI software combines the two, so you have a system that is good at learning from data, and also good at reasoning. Thus, you get superior data efficiency, generalisation, and explainability compared to just using deep learning.”
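Kyndi has not published the internals of its software, but the general division of labour Welsh describes, a statistical component that learns from data feeding a symbolic component that reasons over explicit rules, can be sketched in a few lines. Everything below (the toy classifier, the rules, the names) is invented for illustration and is not Kyndi's implementation.

```python
# A toy illustration of the pattern Welsh describes: a statistical component
# that learns from data, feeding a symbolic component that reasons over rules.
# All names and rules here are hypothetical; this is not Kyndi's API.

from collections import Counter

# --- Statistical layer: learns label/word associations from examples ---
class NaiveTextClassifier:
    def __init__(self):
        self.word_counts = {}  # label -> Counter of words seen under that label

    def train(self, examples):
        for text, label in examples:
            self.word_counts.setdefault(label, Counter()).update(text.lower().split())

    def predict(self, text):
        words = text.lower().split()
        scores = {
            label: sum(counts[w] for w in words)
            for label, counts in self.word_counts.items()
        }
        return max(scores, key=scores.get)

# --- Symbolic layer: hand-written rules that reason over predicted labels ---
RULES = [
    # (premise label, conclusion, human-readable justification)
    ("complaint", "escalate", "complaints are routed to a human reviewer"),
    ("inquiry", "auto-reply", "inquiries receive a templated response"),
]

def reason(label):
    for premise, conclusion, why in RULES:
        if label == premise:
            return conclusion, f"label '{label}' matched rule: {why}"
    return "hold", f"no rule covers label '{label}'"

if __name__ == "__main__":
    clf = NaiveTextClassifier()
    clf.train([
        ("the product broke and I want a refund", "complaint"),
        ("what are your opening hours", "inquiry"),
    ])
    label = clf.predict("my product broke after one day")  # learned from data
    action, explanation = reason(label)                    # reasoned via rules
    print(label, "->", action, "|", explanation)           # explainable output
```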

This combination of learning and reasoning can have significant implications in fields such as legal affairs, market research, business analysis and development, education, insurance, and so on. Where Kyndi's approach differs is that it takes the abilities of a deep learning model and turns them into a higher level of information processing. The best example of this can be found in the impact that Explainable AI can have on a business analyst's role. They call it Intelligent Process Automation, or IPA.

What it does, and who it is for

As Welsh explains, “For IPA, the user is the business analyst that has to read, analyze, and synthesize data. The beneficiary of the IPA process is the manager or VP level employee who uses the output of the business analyst to track performance/progress of a business process. That is, the manual process of being an analyst (e.g., reading, analyzing, and synthesizing data) is automated.”

In fact, Welsh revealed that the advanced cognitive abilities of Explainable AI can also be applied to the niche area of automating writing, or Natural Language Generation (NLG) in technical terms. He says, “It can be used for extractive and abstractive NLG, although we do not use it for that yet. We currently use it for natural language understanding and reading comprehension, which is the opposite of NLG. NLG is taking an idea and generating the appropriate text to convey that idea, whereas NLU/reading comprehension is about taking the text and trying to understand what idea is being conveyed.”
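The distinction Welsh draws is easy to make concrete. In the hypothetical sketch below, understand() maps text to a structured 'idea' (the NLU/reading-comprehension direction) and generate() maps an idea back to text (the NLG direction); the schema and pattern are invented purely for illustration.

```python
# A minimal sketch of the two directions Welsh contrasts: NLU maps text to a
# structured idea, NLG maps a structured idea back to text. The 'idea' schema
# here is invented to make the two directions concrete.

import re

def understand(text):
    """NLU / reading comprehension: text -> structured idea."""
    match = re.search(r"revenue (rose|fell) by (\d+)%", text)
    if match:
        return {"metric": "revenue",
                "direction": match.group(1),
                "change_pct": int(match.group(2))}
    return None

def generate(idea):
    """NLG: structured idea -> text (the opposite direction)."""
    return f"{idea['metric'].capitalize()} {idea['direction']} by {idea['change_pct']}%."

idea = understand("Last quarter, revenue rose by 12% across all regions.")
print(idea)            # {'metric': 'revenue', 'direction': 'rose', 'change_pct': 12}
print(generate(idea))  # Revenue rose by 12%.
```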

In effect, think of this as the technology that replaces your business development analyst, who so far used deep learning tools to analyse company and market data and present you with structured data sets complete with directly implementable action points. Instead, Kyndi's Explainable AI will do this, while also processing data that has so far been left behind. As Welsh states, “We build systems that read. Unstructured data is not in a format, so it has to be transformed into that format or it has to be read by a human. We’ve built machines that can read, so we use them instead of humans. So, instead of leveraging only the 20 percent of your data that is structured, you can now leverage 100 percent, with the other 80 percent being unstructured.”
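What “machines that can read” means in practice is turning free text into the structured records that analysis tools expect. The sketch below stands in for that step with a simple regular expression; a real system would use trained language understanding, and the report text and field names here are entirely hypothetical.

```python
# A hypothetical illustration of reading unstructured text into structured
# records, the step Welsh says unlocks the other 80 percent of a company's
# data. A regex stands in for a trained reading-comprehension model.

import re

REPORT = """
Acme Corp signed a 3-year contract worth $1.2M with Globex.
Initech renewed its support agreement worth $300K with Umbrella.
"""

PATTERN = re.compile(
    r"(?P<vendor>\w+(?: \w+)?) (?:signed|renewed) .*? worth \$(?P<value>[\d.]+[MK]) with (?P<client>\w+)"
)

# Each match becomes a structured record an analyst's tooling can consume.
records = [m.groupdict() for m in PATTERN.finditer(REPORT)]
for r in records:
    print(r)
# {'vendor': 'Acme Corp', 'value': '1.2M', 'client': 'Globex'}
# {'vendor': 'Initech', 'value': '300K', 'client': 'Umbrella'}
```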

Thinking responsibly

However, with such power comes responsibility. This brings us to two very important issues that need to be addressed for Explainable AI: dealing with bias, and setting legal precedent. The way Welsh sees it, the very model of Explainable AI actually helps resolve bias that may be created as a result of deep learning algorithms. “If a deep learning system reviewed 100 resumés and recommended 10 for interviews, the user cannot ask the system why it chose those 10. Whereas if a human chose 10, the user could ask the human why they chose the 10, and the human would give an explanation. If the human said they chose the 10 because they are all white males, then the user can judge for themselves whether or not that is the criteria they want to filter for. Most people will conclude that to be sexist, and won’t use the results. But with a deep learning system and no explanation, you have no way to understand why the system generated the results,” he says. It is this that Explainable AI, which essentially explains to you the methodology used, aims to change.
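Welsh's resumé example can be made concrete with a toy screening step that returns its reasoning alongside each verdict, so a human can audit the criteria instead of trusting a black box. The fields and thresholds below are invented for illustration.

```python
# A hedged sketch of explainable screening: every recommendation carries the
# human-readable reasons behind it, unlike an unexplained black-box ranking.
# Candidate fields and the two criteria are hypothetical.

candidates = [
    {"name": "A. Rivera", "years_exp": 7, "has_certification": True},
    {"name": "B. Chen",   "years_exp": 2, "has_certification": True},
    {"name": "C. Okafor", "years_exp": 9, "has_certification": False},
]

def screen(candidate):
    """Return (recommended, list of human-readable reasons)."""
    reasons = []
    if candidate["years_exp"] >= 5:
        reasons.append(f"{candidate['years_exp']} years' experience (>= 5 required)")
    if candidate["has_certification"]:
        reasons.append("holds the required certification")
    return len(reasons) == 2, reasons

for c in candidates:
    ok, why = screen(c)
    verdict = "recommend" if ok else "reject"
    # The user can inspect the criteria and decide whether they are acceptable.
    print(f"{c['name']}: {verdict} -- {'; '.join(why) or 'no criteria met'}")
```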

As for setting precedent through technology, Welsh says, “The system is presenting outputs to users who then use that information to make decisions. Thus, it will not make a decision that sets precedent; rather, the human always makes the decision, and the system is just an amplifier of their productivity. The reason you give explanations is so the user can evaluate the output of the system and decide for themselves if it aligns with their beliefs and should be considered in their final decision.” It is important to maintain this distinction, for handing machines the ability to judge a legal trial would require a whole world of regulatory upheaval, along with additional technologies to guard against bias.

The next frontier

Kyndi’s Explainable AI model holds enormous potential to change the way advanced technology is implemented today. While today’s technology helps process the data, with the rest of the process either human-driven or preset through human-defined conditions, Explainable AI aims to take the next step, wherein the human-level effort lies only in acting on the knowledge. This can, for instance, drastically reduce medical insurance processing times, upgrade the quality of home-based assisted schooling, trim inefficiencies in big corporate houses, and so on.

In the long run, Welsh hints at the possibility of Explainable AI also being used to create written documents through machine-generated language, or to generally improve the quality of deep learning techniques used in areas such as fact-checking misinformation campaigns spread through social media. As Welsh states, “It is a knowledge revolution.”
