Source: machinedesign.com
Artificial Intelligence (AI) gets plenty of attention these days, but one researcher at the U.S. Naval Research Laboratory believes one particular AI technique might be getting a little too much.
“People have focused on an area of machine learning—deep learning (aka deep networks) — and less so on the variety of other artificial intelligence techniques,” says Ranjeev Mittu, head of NRL’s Information Management and Decision Architectures Branch. He has been working on AI for more than 20 years. “The biggest limitation of deep networks is that we still lack a complete understanding of how these networks arrive at solutions.”
Deep learning is a machine learning technique that can recognize patterns, such as identifying a collection of pixels as an image of a dog. The technique involves layering neurons together, with each layer devoted to learning a different level of abstraction.
In the dog image example, the lower layers of the neural network learn primitive details such as pixel values. The next set attempts to learn edges; higher layers learn a combination of edges such as those that form a nose. With enough layers, these networks can recognize images nearly as well as humans.
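To make the layering concrete, here is a minimal sketch of such a network in PyTorch. The layer sizes, class count, and the comments mapping layers to pixels, edges, and parts are illustrative assumptions, not details from the article or from NRL's work.

```python
# Minimal sketch of a layered convolutional network, assuming PyTorch.
# Sizes and the two-class (e.g., dog vs. rabbit) setup are illustrative.
import torch
import torch.nn as nn

class TinyImageNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Early layers respond to primitive, pixel-level patterns.
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            # Middle layers combine those responses into edges and textures.
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            # Deeper layers combine edges into parts such as a nose or an ear.
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                 # hierarchical feature extraction
        return self.classifier(h.flatten(1)) # map features to class scores

model = TinyImageNet()
logits = model(torch.randn(1, 3, 64, 64))    # one 64x64 RGB image
print(logits.shape)                          # torch.Size([1, 2])
```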
But deep learning systems can be fooled easily just by changing a small number of pixels, according to Mittu. “You can have adversarial ‘attacks’ where, once you’ve created a model that recognizes dogs by showing it millions of pictures of dogs, making changes to a small number of pixels may cause the network to misclassify an image as a rabbit, for example.”
The biggest flaw in this machine learning technique, according to Mittu, is that building these networks remains largely an art, which means there are few scientific methods for understanding when they will fail.
“Although deep learning has been highly successful, it is also currently limited because there is little visibility into its decision rationale. Until we truly reach a point where this technique becomes fully ‘explainable,’ it cannot inform humans as to how it arrives at a solution, or why it failed. We have to realize that deep networks are just one tool in the AI toolbox.”
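Mittu does not name a specific attack, but one widely known class of adversarial methods, the fast gradient sign method (FGSM), nudges pixel values in the direction that most increases the model’s loss. The sketch below assumes PyTorch and inputs scaled to [0, 1]; the epsilon value and loss choice are illustrative.

```python
# Sketch of a gradient-based adversarial perturbation (FGSM), assuming PyTorch.
# `model` could be a classifier like the one sketched above.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of a batched `image` nudged to raise the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Shift each pixel a tiny step in the direction that hurts the model most.
    adversarial = image + epsilon * image.grad.sign()
    # Assumes pixel values live in [0, 1]; keep the result a valid image.
    return adversarial.clamp(0.0, 1.0).detach()
```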
He stresses that humans have to stay in the loop. “Imagine you have an automated threat-detection system on the bridge of your ship and it picks up a small object on the horizon,” Mittu says. “The deep network classification may indicate it is a fast attack craft coming at you, but you know a small set of uncertain pixels can mislead the algorithm. Do you believe it?
“A human will have to examine it further,” he continues. “There may always need to be a human in the loop for high-risk situations. There could be a high degree of uncertainty, and the challenge is to increase the classification accuracy while keeping the false alarm rate low. It is sometimes difficult to strike the perfect balance.”
When it comes to machine learning, the key factor, simply put, is data.
Consider one of Mittu’s previous projects: analyzing commercial shipping vessel movements around the world. The goal was to have machine learning discern patterns in vessel traffic to identify ships involved in illicit activities. It proved a difficult problem to model and understand.
“We cannot have a global model because the behaviors differ for vessel classes, owners, and other characteristics,” he explains. “It is even different seasonally, because of sea state and weather patterns.”
But the bigger problem, Mittu found, was the possibility of mistakenly using poor-quality data.
“Ships transmit their location and other information, just like aircraft. But what they transmit can be spoofed,” Mittu says. “You don’t know if it is good or bad information. It is like changing those few pixels on the dog image that cause the system to fail.”
Missing data is another issue. Imagine a case in which you must move large numbers of people and materials on a regular basis to sustain military operations, and you’re relying on incomplete data to predict how you might act more efficiently.
“The difficulty comes when you start to train machine learning algorithms on poor quality data,” Mittu says. “Machine learning becomes unreliable at some point, and operators will not trust the algorithms’ outcomes.”
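As a rough illustration of that failure mode, the hypothetical scikit-learn experiment below trains the same classifier on progressively noisier labels and watches test accuracy erode. The synthetic dataset and noise rates are assumptions for demonstration only.

```python
# Sketch of how poor-quality (here, mislabeled) training data degrades a model,
# assuming scikit-learn and NumPy; data and noise levels are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise      # corrupt a fraction of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```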
Mittu’s team continues to pursue AI innovations and advocates an interdisciplinary approach to employing AI systems to solve complex problems.
“There are many ways to improve predictive capabilities, but probably the best-of-breed will take a holistic approach and employ several AI techniques and strategically include the human decision-maker,” he says.
“Aggregating various techniques (similar to ‘boosting’), which may ‘weight’ algorithms differently, could provide a better answer. By employing combinations of AI techniques, the resulting system may also be more robust to poor data quality.”
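A minimal sketch of that idea, assuming scikit-learn: a soft-voting ensemble that averages the probability estimates of several different algorithms and trusts some more heavily than others. The particular models, weights, and synthetic data are illustrative, not Mittu’s.

```python
# Sketch of aggregating several techniques with different weights, in the
# spirit of the 'boosting'-like combination described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

ensemble = VotingClassifier(
    estimators=[
        ("logistic", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("bayes", GaussianNB()),
    ],
    voting="soft",      # average predicted probabilities across models...
    weights=[1, 2, 1],  # ...weighting some algorithms more than others
)

X, y = make_classification(n_samples=500, random_state=1)  # stand-in data
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```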
One area Mittu is excited about is recommender systems. He says most people are familiar with these systems, which are used in search engines and entertainment applications such as Netflix.
“Think of a military command-and-control system where users need good information to make good decisions,” he says. “By looking at what the user is doing in the system within some context, can we anticipate what the user might do next and infer what data they might need?”
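A toy version of that idea, sketched as item-based collaborative filtering in NumPy: infer what a user might need next from the co-usage patterns of similar data products. The interaction matrix and the example products are hypothetical; a real command-and-control system would also factor in context and timing.

```python
# Sketch of a simple recommender: suggest the data product a user is most
# likely to need next, based on what similar items other users accessed.
import numpy as np

# Rows: users. Columns: data products (e.g., weather, tracks, imagery, intel).
interactions = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
], dtype=float)

# Item-item cosine similarity computed from co-usage patterns.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

user = interactions[2]          # what this user has already accessed
scores = sim @ user             # items similar to the user's history
scores[user > 0] = -np.inf      # skip items the user already has
print("recommend item:", int(np.argmax(scores)))
```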
Although the field of AI offers almost limitless potential for innovative solutions to today’s problems, Mittu notes that researchers still have many years of work ahead of them.