Source – diginomica.com
Deep learning, the branch of AI that uses artificial neural networks to build prediction and pattern matching models from large datasets relevant to a particular application, is having a sizable impact on both consumer and enterprise software.
Whether enabling home appliances to understand and respond to voice commands or identifying hidden patterns shared by all malware, deep learning algorithms allow machines to mimic, and even improve upon, human cognition in ways that are impossible with imperative or declarative programming.
Unfortunately, developing deep learning software isn’t easy, since models must be customized for a particular use. Indeed, developing a model is more like tailoring a custom-fitted suit than buying off-the-rack clothing in standard sizes.
Deep learning encompasses a large category of software rather than a general-purpose solution, describing a broad range of algorithms and network types, each better suited to particular types of problems and data than others. For example, convolutional neural networks (CNNs), loosely modeled on the synaptic connections between neurons in the brain, are extremely effective at image recognition, exceeding 90% accuracy at identifying objects in the standard image datasets most developers use.
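To make the distinction concrete, here is a minimal sketch of the kind of small CNN a developer might train on a standard benchmark; the framework (Keras) and the layer sizes are illustrative assumptions, not anything from the products or research discussed here:

```python
# A minimal, illustrative CNN for 32x32 RGB images (e.g. CIFAR-10).
# Layer sizes and the choice of Keras are assumptions for this sketch.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),                    # downsample feature maps
    layers.Conv2D(64, 3, activation="relu"),  # learn higher-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # scores for 10 object classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```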
A CNN, however, would be ineffective at learning to make the decisions required to play a strategy game like chess or Go. Instead, models based on reinforcement learning, which reward correct or good choices and penalize bad ones, are better suited to such complicated decision-making tasks.
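The reward-and-penalty idea can be sketched in a few lines. The tabular Q-learning below is a toy version, far simpler than the systems that actually play chess or Go, and every name in it is hypothetical:

```python
# Toy tabular Q-learning: positive rewards pull a state-action value up,
# penalties (negative rewards) pull it down. Purely illustrative.
import random
from collections import defaultdict

Q = defaultdict(float)              # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(state, actions):
    if random.random() < epsilon:   # occasionally explore a random move
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])  # otherwise exploit

def update(state, action, reward, next_state, next_actions):
    best_next = max(Q[(next_state, a)] for a in next_actions)
    # Nudge the estimate toward the observed reward plus discounted future value.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```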
The most advanced applications of AI, such as robots used for multi-purpose manufacturing or logistics tasks, will require several deep learning models working together: a CNN for vision and object identification, recurrent neural networks (RNNs) for speech recognition, and reinforcement learning for task completion.
The diversity of models, each requiring specialized AI development knowledge and domain expertise, limits the use of deep learning to organizations with the R&D budget and time horizon needed to produce software tailored to a particular problem.
We are still decades from Star Trek-style artificial general intelligence that could pass the Turing test or outperform humans on a gamut of unrelated cognitive tasks.
In the meantime, a promising compromise is software that automates model selection and tuning based on the problem and the available data, choosing the best options from a portfolio of deep learning packages, each designed for a different application.
That’s the promise of a new category of AI orchestration software like Conductor from Veritone, self-optimizing AI engines such as SigOpt (which I discussed in this column), and so-called AutoML systems such as those that competed in the ChaLearn Automatic Machine Learning Challenge.
Meta machine learning: using AI to select AI algorithms
Simply put, “machine learning remains a relatively ‘hard’ problem,” writes Stanford AI researcher S. Zayd Enam. As he points out (emphasis added),
An aspect of this difficulty involves building an intuition for what tool should be leveraged to solve a problem. This requires being aware of available algorithms and models and the trade-offs and constraints of each one. By itself this skill is learned through exposure to these models (classes, textbooks and papers) but even more so by attempting to implement and test out these models yourself.
Enam also notes that debugging and optimizing ML is “exponentially harder” than conventional software, both in the difficulty of figuring out what went wrong and in the time required to train and execute the models.
Here, we would add that the black-box nature of deep learning networks, which makes it all but impossible to reverse-engineer how they arrived at a decision, only compounds the difficulty.
The emerging field of AutoML, which broadly consists of algorithm selection, hyperparameter tuning, iterative modeling, and model assessment, builds a meta-layer of abstraction on top of ML that can be used to automate model development and optimization.
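A toy version of that loop, using scikit-learn as an assumed stand-in (real AutoML systems are far more sophisticated), might try several candidate algorithms, tune each one, and keep the best performer:

```python
# Sketch of the AutoML loop named above: algorithm selection, hyperparameter
# tuning, iterative modeling, and model assessment. scikit-learn is an
# assumption made for this illustration.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

candidates = [                                    # algorithm selection
    (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    (RandomForestClassifier(), {"n_estimators": [50, 200], "max_depth": [None, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # hyperparameter tuning
    search.fit(X, y)                              # iterative modeling
    if search.best_score_ > best_score:           # model assessment
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Best model: {best_model} (cross-validated accuracy {best_score:.3f})")
```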
Facebook’s FBLearner Flow is one such attempt at building a general-purpose platform that can automatically improve ML accuracy. As described on the Facebook Code blog,
Many machine learning algorithms have numerous hyperparameters that can be optimized. At Facebook’s scale, a 1 percent improvement in accuracy for many models can have a meaningful impact on people’s experiences. So with Flow, we built support for large-scale parameter sweeps and other AutoML features that leverage idle cycles to further improve these models. We are investing further in this area to help scale Facebook’s AI/ML experts across many products in Facebook.
Automated algorithm selection using an ecosystem of AI models
Chad Steelberg, a serial entrepreneur whose latest startup is Veritone, saw deep learning algorithm selection as a software opportunity when he realized that the types of problems he was tackling in audio/video content categorization couldn’t be solved with a single AI model.
According to Tyler Schulze, head of Veritone’s budding partner ecosystem, there are already over 5,000 commercial machine-learning algorithms targeting increasingly narrow niches. As he points out in this blog,
The transcription segment includes general-purpose solutions for converting speech-to-text, alongside algorithms that are designed for much more narrow uses, such as taking dictation of Spanish phrases or medical terms. All these engines get stamped with the transcription moniker, despite their radical variances in capabilities.
The premise behind Veritone is that the accuracy and efficacy of deep learning can be significantly improved by mixing and matching various algorithms for a particular problem.
As Steelberg explained in an interview, imagine that you are developing a general-purpose transcription engine that handles ordinary conversation quite well but stumbles on specialized jargon such as medical or legal terminology. Suppose the system could call for help when it ran into words it couldn’t transcribe but could identify as belonging to a particular class of knowledge: medicine, pharmacology, astrophysics, corporate finance, whatever. By using an ensemble of models, the system’s overall accuracy would be significantly better. That’s the theory behind Veritone Conductor.
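In code, the routing idea Steelberg describes might look like the following sketch. Every name here (the engines, the confidence threshold, the domain labels) is hypothetical; Veritone has not published Conductor’s internals in this form:

```python
# Hypothetical sketch of ensemble routing: a general-purpose engine handles
# most input, and low-confidence segments tagged with a knowledge domain are
# re-run through a specialist engine. Names and threshold are assumptions.
def transcribe(audio_segments, general_engine, specialists, threshold=0.8):
    results = []
    for segment in audio_segments:
        text, confidence, domain = general_engine(segment)
        if confidence < threshold and domain in specialists:
            # "Call for help": hand the segment to the domain specialist.
            text, confidence, domain = specialists[domain](segment)
        results.append(text)
    return " ".join(results)
```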
According to Steelberg, “Conductor chooses the best engines for each job, combining them where needed.” In tests on natural language processing, he says the best general-purpose engine on its platform achieves 75% accuracy.
By combining multiple language engines and automatically selecting the ‘best’ one based on the input data, Conductor improves overall system accuracy by 7 percentage points, a significant achievement in a field where, as Facebook notes, even a 1% increase is meaningful.
Veritone currently has about 70 ML engines in its portfolio, covering seven categories and targeting five industries or problem areas: media/advertising, politics, legal, law enforcement, and government agencies/intelligence.
Schulze’s job is to expand the ecosystem by encouraging developers to integrate their models using Veritone’s APIs, contribute the required metadata describing each model’s function and data requirements, and build easily deployed container images that can be used by the Conductor platform.
My take
As the number of ML, and particularly deep learning, models explodes, automation systems will be required to expand usage beyond the relatively small number of organizations with the requisite AI and data science expertise to create and tune them.
Moving AI from an artisanal phase of handcrafted models to that of an automated production line with reusable ML ‘widgets’ will enable enterprises of all sizes and industries to exploit the power of AI to improve their products, services and business processes.
While cloud services like Azure Cognitive Services, Google Cloud ML Engine, and similar offerings from AWS and IBM are democratizing AI infrastructure for specialists, they do little for the typical business application developer or systems analyst.
These users instead need access to packaged AI algorithms and models that can be consumed, combined, and optimized like programmable SaaS applications. The emerging field of AutoML, along with systems like Veritone Conductor, is a promising step towards broadening the use and effectiveness of ML and deep learning software.