Source: containerjournal.com
Domino Data Lab is making the case for a multi-cloud approach to building and deploying applications infused with machine learning algorithms now that its platform runs on Kubernetes.
Company CEO Nick Elprin says that as organizations employ machine learning algorithms to build various types of applications, many of them don’t appreciate the extent to which relying on proprietary services locks them into a “walled garden” that runs only on a specific cloud computing platform. Many of those same organizations may even wake up one morning to discover they are suddenly competing with Amazon, Google or Microsoft, all of which are rapidly expanding the services they provide based on machine learning algorithms, he notes.
By opting to build machine learning models on a platform provided by Domino Data Lab, organizations can deploy those models on any public cloud or on-premises IT environment as they see fit, Elprin says. Longer term, Domino Data Lab is betting most applications employing machine learning algorithms are also likely to span multiple clouds, he adds.
The Domino Data Science Platform provides what amounts to an opinionated workbench for building models based on machine learning algorithms. That approach provides data scientists with the equivalent of a set of DevOps best practices for infusing machine learning models into their applications in a way that makes it easier to explain how processes are being automated, says Elprin.
Most machine learning models are being constructed using containers because they allow data scientists to build them in a modular fashion. Otherwise, the models being constructed would be too unwieldy, given the massive amounts of data required. As such, models constructed using machine learning algorithms are natural candidates to be deployed on Kubernetes clusters.
Adding support for Kubernetes to the Domino platform also will make it easier for organizations to bridge the current divide between how machine learning models are constructed and trained and how applications are built and deployed using DevOps best practices. Most organizations underestimate how frequently machine learning models will need to be updated as new data sources become available. In addition, machine learning models are liable to drift over time or as business conditions change. Organizations also underestimate how challenging it can be to insert updated machine learning models into applications that are already running in a production environment.
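The drift described above is typically caught by comparing the distribution of a model's scores in production against the distribution seen at training time. A minimal sketch of one common check, the Population Stability Index, is below; the function name, thresholds and sample data are illustrative and not part of Domino's platform:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI) between two score samples.

    Rule of thumb (illustrative): PSI < 0.1 means little shift,
    0.1-0.25 moderate shift, > 0.25 a shift worth retraining for.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero width if all values equal

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets to avoid log(0) and division by zero.
        return [(c + 1e-6) / (len(values) + bins * 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Scores logged at training time vs. scores seen in production today.
baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8]
live = [0.4, 0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9]

psi = population_stability_index(baseline, live)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift, consider retraining")
```

A check like this can run as a scheduled job next to the serving container, turning "models drift over time" from an anecdote into an alert that triggers retraining and redeployment.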
Of course, there’s a world of difference between infusing machine learning algorithms into an application and true artificial intelligence (AI). What most data science teams are doing is training machine learning algorithms to automate a very narrow range of processes by enabling machines to learn how a specific process works. That’s a subset of AI that is a far cry from building a system capable of learning how a set of processes works on its own. Given all the hype, however, most business leaders are keen to fund any AI project regardless of any nuance in the nomenclature being applied. As far as IT leaders are concerned, though, the next big challenge isn’t necessarily AI so much as managing all the different machine learning models they will be asked to embed within almost every application they deploy.