Source: techcrunch.com
Paperspace has always had a firm focus on data science teams building machine learning models, offering them access to GPUs in the cloud, but the company has broader ambitions than pure infrastructure. Today it announced a new set of tools to help those teams hand models off to developers and operations more smoothly in multi-cloud or hybrid environments.
Co-founder and CEO Dillon Erb says this is an attempt to provide a full tool set for data scientists and developers, beyond pure GPU power to test and build the models. “Machine learning teams do a lot of GPU work — and as you know, we’ve been working with GPUs for a number of years now, and that’s one of our specialties. Now what we’re doing is taking a kind of agile methodology approach or CI/CD (continuous integration/continuous delivery) for machine learning, and using that to solve much larger scale [machine learning] problems,” Erb said.
As the company describes it, “The new release introduces GradientCI, the industry’s first comprehensive CI/CD engine for building, training and deploying deep learning models…” Erb says the goal is to give teams a way to take a model built on top of Paperspace and put it to work in the company faster. Teams often lose time after a model is built because there is no good way to pass it on to the DevOps team to use in applications.
“GradientCI lets you do things like set up staging, development and production environments, and provide a common interface between your data team and your DevOps team. This is about taking software development best practices, and applying that to this relatively new universe of training and deploying machine learning models,” Erb explained.
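The announcement doesn’t show GradientCI’s actual configuration or API, but the general idea — treating a model the way CI treats code, with a train stage, a test gate and a promotion step through staging and production — can be sketched roughly as follows. All names and the toy “model” here are hypothetical, purely to illustrate the workflow:

```python
# Hypothetical sketch of a CI/CD-style pipeline for a model (not Paperspace's API).
# Each stage is a plain function so a CI runner can call them in order and fail fast.
import json
import pathlib
import pickle
import random

ARTIFACTS = pathlib.Path("artifacts")


def train(version: str, seed: int = 42) -> pathlib.Path:
    """'Build/train' stage: fit something deterministic and save a versioned artifact."""
    random.seed(seed)  # pin the seed so the run is reproducible
    model = {"weights": [random.random() for _ in range(4)], "seed": seed}
    out = ARTIFACTS / version
    out.mkdir(parents=True, exist_ok=True)
    (out / "model.pkl").write_bytes(pickle.dumps(model))
    (out / "metadata.json").write_text(json.dumps({"version": version, "seed": seed}))
    return out


def evaluate(artifact_dir: pathlib.Path, threshold: float = 0.5) -> bool:
    """'Test' stage: gate promotion on a metric, the way CI gates a merge on tests."""
    model = pickle.loads((artifact_dir / "model.pkl").read_bytes())
    score = max(model["weights"])  # stand-in metric, not meaningful
    return score >= threshold


def promote(artifact_dir: pathlib.Path, environment: str) -> None:
    """'Deploy' stage: record which version each environment (staging, production) runs."""
    pointer = ARTIFACTS / f"{environment}.json"
    pointer.write_text(json.dumps({"current": artifact_dir.name}))


if __name__ == "__main__":
    artifact = train(version="v1")
    if evaluate(artifact):
        promote(artifact, "staging")     # staging first,
        promote(artifact, "production")  # then production once it checks out
    else:
        raise SystemExit("model failed evaluation; not promoting")
```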
He says that until now there hasn’t been a good way to do this, which has led development teams to do things like completely rewrite the model in Java, or whatever language their production environment uses, to make it work inside their applications. “It’s been a really clunky handoff, and where we do a really good job is adding things like version control and reproducibility and a common kind of syntax, so that the traditional DevOps guys can actually pick up and deploy the machine learning tool stacks without being a deep learning expert,” Erb explained.
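What that handoff looks like in practice isn’t spelled out in the announcement, but the alternative to rewriting a model in the production language is usually a thin service wrapper around the versioned artifact, which DevOps can then deploy like any other application. A minimal sketch, assuming the toy artifact from the pipeline example above and using Flask purely as an illustration:

```python
# Hypothetical sketch of the handoff, not GradientCI itself: the data team exports a
# versioned artifact, and DevOps runs a thin HTTP wrapper around it instead of
# reimplementing the model in the production language.
import pathlib
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load whichever version the pipeline's "production" pointer names (hard-coded here).
MODEL_PATH = pathlib.Path("artifacts/v1/model.pkl")
model = pickle.loads(MODEL_PATH.read_bytes())


@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    # Stand-in "inference": a dot product with the stored weights.
    score = sum(w * x for w, x in zip(model["weights"], features))
    return jsonify({"score": score, "model_version": MODEL_PATH.parent.name})


if __name__ == "__main__":
    app.run(port=8080)
```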