
4 Things to Know About Using Kubernetes for AI

Source: rtinsights.com

Many companies are starting to embed continuous intelligence (CI) using artificial intelligence (AI) and machine learning (ML) into business processes. And the trend is expected to continue. Gartner notes that by 2022, more than half of major new business systems will incorporate CI that uses real-time context data to improve decisions. This will require new thinking about how such applications are developed, staged, deployed, and integrated with other applications. One technology that is likely to play a significant role in such efforts is Kubernetes. Here are four things to know about using Kubernetes for AI.

1) What are containers, and where does Kubernetes fit in?

Developing, deploying, and maintaining AI applications requires a lot of ongoing work. Much as virtual machines and virtualization simplified the management of compute workloads, containers are helping businesses more easily create, deploy, and scale cloud-native AI applications.

Containers offer a way to bundle and run processes and applications. They are portable and easy to scale, and they can be used throughout an application’s lifecycle, from development to test to production. They also allow large applications to be broken into smaller components and presented to other applications as microservices.

Just as virtual machines rely on a hypervisor, containers can make use of Kubernetes, an open-source platform for automating the deployment and management of containerized applications. Specifically, Kubernetes provides service discovery and load balancing, storage orchestration, self-healing, automated rollouts and rollbacks, and more.
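To make those features concrete, here is a minimal, hypothetical sketch of the two Kubernetes objects most applications start with. All names and the container image are illustrative placeholders, not from the original article: the Deployment’s replica count drives self-healing and rolling updates, and the Service gives the replicas a stable, load-balanced address.

```yaml
# Sketch: a Deployment that keeps three replicas of a hypothetical
# model-serving container running, plus a Service that gives them a
# stable, load-balanced network identity.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: model-api              # placeholder name
spec:
  replicas: 3                  # Kubernetes reschedules pods to keep 3 alive (self-healing)
  selector:
    matchLabels:
      app: model-api
  strategy:
    type: RollingUpdate        # automated rollout; a rollback reverts to the prior version
  template:
    metadata:
      labels:
        app: model-api
    spec:
      containers:
        - name: model-api
          image: registry.example.com/model-api:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: model-api              # other pods reach the replicas via this DNS name (service discovery)
spec:
  selector:
    app: model-api
  ports:
    - port: 80
      targetPort: 8080         # traffic is load-balanced across the replicas
```

Applying a manifest like this with `kubectl apply` hands the desired state to Kubernetes, which then continuously reconciles the running cluster toward it.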

2) How are containers and Kubernetes used for AI?

Many businesses are moving to container-based microservices architectures to develop modern applications, including AI. The result is an explosion in the number of containers to manage and maintain. It is not unusual to find situations where thousands of container instances are deployed daily. Those containers must be managed and scaled over time.  

Hence the need for Kubernetes, which the industry has embraced as the de facto standard for container orchestration. Groups like the Cloud Native Computing Foundation (CNCF), which is backed by Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat, have been Kubernetes proponents for years. For example, in 2017 the foundation launched the Certified Kubernetes Conformance Program, which aims to ensure portability and interoperability across the Kubernetes ecosystem.

What exactly does Kubernetes do? It makes it easier to deploy and manage applications built on a microservices architecture. In particular, it helps businesses manage key aspects of an application’s operations. For instance, Kubernetes can be used to balance an application’s load across an infrastructure; control, monitor, and automatically limit resource consumption; move application instances from one host to another; and more.
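As one concrete example of resource control, a pod spec can declare requests and limits per container. In this hypothetical sketch (all names and values are illustrative), the scheduler uses the requests for placement and the limits cap what the container may actually consume:

```yaml
# Sketch of per-container resource control for an AI training workload.
apiVersion: v1
kind: Pod
metadata:
  name: training-worker        # hypothetical name
spec:
  containers:
    - name: trainer
      image: registry.example.com/trainer:1.0   # hypothetical image
      resources:
        requests:
          cpu: "2"             # scheduler places the pod on a node with 2 CPUs free
          memory: 4Gi
        limits:
          cpu: "4"             # container is throttled beyond 4 CPUs
          memory: 8Gi          # exceeding 8Gi gets the container OOM-killed
```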

3) How is the use of containers and Kubernetes different from other deployment methods?

Containers and Kubernetes are, in a way, the next step in the evolution of application deployment. Just as businesses moved from deployments on physical servers to virtualization and hypervisors, containers and Kubernetes offer a deployment method that is well-suited to the needs of today’s cloud-native, microservices architectures. 

Whereas every virtual machine runs all the components of an application, including its own operating system (OS), on virtualized hardware, containers share the host OS while each has its own filesystem, CPU allocation, memory, and process space. A significant feature is that containers are decoupled from the underlying infrastructure, which makes them portable across clouds and OS distributions.

4) How do instances work together?

A microservices architecture requires a service mesh so that different microservice instances can communicate with each other and work together. This service mesh is essentially an infrastructure layer that lets businesses connect, secure, control, and observe their microservices. At a high level, a service mesh helps reduce the complexity of development and deployment.

The service mesh also lets businesses configure how service instances perform critical actions such as service discovery, load balancing, data encryption, and authentication and authorization.

Today, a leading choice for a service mesh is Istio, an open-source project developed in collaboration by Google, IBM, and Lyft. Istio lets businesses connect, monitor, and secure microservices deployed on-premises, in the cloud, or on orchestration platforms like Kubernetes.
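As an illustration of what mesh-level configuration looks like, the hypothetical Istio sketch below splits traffic between two versions of a service, a common pattern for canary-testing a new model version, without touching application code. The service and subset names are placeholders:

```yaml
# Sketch: route 90% of traffic to v1 of a model service and 10% to a
# canary v2, enforced by the mesh rather than the application.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: model-api
spec:
  hosts:
    - model-api                # the Kubernetes Service name
  http:
    - route:
        - destination:
            host: model-api
            subset: v1
          weight: 90
        - destination:
            host: model-api
            subset: v2         # canary release of the model
          weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: model-api
spec:
  host: model-api
  subsets:
    - name: v1
      labels:
        version: v1            # matches pods labeled version=v1
    - name: v2
      labels:
        version: v2
```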

Kubernetes isn’t the only way to deploy microservices, and Istio isn’t the only service mesh, but many leading firms, like Google and IBM, see the two as increasingly inseparable. Used together, Istio and Kubernetes offer great flexibility, allowing a business to move a microservice to a different host without rewriting the application. In essence, a service mesh automates the work of managing microservices for AI applications.
