You can operate microservices without containers. However, microservices and containers are a dynamic duo, the IT equivalent of milk and cookies.
“Microservices – single-function services built by small teams, independent from other functions, and communicating only through public interfaces – simply make a great match for containers,” as Red Hat technology evangelist Gordon Haff recently noted. “Microservices plus containers represent a shift to delivering applications through modular services that can be reused and rewired to perform new tasks.”
“Containerizing services like messaging, mobile app development and support, and integration lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments,” Haff says.
The end result: Faster and easier development, and less error-prone provisioning and configuration. “That adds up to more productive – and hopefully, less stressed – developers,” Haff notes.
As with any effort like this, you’ll want to do your homework first.
“Containers are a powerful tool for increasing agility, but only if done right,” says Mike Kavis, VP and principal cloud architect at Cloud Technology Partners. “The combination of microservices, containers, and container orchestration engines can allow companies to bring features to market rapidly while still maintaining or even improving quality, reliability, and resiliency,” he adds.
Read on for what you need to know before you start sprinting toward a production environment. Kavis and other experts shared advice on doing it right in our increasingly hybrid IT world.
1. You’ll need some new processes and tools
Your existing processes and toolchain probably aren’t going to cut it when you move to container-based microservices, according to Kavis. So do some real planning.
“Although it is very easy to deploy a container, much thought should be put into the operations of these systems,” he says. That thinking needs to happen before a single container hits production, he adds, or you could have an operational mess on your hands.
“Monitoring and managing a highly distributed and auto-scaling container-based system requires more modern tools than many are accustomed to using in the data center,” Kavis explains.
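To make "modern tooling" a bit more concrete, here is a minimal sketch of a service exposing its own metrics for a scraper such as Prometheus to collect. It assumes the `prometheus_client` Python package is installed; the metric names and the simulated work are illustrative, not drawn from any particular system.

```python
# Minimal sketch: a containerized service exposing metrics for scraping.
# Assumes `pip install prometheus_client`; metric names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_requests_total", "Total requests handled")
LATENCY = Histogram("orders_request_seconds", "Request latency in seconds")

def handle_request():
    REQUESTS.inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request()
```

Instrumenting every service this way is what lets a central monitoring system keep up as containers appear, scale, and disappear.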
Kavis and others point out that this becomes more true – and more complex – as your container-based microservices architecture scales over time. Failing to properly plan ahead means that you’re more likely to, well, fail.
So let’s dig deeper on these best practices, tools, and other tips necessary for success.
2. Orchestration is a must…
“You won’t get very far before you’ll need a cluster scheduler/container orchestration system like Kubernetes to deploy and manage your containerized microservices,” says Nic Grange, CTO at Retriever Communications. Grange notes that you can opt for a hosted version from a public cloud provider or run the tool yourself in a private cloud or on-premises data center, depending on your preferences and constraints. But he advises choosing a cloud-agnostic tool – especially for IT leaders managing hybrid cloud or multi-cloud environments.
“The benefit of choosing a cloud-agnostic system like Kubernetes is that it can run in each of those options [public, private, on-premises] so you won’t lock yourself in,” Grange says.
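As a small illustration of what "deploy and manage" looks like in practice, the sketch below uses the official Kubernetes Python client to list the deployments in a namespace and their replica counts. It assumes `pip install kubernetes` and a kubeconfig pointing at a reachable cluster; the namespace is illustrative.

```python
# Minimal sketch: inspecting containerized microservices via the
# Kubernetes API. Assumes `pip install kubernetes` and a valid kubeconfig.
from kubernetes import client, config

def list_microservices(namespace: str = "default") -> None:
    config.load_kube_config()  # reads ~/.kube/config
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        ready = dep.status.ready_replicas or 0
        desired = dep.spec.replicas or 0
        print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")

if __name__ == "__main__":
    list_microservices()
```

Because the same API runs in public clouds, private clouds, and on-premises clusters, scripts and pipelines like this carry over wherever the workloads land.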
3. …and so is service discovery
Here’s another one just about everyone seems to agree on: Container-based microservices almost immediately introduce the need for service discovery. Hard-coding IP addresses and hostnames? Don’t even think about it.
“Services need to register themselves and look up other services in a dynamic, automated fashion. Hardcoded IPs, server names, and even URLs will break,” says Kevin McGrath, senior CTO architect at Sungard Availability Services. “When a service launches, it should be able to register where it is, what it is, and how to use it. Other services need to be able to query this information as it will change during replacements.”
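To illustrate the register-and-look-up pattern McGrath describes, here is a minimal sketch against a hypothetical REST registry. The `registry.internal` host, endpoints, and payload fields are all illustrative assumptions, not any particular product's API; in practice this role is usually played by the orchestrator's built-in DNS or a dedicated registry such as Consul or etcd.

```python
# Minimal sketch of register-then-discover. The registry host, endpoints,
# and payload fields below are hypothetical, not a specific product's API.
import requests

REGISTRY = "http://registry.internal:8500"  # assumed registry endpoint

def register(name: str, host: str, port: int) -> None:
    """Announce where this service instance lives and how to reach it."""
    payload = {"name": name, "host": host, "port": port, "health": "/healthz"}
    requests.put(f"{REGISTRY}/services/{name}", json=payload, timeout=2)

def discover(name: str) -> str:
    """Look up another service at call time instead of hardcoding it."""
    instance = requests.get(f"{REGISTRY}/services/{name}", timeout=2).json()
    return f"http://{instance['host']}:{instance['port']}"

if __name__ == "__main__":
    register("orders", host="10.0.3.12", port=8080)
    print("billing lives at", discover("billing"))
```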
4. Start with a greenfield project
If it’s your first foray into containers and microservices, start with a brand-new request or project, McGrath advises. This increases the likelihood that you’ll set a high standard for a broader container-based microservices implementation, because you’ll have to make fewer of the “good enough” trade-offs that come with managing legacy, monolithic applications, he says.
“It’s tempting to start chopping up a monolith as the first project out of the gate, but a small greenfield project will provide the latitude to design the microservice without legacy restrictions,” McGrath says. “This will be the target that other projects should aim to become. When starting with a monolith it is easy to make concessions early that then work into every project that follows.”
For more on how your peers are getting started using containers, see our related article, 4 container adoption patterns: What you need to know.
5. One microservice: one container
A fundamental idea underpinning microservices architecture is that each microservice should do one thing and do it exceedingly well; the same one-to-one discipline applies when you put those services in containers.
“The optimal way to scale microservices in containers is to deploy only one service per container,” Kavis says.
Containers are commonly described with adjectives like “lightweight” and “lean” – but you must ensure they stay that way. They’re not “free.”
“Microservices allow you to deliver your application to market in a shorter amount of time, but you’ll still need to provision resources for your microservice to run, including the compute and memory to execute,” says Kong Yang, head geek at SolarWinds. Heeding the 1-to-1 rule helps realize some of the significant potential of using microservices and containers together, he says.
“Because microservices are so short-lived, running them in lightweight and portable containers makes sense since VMs would be over-provisioned for your needs,” Yang says. “In other words, the symbiotic nature of microservices and containers means you can quickly provision infrastructure services, let the microservice run, and then de-provision the container to retire it cleanly.”
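As a concrete, if toy, illustration of the one-service-per-container idea, the sketch below is a complete, single-purpose service using only the Python standard library: it reports order status and does nothing else, so the container that wraps it can stay small and be retired cleanly. The endpoint, port, and hard-coded response are illustrative.

```python
# Minimal sketch: a single-purpose service meant to run one-per-container.
# Uses only the standard library; endpoint, port, and data are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderStatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/orders/"):
            body = json.dumps({"order": self.path.split("/")[-1],
                               "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # One process, one responsibility: the container's entrypoint runs this
    # and nothing else.
    HTTPServer(("0.0.0.0", 8080), OrderStatusHandler).serve_forever()
```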
6. Make container security a priority
“Because containers contain system-specific libraries and dependencies, they’re more prone to be affected by newly discovered security vulnerabilities,” as Red Hat’s Ashesh Badani, VP and general manager, OpenShift, noted in a recent article. “Trusted registries, image scanning, and management tools can help identify and patch container images automatically.”
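As one way to put that advice into practice, here is a minimal sketch of a CI gate that reads an image scanner's JSON report and blocks the build on serious findings. The report path, field names, and severity threshold are hypothetical, since real scanners such as Clair or Trivy each have their own output formats.

```python
# Minimal sketch of a CI gate over an image-scan report. The file name,
# JSON fields, and threshold are hypothetical stand-ins for a real
# scanner's output format.
import json
import sys

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}

def gate(report_path: str = "scan-report.json") -> None:
    with open(report_path) as f:
        findings = json.load(f).get("vulnerabilities", [])
    blocking = [v for v in findings
                if v.get("severity", "").upper() in BLOCKING_SEVERITIES]
    if blocking:
        for v in blocking:
            print(f"{v.get('id', 'unknown')}: {v.get('severity')}")
        sys.exit(1)  # fail the pipeline so the image never ships
    print("Image passed the vulnerability gate.")

if __name__ == "__main__":
    gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json")
```

Wiring a check like this into the pipeline keeps vulnerable images out of the trusted registry instead of relying on someone to notice them later.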