Modern applications are moving away from the monolithic approach, opting instead to deploy microservices that collectively deliver the overall functionality. There are several ways to deploy microservices, catering to the needs of every application and organization. When deploying a microservice, the typical questions that arise are: should it run on-prem or in the cloud? Do we need containers? How will the services communicate with each other? The choice of deployment method depends on factors like application size, traffic flow, event frequency, and scaling requirements.
Popular methods of deploying microservices
The top 5 ways to deploy microservices are:
Individual host for single instance
In this method, each service instance runs on its own host or VM. This pattern isolates microservices from one another and constrains the resources any single instance can consume. With virtual infrastructure, each service is packaged as a VM image and deployed on a separate machine. This is the traditional VM deployment method.
Multiple instances on one host or VM
In this deployment method, a single host or VM serves the entire application, with multiple services residing inside it. The host infrastructure can be an on-prem server or a cloud-native platform. This is the most lightweight way to deploy microservices for small applications at limited cost.
The challenge of such a deployment on-prem is scalability: more and more servers must be added as traffic or the application grows. The biggest challenge in the cloud is that one service is not properly isolated from another. Using VMs in the cloud also means replicating and storing the same OS for every instance, paying additional license fees for every OS replica, and consuming redundant storage.
Containerization
Containers virtualize multiple application runtimes on a shared OS kernel. This drastically reduces the cost of deploying microservices in the cloud. Each microservice runs independently with no shared dependencies, providing the flexibility to scale one microservice as required without interrupting the operation of another.
Container Orchestration
Container orchestration platforms like Kubernetes remove the manual effort of running containerized instances by automating the process. In a manual approach, a lot of things need to be factored in, for example:
- Figuring out how to start the right container at the right time
- Handling system resource usage and storage processes
- Creating communication channel between services
- Managing failed containers and hardware
Orchestration platforms take care of all this and add features like routing, security, load balancing, and centralized logging. For companies heavily dependent on containers, orchestration is the proverbial silver bullet.
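The core idea behind these platforms is a reconciliation loop: compare the desired state with the actual state and act on the difference. A minimal sketch in Python (a hypothetical simulation, not real Kubernetes code; the service names are illustrative):

```python
# Minimal sketch of an orchestrator's reconciliation loop (hypothetical,
# not the Kubernetes API): compare desired vs. actual container counts
# per service and start or stop containers to close the gap.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the (action, service) pairs needed to reach the desired state."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions += [("start", service)] * (want - have)
        elif have > want:
            actions += [("stop", service)] * (have - want)
    # Containers for services no longer declared are stopped as well.
    for service, have in actual.items():
        if service not in desired:
            actions += [("stop", service)] * have
    return actions

# Example: scale `orders` up, replace a crashed `payments` container,
# and retire a `legacy` service that is no longer declared.
plan = reconcile({"orders": 3, "payments": 2},
                 {"orders": 1, "payments": 1, "legacy": 1})
```

Running this loop continuously is what lets an orchestrator recover from failed containers without human intervention: a crashed container simply shows up as a gap between desired and actual counts.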
Serverless deployment
Deploying microservices as serverless functions, fully leveraging cloud-native services such as AWS Lambda and Azure Functions, is increasingly popular. Such a platform, provided by a public cloud vendor, automates the entire deployment process and comes with the tools needed to create a service abstraction over a set of highly available instances. The setup works on a pay-as-you-go model and relieves the development team from operating and managing pre-allocated resources (physical or virtual servers, hosts, containers), letting them focus on coding.
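A microservice deployed this way reduces to a single function. The sketch below uses the AWS Lambda Python handler signature (`event`, `context`); the `ORDERS` lookup and the API-Gateway-style response shape stand in for real business logic:

```python
import json

# Sketch of a microservice as a serverless function in the AWS Lambda
# Python handler style. ORDERS is a hypothetical in-memory stand-in for
# a real data store.
ORDERS = {"42": {"id": "42", "status": "shipped"}}

def handler(event, context=None):
    """Return one order as an API-Gateway-style HTTP response."""
    order_id = (event.get("pathParameters") or {}).get("id")
    order = ORDERS.get(order_id)
    if order is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(order)}
```

The platform handles scaling, availability, and routing; the team ships only the handler.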
Patterns to Deploy Microservices
Rolling Deployment
The rolling deployment strategy rolls out a new version of an application gradually and in a controlled manner. A new replica set of the service is created, and traffic is gradually redirected to the new environment. Once all traffic has moved to the new version, the old version is removed from the server.
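The mechanics can be sketched in Python (a hypothetical helper, not any real tool's API): instances are replaced in small batches so that some capacity is always serving traffic.

```python
# Sketch of a rolling update (hypothetical helper): replace instances
# one batch at a time so part of the fleet always serves traffic.

def rolling_update(instances: list, new_version: str, batch_size: int = 1):
    """Yield the fleet state after each batch is replaced."""
    fleet = list(instances)
    for i in range(0, len(fleet), batch_size):
        for j in range(i, min(i + batch_size, len(fleet))):
            fleet[j] = new_version   # swap one batch to the new version
        yield list(fleet)            # intermediate state still serves traffic

states = list(rolling_update(["v1", "v1", "v1"], "v2"))
```

In a real system each intermediate state would be health-checked before the next batch is replaced, which is what makes the rollout controlled rather than merely gradual.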
Blue Green
In the blue-green method, an identical copy of a running service is created. The old version is called blue and the new one green, and both have the same capacity. With the help of a load balancer, traffic is switched from the blue environment to the green one while blue keeps running, so rollback is immediate if something goes wrong. Once traffic has moved to green, the blue environment is kept on standby or updated to serve as the template for the next release. The blue-green approach fits the CI/CD process and eliminates disruption to the end user during cutover.
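The essential property is that cutover and rollback are each a single pointer flip at the load balancer. A minimal Python sketch (hypothetical router and version names):

```python
# Sketch of a blue-green cutover (hypothetical router, assumed names):
# two identical environments; the load balancer flips a single pointer,
# so rollback is just flipping it back.

class Router:
    def __init__(self):
        self.envs = {"blue": "app-v1", "green": None}
        self.live = "blue"            # all traffic goes here

    def deploy_green(self, version: str):
        self.envs["green"] = version  # stage the new version alongside blue

    def cutover(self):
        self.live = "green"           # atomic switch; blue stays on standby

    def rollback(self):
        self.live = "blue"            # instant recovery path

router = Router()
router.deploy_green("app-v2")
router.cutover()
serving = router.envs[router.live]
```

Because blue stays warm until the next release, a bad green deployment never leaves users without a working version.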
Canary
Canary rollouts are an automated process implemented through a continuous delivery pipeline. In this method, a small percentage of users is directed to the new service version first. The infrastructure in the target environment is updated in phases, and traffic is gradually switched to the new environment until the cutover is complete. This deployment method carries the least risk and is well suited for complex distributed systems whose failure can have a devastating impact.
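One common way to pick the canary population is stable hashing, so each user consistently sees the same version for the duration of the rollout. A Python sketch (hypothetical routing function, assumed names):

```python
import hashlib

# Sketch of canary routing (hypothetical): a stable hash of the user id
# sends a fixed percentage of users to the canary version, and the same
# user always lands on the same version during the rollout.

def route(user_id: str, canary_percent: int) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# With a 10% canary, roughly one user in ten lands on the new version.
share = sum(route(f"user-{i}", 10) == "canary" for i in range(1000)) / 1000
```

Raising `canary_percent` in phases, while watching error rates for the canary cohort, is what turns this into the gradual, low-risk rollout described above.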
Conclusion
The ideal method to deploy and run microservices varies with the needs and scale of an application. For a small application, typically built for internal functions, deploying on a single server is a good starting point. For large cloud-native applications running on thousands of containers, the DevOps path of continuous deployment makes a container orchestration tool indispensable.