A microservices architecture is made up of many small services working in tandem to provide a seamless experience. Each is an independent, self-sufficient unit that delivers one piece of functionality, and together these pieces complement each other to form a complete, useful solution. For more details on microservices, read my other blog.
But with so many services and deployable units comes the overhead of isolating them, maintaining them, using CPU and memory efficiently, deploying them, networking them, and much more related to infrastructure, DevOps and sysops.
Docker is the technology used to create containers, and how containers help us manage all these complexities is what I am going to talk about here.
With a Docker image, all the required OS-level setup is packaged together with a thin OS layer and the service itself.
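For illustration, here is a minimal sketch of what building such an image might look like. The image name "myorg/orders-service", the "./service" binary and the Alpine base are all hypothetical placeholders:

    # Write a minimal Dockerfile: a thin OS layer plus the service binary
    cat > Dockerfile <<'EOF'
    FROM alpine:3.19
    COPY ./service /usr/local/bin/service
    CMD ["/usr/local/bin/service"]
    EOF

    # Build the image and push it to a registry
    docker build -t myorg/orders-service:2.0 .
    docker push myorg/orders-service:2.0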
The first complexity is deployment. With containers, deployment is just one command. The development team builds an image, tests it locally and pushes it to a registry. After that, a single "docker pull" and "docker run" sets up a new environment. Updating an existing environment to the latest version is just as simple; you only have to add a "docker stop" to switch off the running version first.
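Put concretely, the whole lifecycle might look like this (the image and container names are hypothetical):

    # First-time setup on a fresh machine
    docker pull myorg/orders-service:2.0
    docker run -d --name orders -p 8080:8080 myorg/orders-service:2.0

    # Updating an existing environment: stop the old version, start the new one
    docker stop orders && docker rm orders
    docker run -d --name orders -p 8080:8080 myorg/orders-service:2.1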
Another complexity is managing different environments, and that is solved by creating and changing environments dynamically. Run the commands above on bare-metal machines or VMs and your environment is up. The same infrastructure can host different environments. This makes containers extremely useful for product companies in support scenarios: they can deploy any prebuilt image, with the required underlying OS and configuration, to check for bugs and verify patches, with no need to recreate the environment again and again.
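For example, a support engineer could reproduce the exact version a customer runs on any spare machine, alongside a newer environment on the same host (again with hypothetical names and ports):

    # Reproduce the exact version a customer reported a bug against
    docker pull myorg/orders-service:1.7
    docker run -d --name orders-repro -p 8081:8080 myorg/orders-service:1.7

    # The same host can run a second, different environment at the same time
    docker run -d --name orders-latest -p 8082:8080 myorg/orders-service:2.1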
Next is isolation. Your services are all individual, and when you patch one of them you don't want to modify, restart and re-test the others. With containers you achieve this by updating just that specific container. You could achieve the same in other ways, such as running a separate process per service, but that eventually pushes you toward separate VMs or machines for isolation, so that a change at the infrastructure level does not impact the rest.
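A sketch of what this looks like with a few hypothetical services side by side:

    # Three independent services running on the same host
    docker run -d --name orders    myorg/orders-service:2.1
    docker run -d --name payments  myorg/payments-service:1.4
    docker run -d --name inventory myorg/inventory-service:3.0

    # Patch only orders; payments and inventory are never restarted
    docker stop orders && docker rm orders
    docker run -d --name orders myorg/orders-service:2.1.1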
This also brings up efficient use of your infrastructure. If you create a separate VM or machine for each service, you may never use that hardware to its full capacity; you kept it separate purely for isolation. With containers these boundaries are not hard, so the services can still share CPU and RAM efficiently.
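Docker exposes this through resource flags on "docker run". The limits below are illustrative:

    # Soft limits: each service gets a cap, but unused capacity
    # stays available to the rest of the host
    docker run -d --name orders   --cpus="1.5" --memory="512m" myorg/orders-service:2.1
    docker run -d --name payments --cpus="0.5" --memory="256m" myorg/payments-service:1.4

    # Compare live usage against the limits
    docker stats --no-stream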
Networking between services, and securing it, is another challenge. This is overcome by creating internal sub-networks that are exposed only to the relevant containers and not to the outside world. You can also replicate the same set of containers on a separate network for each tenant to achieve multi-tenancy.
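A minimal sketch of per-tenant internal networks, using the same hypothetical services as above:

    # One internal network per tenant; nothing on it is reachable from outside
    docker network create --internal tenant-a
    docker run -d --name orders-a   --network tenant-a myorg/orders-service:2.1
    docker run -d --name payments-a --network tenant-a myorg/payments-service:1.4

    # A second tenant gets its own copy of the same stack, fully separated
    docker network create --internal tenant-b
    docker run -d --name orders-b   --network tenant-b myorg/orders-service:2.1
    docker run -d --name payments-b --network tenant-b myorg/payments-service:1.4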