This post is the fourth in our five-part video series on "Docker & Kubernetes".
In this video blog, we cover High Availability in Kubernetes, Deployments, Services, and Load Balancers, and we also discuss how to set up a highly scalable application in Kubernetes.
Note: If you missed my previous post, "Kubernetes Architecture | Kubernetes Components", you can check it here.
High Availability In Kubernetes
If an application runs in a single container, that container can easily fail. Just as with virtual machines, for high availability in Kubernetes we run multiple replicas of a container. To manage these replicas, Kubernetes uses a Deployment, which is a type of controller.
A Deployment is a controller used to manage multiple replicas. Using a Deployment, we can scale the replicas up and down. We can also define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
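As a minimal sketch of this idea (the name `web-deployment` and the nginx image are illustrative, not from the original post), a Deployment that keeps three replicas of a container running could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # illustrative name
spec:
  replicas: 3                 # Kubernetes keeps three pod replicas running at all times
  selector:
    matchLabels:
      app: web                # the Deployment manages pods carrying this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # illustrative image
        ports:
        - containerPort: 80
```

You could then scale up or down on demand, for example with `kubectl scale deployment web-deployment --replicas=5`.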
Kubernetes Services enable communication between various components within and outside of the application. Services help us connect applications with other applications. A Service provides a single IP address and DNS name through which the pods behind it can be accessed.
Load balancing efficiently distributes incoming network traffic across a group of backend servers. A load balancer is a device (or service) that distributes network or application traffic across a cluster of servers.
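As a hedged sketch of both ideas together (assuming pods labeled `app: web`, as a Deployment might create; the Service name is illustrative), a Service gives those pods one stable IP and DNS name, and setting `type: LoadBalancer` asks the cloud provider to put an external load balancer in front of it:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service           # illustrative name
spec:
  type: LoadBalancer          # use ClusterIP instead for internal-only access
  selector:
    app: web                  # traffic is balanced across all pods with this label
  ports:
  - port: 80                  # port exposed by the Service
    targetPort: 80            # container port on the backend pods
```

The `selector` is what ties the Service to its backends: any pod matching the label automatically joins the endpoint list, and traffic is spread across them.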
Set Up a Scalable Application
In this HA cluster, we expose the container, and there is a mesh network inside the cluster. Even if your container is running on worker node one, when you reach the exposed port on worker node two, your packet will still be routed to the correct destination very quickly.
In Kubernetes, we can perform load balancing across containers just as we do across different virtual machines. If you manually delete a pod, or a pod gets deleted accidentally or is restarted, the Deployment will make sure to bring it back, because Kubernetes has a feature to auto-heal pods.
When a new pod is created, it is assigned a new IP address. Even so, the reachability of the application does not change, because the Service IP address stays constant and traffic is sent to the pods through the Service. In the background, the Service keeps monitoring whether pods have gone up or down, and it maintains the stable IP address and an up-to-date endpoint list.
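This self-healing behavior can be observed from the command line. The session below is an illustrative sketch, not output from the original post: it assumes a running cluster, a Deployment whose pods carry the label `app: web`, and a Service named `web-service`.

```shell
# List the pods currently backing the application
kubectl get pods -l app=web

# Simulate an accidental deletion of one pod
# (substitute a real pod name from the previous command)
kubectl delete pod <one-of-the-pods>

# The Deployment notices the missing replica and creates a replacement,
# which comes up with a new pod IP
kubectl get pods -l app=web

# The Service's endpoint list is updated automatically, so clients
# using the Service IP or DNS name never notice the change
kubectl get endpoints web-service
```

The key point: clients always talk to the constant Service address, while the endpoint list behind it changes as pods come and go.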
- [Part 1] Docker vs Virtual Machine | Physical vs Virtual Servers
- [Part 2] Docker Architecture | Docker Engine Components | Container Lifecycle
- [Part 3] Kubernetes Architecture and Components | Kubernetes Nodes | Managed Kubernetes Service
- Services, Load Balancing, and Networking Documentation
- Certified Kubernetes Administrator (CKA) Certification Exam: Everything You Must Know
- Certified Kubernetes Administrator (CKA) Certification: Step By Step Activity Guides/Hands-On Lab Exercise
Join FREE Masterclass
To learn about the difference between Kubernetes and Docker, and between virtual machines and containers; why you should learn Docker and Kubernetes; job opportunities for Kubernetes administrators in the market; and what to study, including the hands-on labs you must perform to clear the Certified Kubernetes Administrator (CKA) certification exam, register for our FREE Masterclass.