Google Kubernetes Engine (GKE) offers a managed environment for deploying, managing, and scaling containerized applications using Google infrastructure.
Need For Kubernetes
There has been a paradigm shift in the way computing is done: the technical industry as a whole is moving from conventional infrastructure to cloud computing. Cloud computing solves problems such as scalability and ease of access, but it brings a few challenges of its own. For instance, a huge variety of hardware and software is at work, and the same physical resources often run many different programs, sometimes even on different operating systems. This causes inconsistencies, such as version mismatches or failures due to operating-system dependencies.
Containers
The solution to this is “containers“, which package a program together with its dependencies so that it runs the same way regardless of the underlying Operating System. Containers can be deployed on any compatible host, and many containers can share a single machine.
Containers essentially are a simple way to deploy and use cloud-based services. In practical application, however, one may end up with a great many containers, and managing them manually gets taxing. Deploying, managing, connecting, and updating that many containers by hand would need a dedicated team, which would make the process inefficient. Hence we need a management system, or “Container Scheduler”, to do all these tasks. One such system is Kubernetes.
Read more about Containers in our blog at Containers For Beginners.
What Is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containers. Developed by Google, Kubernetes (also written as K8s) makes a whole group of machines work as a single unit. Using Kubernetes ensures that multiple people can run containerized applications and work together, even if they are on different platforms.
To know more about Kubernetes read our blog at Kubernetes For Beginners.
Why Choose Google Kubernetes Engine?
The next step in the process is choosing the right platform to manage the container workload. Most of the big cloud vendors offer a managed Kubernetes service, as discussed above: Microsoft (Azure Kubernetes Service), Amazon (Amazon Elastic Kubernetes Service), and Google itself (Google Kubernetes Engine).
One of the big reasons for this is the great amount of flexibility offered. Kubernetes is designed so that it can be used anywhere and deployed in public, private, or hybrid clouds. This gives companies greater reach, with a higher degree of security, availability, and reliability. The advantages Kubernetes offers can be summarised as follows:
- Open Source
- Increased Productivity
- Multi-Cloud Capability
- Portability & Flexibility
Finally, we move on to choosing Google Kubernetes Engine for managing our containerized applications. The reasons for that are:
- The first and probably most obvious point here is the team behind it. As Kubernetes and GKE are both made by Google, GKE offers seamless integration with a number of Google services.
- Secondly, new Kubernetes features and tools typically arrive on GKE before they reach other vendors.
- Thirdly, GKE offers mature auto-scaling of nodes (cluster autoscaling), an area where it led its Amazon and Microsoft counterparts.
- Last but not least, Google Kubernetes Engine is also among the cheapest managed Kubernetes services of the top three vendors. All these points make it easy to see why GKE would be a first choice for anyone looking to work with containers.
Salient Features Of GKE
Some of the key features of Kubernetes Engine include:
- Pod and Cluster Autoscaling: GKE provides horizontal Pod autoscaling, which adjusts the number of Pod replicas based on CPU utilization or other custom metrics, and vertical Pod autoscaling, which adjusts the CPU and memory requested by the Pods themselves. Cluster autoscaling then adds or removes nodes to match the workload.
- Kubernetes Applications: Google provides pre-built, enterprise-ready applications with licensing, billing, and portability included. Such applications increase the user’s productivity, as much of the work is already done.
- Integrated logging and monitoring: GKE offers Cloud Logging and Cloud Monitoring with simple checkbox configurations which makes it easier to gain insight into how an application is running.
- Fully Managed: GKE clusters are fully managed by Google Site Reliability Engineers (SREs), ensuring that the cluster is available and up-to-date.
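The horizontal Pod autoscaling described above is configured declaratively. As a minimal sketch, this HorizontalPodAutoscaler manifest scales a hypothetical Deployment named `web` on average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # add replicas above 60% average CPU
```

Applied with `kubectl apply -f hpa.yaml`, this keeps between 2 and 10 replicas running, growing the count when average CPU crosses the target.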
Read more about Google Kubernetes Engine Features.
Google Kubernetes Engine: Workloads & Mode Of Operation
GKE works with containerized applications: applications packaged into platform-independent, isolated user-space instances, for example by using Docker. In GKE and Kubernetes, these containers, whether for applications or batch jobs, are collectively called workloads. Before deploying a workload on a GKE cluster, users must first package the workload into a container.
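That packaging step can be sketched with a minimal Dockerfile; the application file `server.py` here is hypothetical:

```dockerfile
# Base image with the language runtime
FROM python:3.12-slim
WORKDIR /app
# Copy the (hypothetical) application into the image
COPY server.py .
# Command the container runs on start
CMD ["python", "server.py"]
```

Building this image (`docker build -t my-app .`) and pushing it to a registry produces the container artifact that GKE deploys.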
When one creates a cluster on Google Kubernetes Engine, they can choose from two modes of operation, described below.
1.) Standard mode
This is the original mode of operation that came out with GKE and is still used today. Here the user gets node configuration flexibility and full control over managing the clusters and node infrastructure. It is best suited for those looking to have full control over every little aspect of their GKE experience.
2.) Autopilot mode
In this mode, Google manages the entire node and cluster infrastructure, providing a more hands-off approach. However, it comes with some restrictions to keep in mind: the choice of Operating System is currently limited to just two, and some features are available only via the CLI.
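Assuming the Google Cloud SDK is installed and a project is configured, the two modes are chosen at cluster-creation time; the cluster names and region below are placeholders:

```shell
# Standard mode: the user controls node configuration
gcloud container clusters create my-standard-cluster \
    --region=us-central1 --num-nodes=3

# Autopilot mode: Google manages node and cluster infrastructure
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1
```

Note that Autopilot clusters take no node-count or machine-type flags: node provisioning is Google's responsibility in that mode.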
Also check our blog post on Google Certified Professional Cloud Architect.
GKE Architecture
Here we look at the underlying architecture of Google Kubernetes Engine and focus on a few important components that facilitate its smooth functioning.
Control Plane
The control plane runs a number of processes, including the Kubernetes API server, the scheduler, and the core resource controllers. GKE manages the control plane on the user’s behalf for the lifetime of the cluster.
Clusters
We already went through what clusters are: a group of machines working together. All the machines in a cluster are made to function as a single unit by Kubernetes.
Nodes
A cluster can have one or more nodes: worker machines (Compute Engine VM instances in GKE) that run the containerized applications. The job of a node is to run the services needed to support the cluster’s containers. Nodes with the same configuration are grouped within a cluster into a node pool.
Pods
Pods are defined as the smallest deployable unit of computing that can be managed by Kubernetes. As shown in the figure, a cluster can contain multiple Pods, related or unrelated, grouped within logical boundaries. A Pod may contain one or more containers, which run the required applications.
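A minimal Pod manifest illustrates the one-or-more-containers structure; the name and image here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: web                # a single container in this Pod
    image: nginx:1.27
    ports:
    - containerPort: 80      # port the container listens on
```

In practice Pods are rarely created directly; a controller such as a Deployment manages them, using this same Pod specification as its template.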
Read more about Pods in our blog at Kubernetes Pods For Beginners.
Containers
Containers are an approach to operating-system-level virtualization. A single container can run a program of any size as needed, from a microservice to a larger application, in an isolated environment. GKE distributes and schedules containers across clusters dynamically, so as to keep efficiency high.
Applications Of Kubernetes
Now that we’ve been through the working of Kubernetes, let’s take a look at what it actually does. The Kubernetes Engine can be used for the following purposes:
- Creating or resizing clusters of containers
- Creating controllers, Pods, jobs, services, and load balancers
- Updating and upgrading container clusters
- Debugging of cluster containers
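As an illustrative sketch (cluster, deployment, and Pod names are placeholders), these tasks map onto gcloud and kubectl commands such as:

```shell
# Resize an existing cluster's node pool
gcloud container clusters resize my-cluster --num-nodes=5

# Create a Deployment and expose it through a load balancer
kubectl create deployment web --image=nginx:1.27
kubectl expose deployment web --type=LoadBalancer --port=80

# Upgrade the cluster control plane
gcloud container clusters upgrade my-cluster --master

# Debug a container by inspecting its logs
kubectl logs web-6d4cf56db6-xxxxx   # hypothetical Pod name
```

Each command assumes an authenticated `gcloud` session with cluster credentials already fetched (`gcloud container clusters get-credentials`).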
Check out: How to Create Google Cloud Function.
GKE Use-Cases
1.) Continuous Delivery Pipeline
GKE enables rapid application development and iteration by making it easy to deploy, update, and manage applications and services. Users can configure GKE, Cloud Source Repositories, Cloud Build, and Spinnaker for Google Cloud to automatically build, test, and deploy an application. When the app code is modified, the changes trigger the continuous delivery pipeline to automatically rebuild, retest, and redeploy the new version.
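As a sketch of such a pipeline, a Cloud Build configuration (`cloudbuild.yaml`) can build an image on each commit and roll it out to GKE; the image path, deployment name, and cluster details are hypothetical:

```yaml
steps:
# Build and push the container image, tagged with the commit SHA
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/web:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/web:$SHORT_SHA']
# Point the running Deployment at the new image
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/web',
         'web=gcr.io/$PROJECT_ID/web:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
```

A Cloud Build trigger watching the repository runs these steps automatically on every push, which is the rebuild-retest-redeploy loop described above.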
2.) Migrate a 2-tier application to GKE
Users can use Migrate for Anthos to move and convert workloads directly into containers in GKE. For example: migrate a two-tier LAMP stack application, with both application and database VMs, from VMware to Google Kubernetes Engine. Customers can improve security by making the database accessible only from the application container and not from outside the cluster, and by replacing SSH access with authenticated shell access through kubectl.
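The database-isolation step can be sketched with a Kubernetes NetworkPolicy; the tier labels and port are hypothetical for a LAMP-style stack:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-app-only
spec:
  podSelector:
    matchLabels:
      tier: database          # applies to the database Pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: app           # only application Pods may connect
    ports:
    - protocol: TCP
      port: 3306              # MySQL port in a LAMP stack
```

With this policy in place, traffic to the database Pods from anywhere other than the application Pods is dropped, which is the "accessible from the application container only" behaviour described above.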
Kubernetes Engine Security
Google Kubernetes Engine provides users with many ways to help secure their workloads. Protecting workloads in GKE involves many layers of the stack, including the contents of your container image, the container runtime, the cluster network, and access to the cluster API server.
It’s recommended to take a layered approach to protect clusters and workloads. Users can apply the principle of least privilege to the level of access provided to their customers and their application. In each layer, there may be different tradeoffs that must be made to allow the right level of flexibility and security for organizations to deploy and maintain their workloads in a secured environment.
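The principle of least privilege can be sketched with Kubernetes RBAC; this hypothetical Role grants read-only access to Pods in a single namespace and nothing more:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: web              # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]             # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create/delete
```

A RoleBinding then attaches this Role to a specific user or service account, so each identity holds only the permissions its job requires.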
Read more about Security in GKE.
Kubernetes Engine Storage
There are several storage options for applications running on Google Kubernetes Engine. The choices vary in terms of flexibility and ease of use. GCP offers several storage solutions that are specialized for different needs. And apart from this, Kubernetes provides storage abstractions that the users can use to offer storage to their clusters.
- Cloud Storage provides object storage, which backs Container Registry; private Docker container images can be stored in Container Registry.
- Filestore can be used if the application requires managed Network Attached Storage.
- Kubernetes storage abstractions provide filesystem and block-based storage to Pods. They are not used with managed databases or Cloud Storage.
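As a sketch of that Kubernetes storage abstraction, a PersistentVolumeClaim requests block storage that GKE provisions as a Compute Engine persistent disk; the size and storage class here are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce             # mountable read-write by one node at a time
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 30Gi           # requested disk size
```

A Pod then references `data-pvc` in its `volumes` section, and the underlying disk outlives any individual Pod that mounts it.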
Kubernetes Engine Networking
Kubernetes allows users to declaratively define how their applications are deployed, communicate with each other and with the Kubernetes control plane, and how clients can reach their applications.
The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports. Kubernetes provides different types of load balancing to direct traffic to the correct Pods.
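As an illustration of that load balancing (the selector label and ports are hypothetical), a Service of type LoadBalancer directs external traffic to matching Pods by IP address and port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer          # GKE provisions an external load balancer
  selector:
    app: web                  # routes to Pods carrying this label
  ports:
  - port: 80                  # port exposed by the load balancer
    targetPort: 8080          # port the containers listen on
```

Kubernetes keeps the set of backend Pods up to date automatically as Pods with the `app: web` label are created and destroyed.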
Read more about Networking in GKE.
Frequently Asked Questions
Is Google Kubernetes Engine free?
GKE has a free tier, which has limited resources and options available. Further features are available on a pay-per-use basis.
Do I need to have prior knowledge to run GKE?
Thanks to the new Autopilot mode, users don't need to micro-manage every aspect of their applications; Google handles the node and cluster infrastructure for them.
Is Kubernetes a container?
No, Kubernetes is in fact a way to manage all the application containers that an individual or an organization might have.
Related References
- GCP Associate Cloud Engineer: All You Need To Know About
- GCP Professional Cloud Architect: Everything You Need To Know
- Google Cloud Free Account: Steps to Register for Free-trial Account
- Introduction To Google Cloud Platform
- Google Cloud Services & Tools
- Introduction To Google Compute Engine
Next Task For You
If you are also interested and want to know more about the Google Professional Cloud Architect certification then register for our Free Class.