In this post, I am going to share some quick tips, Q&As, and useful links from our first day of OpenShift training. We covered the basics of Docker containers, Kubernetes architecture, and an overview of OpenShift.
So, here are some of the Q&As asked during the live session.
Q1. Hello, will the labs we do be worked on OKD or OCP?
Ans: We have two kinds of OpenShift deployments available: OKD, the community distribution of Kubernetes that powers OpenShift, and the OpenShift Container Platform (OCP).
The initial set of labs is performed on OKD, which is a distribution of K8s optimized for continuous application development and multi-tenant deployment.
Later we shift to the latest version of OCP, i.e. 4.8.11, which is one of the most common OpenShift deployments: a native cluster with a dedicated server, three masters, and two or more infrastructure nodes where the cluster routers are deployed.
➤ Watch and learn how to Install Single Node OpenShift Cluster (OKD): Step By Step
Q2. Will we use any containerization tool other than Docker, since recent versions of OCP no longer use Docker by default?
Ans: Yes, we will be using alternatives such as Podman, CRI-O, and Buildah.
➤ Check out more on Docker Alternatives
Q3. Should legacy or monolithic companies migrate their apps to microservices? Is there a particular strategy for them, or should only new projects go with an updated architecture standard?
Ans: The best approach is to move your monolith in pieces. A typical process to migrate from a monolithic system to a microservices-based system involves the following steps:
1. Identify logical components.
2. Flatten and refactor components.
3. Identify component dependencies.
4. Identify component groups.
5. Create an API for the remote user interface.
6. Migrate component groups to macroservices (move component groups to separate projects and make separate deployments).
7. Migrate macroservices to microservices.
8. Repeat steps 6-7 until complete.
➤ Read more about Monolithic vs Microservices
Q4. What is the difference between the Docker swarm and Kubernetes?
Ans: Kubernetes focuses on open-source and modular orchestration, offering an efficient container orchestration solution for high-demand applications with complex configurations.
Docker Swarm emphasizes ease of use, making it most suitable for simple applications that are quick to deploy and easy to manage.
➤ Find out more on Docker Swarm vs. Kubernetes
Q5. If we create an image from scratch, are there any considerations we need to keep in mind?
Ans: Creating Docker images from scratch is an expert task that requires advanced Linux skills. A detailed description is beyond the scope of this answer.
The following steps are intended to convey the general idea of how to proceed:
- Set up a chroot-like branch of your file system by copying all system files that are required by the new image into that branch.
- Install any required packages to the new branch.
- Remove all superfluous content.
- Create a .tar file that contains the file system branch.
- Import the .tar file into Docker.
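The steps above can be sketched as a short shell session. The rootfs contents here are placeholder assumptions (a real image needs the binaries and libraries your application requires), and the final import step needs a running Docker daemon, so it is left commented out:

```shell
# Set up a chroot-like branch of the file system (contents are illustrative).
mkdir -p rootfs/bin rootfs/etc
echo "my-scratch-image" > rootfs/etc/hostname   # placeholder content only

# After installing required packages and removing superfluous content,
# pack the branch into a tar file.
tar -C rootfs -cf rootfs.tar .

# Import the tar file into Docker (requires a running daemon):
# docker import rootfs.tar myimage:latest
```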
➤ Know everything about Container Images
Q6. What’s the most used or recommended Network Plugin on Docker?
Ans: The bridge network. It uses a software bridge that allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to that bridge network.
➤ Know more on Docker Networking
Q7. At a storage level in Docker, working with NFS, is it recommended in Production?
Ans: The right storage driver really depends on your use case: how much up-front configuration work you are willing to do, how much risk you can afford to take, and how much effort you will put into supervising it and making sure nothing wacky happens once it is implemented.
➤ Read about the different Docker Storage Comparisons
Q8. What happens if the Master Node dies? Will the rest of the nodes keep working?
Ans: The master runs the API server and manages the underlying cluster infrastructure. When it is offline, the API is offline, so the cluster ceases to be a cluster and is instead a bunch of ad-hoc nodes for that period. The cluster will not be able to respond to node failures, create new resources, move pods to new nodes, and so on until the master is back online.
However, applications will continue to run as normal unless nodes are rebooted or there is some dramatic failure during this time, because TCP/UDP services, load balancers, DNS, the dashboard, etc. should all continue to function.
Hence, the best practice is to assign multiple masters (an odd number) to your cluster.
➤ Check out High Availability in Kubernetes
Q9. How will we do the labs: an on-prem environment, a VM, or the cloud?
Ans: We will be using a cloud environment, as it is the best way to practice the labs: it is free, executes faster, and is easier to debug.
➤ Check out how to create a Kubernetes Cluster on Azure, AWS and Oracle Cloud.
Q10. In that architecture, where does the storage fall in?
Ans: Kubernetes stores the cluster state in a key-value store called etcd. Besides the cluster state, etcd also stores configuration details such as subnets and ConfigMaps.
➤ Know in-depth about the Kubernetes Architecture and also about the ETCD Backup & Restore in Kubernetes
Q11. Through which node will our applications be published to our clients?
Ans: Even though we can use master nodes to run our applications, the best practice is to constrain workloads to the worker nodes.
Q12. Is etcd integrated with the masters or the workers?
Ans: etcd is integrated with the master node. The 'd' was appended to "etc" to represent etcd's distributed model.
➤ Check out the blog on Kubernetes Architecture to know more.
Q13. Up to how many containers can we have in a pod?
Ans: There is no pod-specific limit, though there are some cluster-level criteria (as of Kubernetes v1.21):
- No more than 110 pods per node
- No more than 5000 nodes
- No more than 150000 total pods
- No more than 300000 total containers
The usual best practice is to run one container in a pod, with additional containers only for things like an Istio network-proxy sidecar.
➤ Know everything about the Multicontainer Pods
Q14. What are the ambassador and sidecar patterns?
Ans: A sidecar is an additional container that extends the functionality of the main container. A common example: you would like to send logs to some external system. Without changing the business logic (the main container), you can deploy a logging agent as a sidecar container.
An ambassador is a container that acts as a proxy to other parts of the system. A good example: you deploy an ambassador container that holds the credentials for the Kubernetes API, so your client does not have to handle authentication itself.
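The sidecar pattern described above can be sketched as a pod manifest. The pod name, images, and mount path are illustrative assumptions, not from the course:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar   # hypothetical name
spec:
  containers:
  - name: app                      # the main (business logic) container
    image: myapp:latest            # illustrative image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-agent                # sidecar: ships logs without changing the app
    image: fluentd:latest          # illustrative logging agent image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: logs                     # shared scratch volume both containers see
    emptyDir: {}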
➤ Check out more on Ambassador and Sidecar patterns.
Q15. How can we size a pod so that it complies with limit ranges?
Ans: Check out the steps below:
Step 1: Create a namespace
Step 2: Apply a limit to the namespace
Step 3: Enforcing limits at point of creation
Supported Resources:
- CPU
- Memory
Supported Constraints:
Across all containers in a pod, the following must hold true:

| Constraint | Enforced Behavior |
|---|---|
| Min | Min[resource] <= container.resources.requests[resource] (required) <= container.resources.limits[resource] (optional) |
| Max | container.resources.limits[resource] (required) <= Max[resource] |
| MaxLimitRequestRatio | container.resources.limits[resource] / container.resources.requests[resource] <= MaxLimitRequestRatio[resource] |
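The three steps above can be sketched as manifests. Every name and value here is an illustrative assumption, not a recommendation:

```yaml
# Step 1: create a namespace to scope the limits to
apiVersion: v1
kind: Namespace
metadata:
  name: limited-ns          # hypothetical name
---
# Step 2: apply a LimitRange to that namespace; Kubernetes then
# enforces it at pod/container creation time (step 3)
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limits      # hypothetical name
  namespace: limited-ns
spec:
  limits:
  - type: Container
    min:
      cpu: 100m             # Min[resource]
      memory: 64Mi
    max:
      cpu: "1"              # Max[resource]
      memory: 512Mi
    maxLimitRequestRatio:
      cpu: "4"              # limit may be at most 4x the request
```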
Q16. What is REST API?
Ans: A REST API (also known as RESTful API) is an application programming interface (API or web API) that conforms to the constraints of REST architectural style and allows for interaction with RESTful web services. REST stands for representational state transfer and was created by computer scientist Roy Fielding.
Q17. What is StatefulSet?
Ans: StatefulSet is the workload API object used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of those Pods.
➤ Read more about StatefulSets and you can also watch the video.
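A minimal StatefulSet can be sketched as follows; the name, service, replica count, and image are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web               # hypothetical name
spec:
  serviceName: web        # headless Service that gives pods stable DNS names
  replicas: 3             # pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21   # illustrative image
```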
Q18. Can each container have its own IP?
Ans: By default, the container is assigned an IP address for every Docker network it connects to.
➤ Know more about Networking in Containers
Q19. Between Kubernetes and OpenShift, which one has more jobs?
Ans: OpenShift is the enterprise version of K8s, and a lot of jobs are available in both verticals. As they are related, you can apply for both as per the job description.
Q20. Who manages Docker containers?
Ans: The Docker daemon ( dockerd ) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.
➤ Check out the Docker Architecture blog to know more
Q21. DO180 is Containers & Kubernetes. Can you please tell me what DO280 is about?
Ans: DO180 and DO280 are Red Hat OpenShift courses; DO280 covers Red Hat OpenShift administration. Both are part of our comprehensive OpenShift training.
Q22. Can the DevOps program be upgraded to DevSecOps so that the program can be much more rounded?
Ans: Security has a prominent role to play in every step of DevOps. Although our OpenShift training does cover the major topics, we highly recommend going through our Certified Kubernetes Security Specialist (CKS) training.
➤ Register for CKS free class
Q23. Is the certification exam fee covered as part of the course fees, and would this be covered by Red Hat?
Ans: No, you have to book the exam with Red Hat separately.
➤ Know more about the Red Hat OpenShift Certification
Q24. Can we run a container on port 1 in VM1 and port 2 in VM2? How do we get a common endpoint URL for clients?
Ans: That is exactly where we need K8s and OpenShift: a Service provides a single, stable endpoint in front of containers running across multiple nodes.
➤ Watch our video on Kubernetes Services
Q25. Which version of OpenShift is covered as part of the course?
Ans: The latest version, i.e. 4.8.11.
Q26. What is OKD?
Ans: OKD is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is a sibling Kubernetes distribution to Red Hat OpenShift.
➤ Check out how to create your own OKD Cluster
Q27. What is the role of infra nodes? What is the minimum number of infra nodes required, and what software components go into infra node(s)?
Ans: Infrastructure nodes allow customers to isolate infrastructure workloads for two primary purposes:
- to prevent incurring billing costs against subscription counts, and
- to separate maintenance and management.
Q28. Does OpenShift allow entering the OS through the namespace?
Ans: Yes, from the CLI.
Q29. Like we have the Kubernetes documentation (e.g. kubernetes.io), is there something similar for Red Hat OpenShift?
Ans: Yes, the link is https://docs.openshift.com/
Q30. Does OpenShift provide dashboards for diverse data sources, create live presentations to highlight KPIs, and manage your deployment in a single UI, similar to what Kibana does?
Ans: Yes, and more than that: dashboards such as Prometheus, Grafana, and Thanos are integrated with OpenShift.
➤ Check out how to set up Prometheus and Grafana for your Kubernetes Cluster
Q31. Are microservices used to host databases too? My concern is that for a VLDB it is difficult to estimate the resources and block them for peak loads. Which DBs are mostly hosted on Docker/Kubernetes?
Ans: Yes, we can host databases this way. Here are some of the best practices:
- Auto-Provisioning – Automatically provision different environments using code that is version-controlled (e.g. in Git).
- Auto-Redundancy – Cloud-native apps are highly resilient to failure. When an issue occurs, apps move to another server or VM automatically and seamlessly.
- Auto-Scaling – Increase/decrease resources whenever there is a spike in traffic. Application design should use microservices.
- API Exposure – Expose APIs using REST or gRPC.
- Enable testing.
- Enable a firewall and a service mesh.
- Utilize multi-cloud deployment.
- Set up Continuous Integration/Continuous Delivery.
- Secure data in transit.
- Control data access for databases in the cloud.
- Keep data in multiple regions and zones.
Some of the Cloud-Native Databases Tools are:
- PostgreSQL
- MongoDB
- Apache Cassandra
- Couchbase
- CockroachDB
- CrateDB
Q32. Do you think containers will be cost-effective compared to Azure VMs?
Ans: It depends on a lot of different factors, so you could say it all depends on what you want to run.
Q33. On Azure, how is a container represented, or is it the same?
Ans: It works the same way as on-premises or on an Azure VM.
Q34. Is OpenShift only for Linux, or is it available for Windows as well?
Ans: Mostly on Linux, but a single-node cluster is available on Windows.
Q35. Can we automate resource management in container management?
Ans: Yes.
➤ Read more about Container Management
Q36. Which is more popular, Kubernetes or OpenShift?
Ans: OpenShift is the enterprise version of K8s; choosing one depends on your requirements. Both are popular, and adoption of both is increasing at a brisk pace.
➤ Check out our OpenShift vs Kubernetes blog to know more
Q37. Can VMs also use containers?
Ans: Absolutely, you can install a container engine on VMs as well.
➤ Check out how to Install Docker Container Engine
Q38. Is Docker part of both Kubernetes and OpenShift?
Ans: Docker is just one container runtime, and Kubernetes has deprecated Docker as a runtime. OpenShift uses the CRI-O engine (with tooling such as Podman), and K8s is the main platform on which OpenShift runs.
Q39. Can we do role management/authorization in OCP over who can access the app inside a container?
Ans: Yes. Cluster administrators can use cluster roles and bindings to control who has various access levels to the OpenShift Container Platform itself and to all projects.
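As a hedged sketch, granting one user read-only access to a single project could look like the RoleBinding below; the user, project, and binding names are illustrative. The equivalent CLI would be `oc adm policy add-role-to-user view alice -n myproject`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-binding        # hypothetical name
  namespace: myproject      # illustrative project/namespace
subjects:
- kind: User
  name: alice               # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                # built-in read-only cluster role
  apiGroup: rbac.authorization.k8s.io
```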
Q40. Is a namespace part of the same physical/VM machine or a different one?
Ans: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called Namespaces.
Q41. What is a probe?
Ans: Kubernetes provides probes (health checks) to monitor and act on the state or condition of pods.
There are two types of probes:
- Liveness probes: Kubernetes uses liveness probes to know when to restart a container.
- Readiness probes: Kubernetes uses readiness probes to decide when a container is ready to accept traffic.
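A pod using both probe types could look like the manifest below; the pod name, image, endpoints, ports, and timings are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app          # hypothetical name
spec:
  containers:
  - name: app
    image: myapp:latest     # illustrative image
    livenessProbe:          # restart the container if this check fails
      httpGet:
        path: /healthz      # illustrative health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:         # stop routing traffic while this check fails
      httpGet:
        path: /ready        # illustrative readiness endpoint
        port: 8080
      periodSeconds: 5
```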
➤ Know in-depth about the K8s Probes and Liveness vs Readiness
Related/References
- Red Hat Certified Specialist [EX280]
- OpenShift for Beginners
- Docker & Certified Kubernetes Administrator (CKA) Training
Next Task for You
Begin your journey towards becoming a Red Hat Certified Specialist in OpenShift Administration and earning a lot more in 2021 by joining our Free Class.