This blog post covers an issue, and its fix, that many of us run into while performing the kubeadm init command: the [ERROR CRI]: container runtime is not running preflight failure.
The complete error may look like the following, sometimes accompanied by a warning.
$ kubeadm init
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
[ERROR CRI]: container runtime is not running [Issue Encountered]
This is a common issue when you run the kubeadm init command with containerd as the container runtime (CRI). In most cases, the problem lies in the config.toml file.
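Before deleting anything, you can confirm the root cause: many distribution packages ship a config.toml in which the CRI plugin is explicitly disabled, which produces exactly this kubeadm error. The sketch below simulates the check against a scratch copy in /tmp; on a real node you would grep /etc/containerd/config.toml itself.

```shell
# Simulated check: create a scratch config.toml like the one many
# distro packages ship, with the CRI plugin disabled.
cat > /tmp/config.toml <<'EOF'
disabled_plugins = ["cri"]
EOF

# If "cri" appears under disabled_plugins, kubeadm cannot talk to containerd.
if grep -q '"cri"' /tmp/config.toml; then
  echo "CRI plugin is disabled - remove or fix config.toml, then restart containerd"
fi
```

If the grep matches on your node, deleting (or regenerating) config.toml and restarting containerd, as shown in the fix below, re-enables the CRI plugin.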
Fix the Error
To fix the error, delete the config.toml file, restart containerd, and then retry the init command:
$ rm /etc/containerd/config.toml
$ systemctl restart containerd
$ kubeadm init
As for the warning [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly: firewalld, a dynamic daemon for managing firewalls, is active on your system. It can block communication between the nodes in your Kubernetes cluster. To resolve this, you need to ensure that the necessary ports are open in the firewall.
Also Check: Our previous blog post on How To Setup A Multi-Node Kubernetes Cluster on SUSE Linux
Nodes, containers, and pods must be able to communicate across the cluster to do their work. Use the following commands to open the listed ports.
Enter the following commands on the Master Node:
$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=10251/tcp
$ sudo firewall-cmd --permanent --add-port=10252/tcp
$ sudo firewall-cmd --permanent --add-port=10255/tcp
$ sudo firewall-cmd --reload
Each time a port is added, the system confirms with a success message.
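If you prefer not to repeat the command six times, the same control-plane ports can be opened in a single loop. This is a sketch, shown as a dry run with echo so you can review the commands first; remove the echo on a real node running firewalld.

```shell
# Same control-plane ports as above, opened in one loop.
# The echo makes this a dry run; drop it to actually apply the rules.
ports="6443 2379-2380 10250 10251 10252 10255"
for port in $ports; do
  echo sudo firewall-cmd --permanent --add-port=${port}/tcp
done
echo sudo firewall-cmd --reload
```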
Enter the following commands on each worker node:
$ sudo firewall-cmd --permanent --add-port=10251/tcp
$ sudo firewall-cmd --permanent --add-port=10255/tcp
$ sudo firewall-cmd --reload
You can also check a related discussion thread on GitHub here.
To Download Kubernetes CKA Sample Exam Questions, Click here.
Frequently Asked Questions
What is kubeadm init?
kubeadm init is a command used to bootstrap a Kubernetes cluster. It initializes the control-plane (master) node and sets up the Kubernetes control plane components.
How do I use kubeadm init to set up a Kubernetes cluster?
You can use the kubeadm init command along with various flags to customize your Kubernetes cluster's configuration. For instance, you can specify the Pod network CIDR, the API server advertise address, or the token for joining nodes.
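For example, the Pod network CIDR and the API server advertise address mentioned above map to standard kubeadm init flags. The values below are placeholders for illustration; substitute the CIDR expected by your Pod network add-on and the IP address of your own control-plane node.

```shell
# Illustrative only - the CIDR and the advertise address are placeholders
# for your environment; both are standard kubeadm init flags.
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.10
```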
What is containerd?
containerd is an industry-standard core container runtime. It provides a reliable and high-performance runtime with an emphasis on simplicity, robustness, and portability.
How is containerd used in Kubernetes?
containerd is one of the container runtimes that can be used with Kubernetes. Kubernetes can be configured to use containerd as the container runtime instead of Docker. This allows for better integration with the Kubernetes ecosystem and provides more flexibility in managing containers.
Related Post
- [Solved] The connection to the server localhost:8080 was refused – did you specify the right host or port?
- [Solved] Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
- How To Setup A Three Node Kubernetes Cluster For CKA: Step By Step
- Certified Kubernetes Administrator (CKA): Step-by-Step Activity Guide (Hands-on Lab)
- CKA Certification Exam (Certified Kubernetes Administrator)
- Kubernetes for Beginners – A Complete Beginners Guide
- Kubernetes Dashboard: An Overview, Installation, and Accessing
- CKA/CKAD Exam Questions & Answers 2022
- Docker Container Lifecycle Management: Create, Run, Pause, Stop And Delete
- CKA vs CKAD vs CKS – Differences & Which Exam is Best For You?
- Etcd Backup And Restore In Kubernetes: Step By Step
Join FREE Class
Discover the Power of Kubernetes, Docker & DevOps – Join Our Free Masterclass. Unlock the secrets of Kubernetes, Docker, and DevOps in our exclusive, no-cost masterclass. Take the first step towards building highly sought-after skills and securing lucrative job opportunities. Register for our FREE Masterclass now!
Thomas K says
Since I use SystemdCgroup = true in my config, I also do this
containerd config default | tee /etc/containerd/config.toml
sed -e 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /etc/containerd/config.toml
systemctl restart containerd
Rahul Dangayach says
Hi Thomas,
It seems that you are modifying the containerd configuration file to enable SystemdCgroup and restarting the containerd service. This configuration change will allow containerd to use SystemdCgroup as the default cgroup driver.
SystemdCgroup is a cgroup driver that is used by Systemd to manage system processes and their resource usage. When SystemdCgroup is enabled in containerd, it allows containerd to utilize the same cgroup hierarchy as the host system and ensures that container processes are properly managed and isolated.
Here are the steps you are taking:
1. Running the command containerd config default to get the default configuration file for containerd.
2. Piping the output of the previous command to the tee command, which writes the output to the specified file and also prints it to the console.
3. Using the sed command to change the SystemdCgroup configuration parameter from false to true in the containerd configuration file. The -i option tells sed to modify the file in place.
4. Restarting the containerd service using the systemctl restart containerd command to apply the configuration changes.
After completing these steps, containerd will be configured to use SystemdCgroup as the default cgroup driver, which should improve the reliability and performance of container management on your system.
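If you want to try the sed edit safely before touching the real file, here is a sketch that runs the same substitution against a scratch copy in /tmp (the file path and contents are placeholders for illustration):

```shell
# Safe dry run of the sed edit on a scratch file, so you can verify the
# substitution before applying it to /etc/containerd/config.toml.
cat > /tmp/containerd-test.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

sed -e 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /tmp/containerd-test.toml
grep SystemdCgroup /tmp/containerd-test.toml
```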
Thanks and Regards
Rahul Dangayach
Team K21Academy
Vinod says
It's working great, thanks
Rahul Dangayach says
Hi Vinod,
We are glad you liked our blog and that it helped you.
Please stay tuned for more informative blogs like these.
Thanks and Regards
Rahul Dangayach
Team K21 Academy
Lucky says
Worked for me.
Thanks 🙂
Rahul Dangayach says
Hi Lucky,
We are glad you liked our blog and that it helped you.
Please stay tuned for more informative blogs like these.
Thanks and Regards
Rahul Dangayach
Team K21 Academy
KubeAdmin says
This was super useful to setup v1.27.4 bare-metal cluster. Thanks.
vincent says
Hi, running the command to remove rm /etc/containerd/config.toml and restart containerd doesn't work, and I also tried containerd config default | tee /etc/containerd/config.toml
sed -e 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /etc/containerd/config.toml
this too doesn't work, it still prompts:
I0905 06:34:07.639074 27277 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-09-05T06:34:07-04:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
any idea what else can be tried ?
Vaishnavi Patil says
Hey! I’m facing the same issue. Were you able to figure out something?