This blog post covers a common issue, and its fix, that many of us encounter after configuring the Metrics Server in Kubernetes and running the kubectl top nodes command.
Metrics Server is a scalable, efficient source of container resource metrics for the built-in autoscaling pipelines in Kubernetes.
Metrics Server collects resource metrics from the Kubelet on each node and exposes them in the Kubernetes API server through the Metrics API, for use by both the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler. The Metrics API is also accessible via kubectl top, which makes troubleshooting autoscaling pipelines simpler.
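To see the same data that kubectl top reads, you can query the Metrics API directly through the API server. This is a quick sketch; it assumes kubectl is configured against a cluster where Metrics Server is already healthy:

```shell
# List node metrics via the aggregated Metrics API
# (this is the same API group that `kubectl top nodes` uses)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Per-pod metrics are exposed through the same API, scoped by namespace
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"
```

Both calls return JSON; if either fails with ServiceUnavailable, the aggregated API is not reachable, which is exactly the error this post covers.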
Metrics Server is not intended for non-autoscaling use cases. For example, don't use it as a source of metrics for a monitoring solution, or to forward measurements to one. In such circumstances, collect metrics directly from the Kubelet's /metrics/resource endpoint instead.
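For monitoring use cases, the raw resource metrics can be read straight from each Kubelet through the API server's node proxy. A minimal sketch, where NODE_NAME is a placeholder for one of your cluster's node names:

```shell
# Pick any node name from the cluster (placeholder selection)
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')

# Scrape the Kubelet's resource metrics endpoint via the API server proxy;
# output is in Prometheus exposition format
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/metrics/resource"
```

This path works even when Metrics Server itself is broken, since it bypasses the aggregated Metrics API entirely.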
A frequent symptom is that kubectl top nodes fails with: Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io).
Issue Encountered – Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io).
After installing the Metrics Server, running the kubectl top nodes command fails with the following error:
```shell
$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
```
Cause of Error
Metrics Server has specific cluster and network configuration requirements, and not all cluster distributions satisfy them by default:
- The kube-apiserver must have the aggregation layer enabled.
- Nodes must have webhook authentication and authorization enabled on the Kubelet.
- Kubelet certificates must be signed by the cluster Certificate Authority (or certificate validation must be disabled by passing --kubelet-insecure-tls to Metrics Server).
- The container runtime must implement the container metrics RPCs (or have cAdvisor support).
- The network must support the following communication:
  - Control plane to Metrics Server. The control plane node must be able to reach Metrics Server's pod IP on port 10250 (or the node IP and custom port if hostNetwork is enabled). Read more about control plane to node communication.
  - Metrics Server to Kubelet on all nodes. Metrics Server must be able to reach each node's address and Kubelet port. Addresses and ports are configured in the Kubelet and published as part of the Node object: addresses in the .status.addresses field and the port in the .status.daemonEndpoints.kubeletEndpoint.port field (default 10250). Metrics Server picks the first node address based on the list provided by the kubelet-preferred-address-types command-line flag (default InternalIP,ExternalIP,Hostname in manifests).
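Some of these requirements can be checked from the command line. The sketch below assumes kubectl access to the cluster; the pod label selector is what kubeadm-based clusters use for the API server, so adjust it for other distributions:

```shell
# Aggregation layer: the kube-apiserver should run with flags such as
# --requestheader-client-ca-file and --proxy-client-cert-file
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep requestheader

# Node addresses and Kubelet port that Metrics Server will use
# (note: the Port field is capitalized in the Node status JSON)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\t"}{.status.daemonEndpoints.kubeletEndpoint.Port}{"\n"}{end}'
```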
The error discussed here is caused by the third requirement above: the Kubelet's serving certificate is not signed by the cluster Certificate Authority, so Metrics Server refuses to scrape it.
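Before applying the fix, you can confirm this is really the cause. A quick sketch, assuming the standard metrics-server manifests deployed in the kube-system namespace:

```shell
# The aggregated APIService should report Available=False with a reason
kubectl get apiservice v1beta1.metrics.k8s.io

# The metrics-server logs typically show x509 certificate verification
# errors when the Kubelet certificate is not signed by the cluster CA
kubectl -n kube-system logs deploy/metrics-server --tail=20
```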
How to Fix the Error
We can fix the error by editing the Metrics Server Deployment in the kube-system namespace (kubectl -n kube-system edit deployment metrics-server) and adding --kubelet-insecure-tls under the container's args in the pod spec.
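The same change can be made without opening an editor by patching the Deployment. A sketch, assuming the standard metrics-server Deployment name in kube-system:

```shell
# Append --kubelet-insecure-tls to the container's args
kubectl -n kube-system patch deployment metrics-server --type='json' \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'

# Wait for the rollout to finish, then verify
kubectl -n kube-system rollout status deployment metrics-server
kubectl top nodes
```

Keep in mind that --kubelet-insecure-tls disables TLS verification of Kubelet serving certificates; it is fine for labs and test clusters, but for production the recommended fix is to have the Kubelet certificates signed by the cluster CA.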
Related/References
- Visit our YouTube channel “K21Academy”
- Certified Kubernetes Administrator (CKA) Certification Exam
- (CKA) Certification: Step By Step Activity Guides/Hands-On Lab Exercise & Learning Path
- Certified Kubernetes Application Developer (CKAD) Certification Exam
- (CKAD) Certification: Step By Step Activity Guides/Hands-On Lab Exercise & Learning Path
- Create AKS Cluster: A Complete Step-by-Step Guide
- Container (Docker) vs Virtual Machines (VM): What Is The Difference?
- How To Setup A Three Node Kubernetes Cluster For CKA: Step By Step
Join FREE Masterclass
Discover the Power of Kubernetes, Docker & DevOps – Join Our Free Masterclass. Unlock the secrets of Kubernetes, Docker, and DevOps in our exclusive, no-cost masterclass. Take the first step towards building highly sought-after skills and securing lucrative job opportunities. Register for our FREE Masterclass now!