This blog post gives a brief overview of the topics covered and some common questions asked in the Day 7 live interactive training for the Docker and Kubernetes certifications, i.e. CKA, CKAD, and CKS.
It will help you learn Docker and Kubernetes, prepare for these certifications, and get a better-paid job in the field of microservices, containers, and Kubernetes.
In the Day 6 CKA live session we covered an overview of Docker Compose, Docker Swarm, and Docker services. This week, on Day 7, we covered Docker secrets, Docker configs, Docker placement constraints, and Docker EE, and we performed labs for each.
Docker Secrets
Docker secrets are designed to store sensitive information such as usernames, passwords, SSL certificates, and other secure files. Secrets were introduced in Docker Swarm and later extended to Docker Compose from file format v3. Just imagine: we never want to store a configuration file with all of our passwords in GitHub or any other repository, whether public or private. In this guide we will walk you through the various aspects of setting up and using Docker secrets.
Before we get into the steps, here is a short introduction to how secrets are used by Docker Swarm services. First, we create and add a secret to the swarm, and then we grant our services access to the secrets they require. When the service is created (or updated), the secret is mounted into the container under the /run/secrets directory. From there, your application can read the secrets it requires.
To read more about Docker Swarm, click here
Q&A’s asked in the session are:
Q) How to Create Secrets
Ans:
Create secret from stdin:
Assuming your swarm is already running (for a quick test, run "docker swarm init" to initialize a swarm on the node), you can use the docker secret create command to add a new secret to the swarm. Here is a basic example:
echo "mypassword" | docker secret create mypass -
Create from file: Let's say you have a file containing a password, for example db_pass.txt. You can create a new secret from that file:
docker secret create db_pass db_pass.txt
Now let’s use the “docker secret ls” command to confirm that our secret was added:
docker secret ls
should output something like this:
ID                          NAME     CREATED         UPDATED
rkxav7s9rvnc9d7ct6dhkrsyn   mypass   3 minutes ago   3 minutes ago
Q) How to inspect Docker Secret
Ans: You can use the inspect command on a Docker secret too, just like other docker objects:
docker secret inspect secret_name
In our case the secret name is "mypass".
Q) How to remove Docker Secret:
Ans: You can remove a docker secret using the following command:
docker secret rm secret_name
Q) How to add secret to Service
Ans: Now that you’ve added the secret to the swarm, you need to give the service access to it. This can be done when the Docker service is created or updated. Here’s an example of adding a secret when the service is created:
docker service create --secret mypass --name secret alpine ping foxutech.com
In this example, I’m adding the mypass secret we created in the previous step to a service running the alpine image. If you already have a service running and want to add (or change) a secret, you can use the --secret-add option:
docker service update --secret-add mypass existing_service_name
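Once the service has the secret, you can confirm it is visible inside a task container by reading it from /run/secrets. A minimal sketch, assuming the service and secret names from the example above (the container lookup is illustrative):

```shell
# Find a container belonging to the "secret" service from the example above
CID=$(docker ps --filter "name=secret" -q | head -n 1)

# Each granted secret is mounted as a plain file at /run/secrets/<name>
docker exec "$CID" cat /run/secrets/mypass
```

Inside the container the secret is an ordinary file, so any application that can read from a file path can consume it without code changes.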
Q) How to Use a Secret in docker-compose.yml:
Ans: Let’s say we initialize two secrets for a database, db_user and db_password, created from the files my_db_user.txt and my_db_pass.txt. This technique does not require the initial setup with docker secret create.

Please remember that storing the "secret" files as plain text on your production machine is not secure; you will need to find a way to protect those files.

We use the external keyword to indicate that we created the secrets before using the docker-compose.yml file. You can use external secrets when Docker is in swarm mode (after docker swarm init). For a local setup, you can use the file variant instead:
version: '3'
secrets:
  db_user:
    file: ./my_db_user.txt
  db_password:
    file: ./my_db_pass.txt
Now you have to tell each service which secrets it is allowed to use.
version: '3'
secrets:
  db_user:
    external: true
  db_password:
    external: true
services:
  postgres_db:
    image: postgres
    secrets:
      - db_user
      - db_password
The postgres_db service can now access the db_user and db_password secrets.
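To actually run the external-secrets compose file above, the secrets must exist in the swarm first, and the file must be deployed as a stack. A sketch (the secret values and stack name are illustrative):

```shell
# Create the external secrets the compose file refers to
echo "dbuser"  | docker secret create db_user -
echo "s3cret!" | docker secret create db_password -

# Compose files that use swarm secrets are deployed as a stack,
# not with plain docker-compose up
docker stack deploy -c docker-compose.yml pg_stack
```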
To read more about Docker compose, click here
Docker Config
Docker swarm service configs allow you to store non-sensitive information, such as configuration files, outside a service’s image or running containers. This allows you to keep your images as generic as possible, without the need to bind-mount configuration files into the containers or use environment variables.
Configs operate in a similar way to secrets, except that they are not encrypted at rest and are mounted directly into the container’s filesystem without the use of RAM disks. Configs can be added to or removed from a service at any time, and services can share a config. You can even use configs in conjunction with environment variables or labels for maximum flexibility. Config values can be generic strings or binary content (up to 500 KB in size).
Q&A’s asked in the session are:
Q) How to embed configuration in an Image
Ans: We often see Dockerfiles like the following one, where a new image is created only to add a configuration file to a base image.
$ cat Dockerfile
FROM nginx:1.13.6
COPY nginx.conf /etc/nginx/nginx.conf
In this example, the local nginx.conf configuration file is copied into the NGINX image’s filesystem, overwriting the default configuration file shipped at /etc/nginx/nginx.conf. One of the main drawbacks of this approach is that the image needs to be rebuilt whenever the configuration changes.
To read more about Dockerfile, click here
Q) Explain Docker config by an example
Ans:
Add a config to Docker: The docker config create command reads from standard input when the last argument, which represents the file to read the config from, is set to - (a dash):
$ echo "This is a config" | docker config create my-config -
Create a redis service and grant it access to the config. By default, the container can access the config at /my-config, but you can customize the file name inside the container using the target option:
$ docker service create --name redis --config my-config redis:alpine
Verify that the task is running without issues using docker service ps. If everything is working, the redis task shows up in the Running state.
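The verification described above can be sketched as follows (the docker exec lookup is illustrative):

```shell
# The redis task should be listed in the Running state
docker service ps redis

# Read the config from inside a task container; by default it is
# mounted at /my-config in the container's filesystem
CID=$(docker ps --filter "name=redis" -q | head -n 1)
docker exec "$CID" cat /my-config
```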
Docker Placement Constraints
A placement constraint limits where a task can run: a task will not be scheduled on a node unless the node satisfies the constraint, and if no node satisfies it, the task remains in the pending state. Swarm services provide a few different ways to control the scale and placement of services across nodes. Placement constraints let you configure a service to run only on nodes with specific (arbitrary) metadata set, and cause the deployment to fail if no appropriate nodes exist.
Q&A’s asked in the session are:
Q) What is difference between Docker placement preference vs Docker placement constraints?
Ans: While placement constraints limit which nodes a service can run on, placement preferences try to place tasks on appropriate nodes in an algorithmic way (currently, only spreading them evenly). A placement preference helps you distribute tasks across nodes based on a label.

For example, a placement constraint of node.region==east lets a task run only on nodes labelled "east", whereas a placement preference over node.region spreads tasks evenly across nodes according to their node.region value. Nodes that do not have the label still receive tasks.
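A sketch of the two flags side by side, assuming a custom region label on the nodes (service names and image are illustrative):

```shell
# Constraint: tasks run ONLY on nodes labelled region=east; the
# deployment stays pending if no such node exists
docker service create --name web-east \
  --constraint 'node.labels.region == east' nginx:alpine

# Preference: tasks are spread evenly across the distinct values of
# node.labels.region; nodes without the label still receive tasks
docker service create --name web-spread \
  --placement-pref 'spread=node.labels.region' nginx:alpine
```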
Q) How to use constraints with Swarm mode
Ans: I will explain how to use constraints to limit the set of nodes where a task can be scheduled. By design, in a Swarm cluster all nodes can accept containers, but sometimes you need to target a subset of nodes. For example, your nodes may not all have the same hardware, and some may be more powerful. That’s where constraints come in: they let you specify on which nodes your service can be scheduled. Constraints are based on labels.
Suppose the cluster has 3 managers and 2 workers. By default, a new service will be scheduled on any one of these 5 nodes.
1. Docker’s default constraints: By default, nodes already have labels (such as node.role, node.hostname, and node.id). You can use these labels to restrict where your service is scheduled.
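A sketch of constraints built from the default labels (the image and the hostnames docker00-docker04 are assumptions based on the cluster described above):

```shell
# Restrict scheduling to manager nodes via the built-in node.role label
docker service create --name on-managers \
  --constraint 'node.role == manager' nginx:alpine

# Multiple constraints are ANDed: manager role AND not a specific host,
# which leaves docker00 and docker02 as candidates
docker service create --name on-some-managers \
  --constraint 'node.role == manager' \
  --constraint 'node.hostname != docker01' nginx:alpine
```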
If you specify multiple constraints, Docker will find nodes that satisfy every expression (it’s an AND match).
With this example, the new service will be scheduled on docker00 or docker02 (both are managers).
2. Add your own labels

With the default labels you can refine scheduling, but if you want to be more specific, add your own labels. Recently, I updated docker00 and docker01 in my cluster to the latest Raspberry Pi 3B+ (the others are Raspberry Pi 3B), so I have 2 nodes that are more powerful (CPU and network) than the others. It can be useful to schedule containers that need more CPU or network on these nodes.
For this, we need to:

Add a custom label to your nodes (only managers can update node labels). We add the label powerfull=true to the 2 nodes, then list a node’s labels to confirm.
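The labelling steps above can be sketched as follows (node names are taken from the cluster described earlier; the label key powerfull follows the article’s own spelling):

```shell
# Add the custom label to the two more powerful nodes
docker node update --label-add powerfull=true docker00
docker node update --label-add powerfull=true docker01

# Confirm the label landed on a node
docker node inspect docker00 --format '{{ .Spec.Labels }}'
```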
Start the service with the new constraint. Please note that the syntax for your own labels is node.labels.YOUR_LABEL_NAME.
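A sketch of starting a service constrained to the custom label (service name and image are illustrative):

```shell
# Custom node labels are referenced as node.labels.<name>
docker service create --name cpu-heavy \
  --constraint 'node.labels.powerfull == true' nginx:alpine
```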
3. Delete your own labels: Just in case you need it, a custom label can be removed from a node at any time.
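Removing a custom label is the mirror of adding it (node name from the earlier example):

```shell
# Remove the custom label from a node by key
docker node update --label-rm powerfull docker00
```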
Mirantis Kubernetes Engine Use Cases
The Mirantis Kubernetes Engine container platform (formerly Docker Enterprise/UCP) delivers immediate value to your business by reducing the infrastructure and maintenance costs of supporting your existing application portfolio while accelerating your time-to-market for new solutions.
For enterprises that need to manage consistent Kubernetes at scale, Mirantis Container Cloud is the best tool for deploying Mirantis Kubernetes Engine. But for individual Kubernetes Engine clusters, Mirantis Launchpad is a faster solution. Mirantis Kubernetes Engine itself can run acceptably well on medium-sized virtual machines in a home lab (e.g., hosted on VirtualBox), and runs very well on larger VMs on any private or public cloud. To know more about MKE, click here.
Q&A’s asked in the session are:
Q) What’s the use case of Mirantis Kubernetes Engine?
Ans: Mirantis Kubernetes Engine can run almost anywhere: on virtual machines, bare metal, or on any public cloud. Worker nodes can run on a range of Linux operating systems, or on Windows Server. Below are the common use-cases of the same.
- Run securely
- Run Windows-native container workloads
- Specialized hardware support
- Ready for work – batteries included.
- Consistent and Centrally-Manageable
Using Mirantis Container Cloud, consistent Mirantis Kubernetes Engine clusters can be configured, deployed, observed, and lifecycle-managed across your hybrid or multi-cloud. Centralized provisioning, zero-downtime updates, built-in observability, and a single point of integration for self-service and operations automation streamline Ops. Consistent clusters everywhere simplify CI/CD and help you ship code faster.
Related Post
- Kubernetes for Beginners
- How To Setup A Three Node Kubernetes Cluster For CKA: Step By Step
- Check out and Subscribe to our YouTube channel on “Docker & Kubernetes.”
- Certified Kubernetes Administrator (CKA) Certification Exam
- Certified Kubernetes Administrator (CKA) Certification: Step By Step Activity Guides/Hands-On Lab Exercise
Join FREE Class
Begin your journey towards becoming a Certified Kubernetes Administrator (CKA) with our CKA training program. Register for our FREE class to learn about the roles and responsibilities of a Kubernetes administrator, why you should learn Docker and Kubernetes, job opportunities for Kubernetes administrators in the market, and the hands-on labs you must perform to clear the Certified Kubernetes Administrator (CKA) certification exam.