The Twelve Factors of Kubernetes

_“Kubernetes is the Linux of the cloud.”_ This quote by Kelsey Hightower during KubeCon 2017 in Austin emphasizes the rise of Kubernetes among modern cloud infrastructures.

This rise is partly driven by the developer community, but also by tech giants such as Google, Amazon, Alibaba or Red Hat, who have invested heavily in this technology and keep contributing to its improvement and smoothing its integration into their respective ecosystems. EKS for AWS, GKE for Google and AKS for Azure are good illustrations of that.

This article lists twelve basic rules and good practices to know when starting to use Kubernetes. The list will be of interest to anyone, developer or sysadmin, who uses K8s daily.

I. 1 Pod = 1 or n containers

Naming matters, to make sure everyone is on the same page about what’s what. In the Kubernetes world, a Pod is the smallest computing unit deployable on a cluster. It is made of one or more containers. The containers within a Pod share the same IP address and storage volumes, and are always co-located on the same node of the cluster.
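
As a minimal sketch (names and images are illustrative), a Pod hosting a main container and a sidecar could be described like this; both containers share the Pod’s IP and volumes:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web                 # main application container
      image: nginx:1.15
      ports:
        - containerPort: 80
    - name: log-agent           # sidecar co-located in the same Pod
      image: busybox:1.29
      command: ["sh", "-c", "tail -f /dev/null"]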

To go further: https://kubernetes.io/docs/concepts/workloads/pods/pod/

II. Labels everywhere

Most Kubernetes resources can be labelled: Pods, Nodes, Namespaces, etc. Labelling is done by adding key-value pairs to the resource’s metadata. Labelling components serves two purposes:

  • A technical use, as many inter-dependent resources use labels to identify one another. For instance, I can label some of my Nodes “zone: front” because these nodes are likely to host web applications. I then assign an affinity to my frontend Pods so that they get scheduled on the nodes labelled “zone: front” (see the sketch after the command below).
  • An organisational use: assigning labels makes it easy to identify resources and query them efficiently. For instance, to retrieve all nodes in the front zone, I can run:

> kubectl get nodes -l zone=front
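
To illustrate both uses, here is a minimal sketch (the node name is illustrative): the node is first labelled with kubectl, and a Pod then targets the labelled nodes through a nodeSelector, the simplest form of node affinity:

> kubectl label node mynode zone=front

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend        # organisational label, also usable by Services
spec:
  nodeSelector:
    zone: front          # technical label: schedule only on “front” nodes
  containers:
    - name: web
      image: nginx:1.15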

To go further: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/

III. Infrastructure as Code and versioning

All Kubernetes resources can be written as YAML or JSON files. Creating a resource from a file is done with the command line:

> kubectl apply -f MYFILE.{yaml,json}

The apply command does a smart diff, so it only creates the resource if it wasn’t already there, updates it if the file was changed, and does nothing otherwise.

The use of files makes it possible to track, version and reproduce the complete system state at any time. It is therefore a commonly adopted practice to version the K8s resource description files with the same rigor as application code.
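
A typical workflow, assuming for illustration that the manifests live in a k8s/ directory of the application repository:

> git add k8s/deployment.yaml
> git commit -m "web: bump image to 1.2.0"
> kubectl apply -f k8s/deployment.yaml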

To go further: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#kubectl-apply

IV. A Service to expose

Pods never communicate directly with one another; they go through a Service, because Pods are volatile and short-lived across the cluster. During some maintenance operations, Pods may migrate from one node to another. These same Pods may also reboot, scale out, or even be destroyed, when upgrading for instance. In each of these cases, the Pod’s IP changes, as well as its name. The Service is a Kubernetes resource that sits in front of the Pods and exposes some of their ports on the network. Services have a fixed name and a fixed dedicated IP, so you can reach your Pods whatever their IPs or names. The matching between Services and Pods relies on labels. When a Service matches several Pods, it load-balances the traffic with a round-robin algorithm.
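
As a minimal sketch (name and ports are illustrative), a Service exposing the Pods labelled “app: frontend”:

apiVersion: v1
kind: Service
metadata:
  name: frontend         # stable DNS name, whatever happens to the Pods
spec:
  selector:
    app: frontend        # matches the Pods carrying this label
  ports:
    - port: 80           # port exposed by the Service
      targetPort: 8080   # port the containers actually listen on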

To go further: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose-intro/

V. ConfigMap and Secret to configure

ConfigMaps and Secrets are Kubernetes resources that manage the Pods’ configuration. The configuration is described as a set of key-value pairs. These configurations are then injected into the Pods as environment variables or as configuration files mounted in the containers. The use of these resources makes it possible to decouple the Pod description from any configuration. Whether they are written in YAML or JSON, the configurations are versionable (except for Secrets, which hold sensitive information).
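
As a minimal sketch (names and values are illustrative), a ConfigMap and a Pod consuming all its keys as environment variables:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: db.internal    # illustrative values
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.15
      envFrom:
        - configMapRef:
            name: app-config    # each key becomes an environment variable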

To go further: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

VI. Limits and Requests to control resource utilization

Among the many Pod configuration options, it is possible to define the resources requested and usable by the Pod (CPU and memory):

  • requests: this configuration is applicable to CPU and memory. It defines the minimum resources the Pod needs to run. These values are used by the scheduler when assigning Pods to nodes. They also enable auto-scaling: the target CPU utilisation is based on the requested CPU, and the Pod autoscaler (which is also a Kubernetes resource) will automatically scale the number of Pods up or down to reach it.
  • limits: just like requests, this configuration is applicable to CPU and memory. It defines the maximum amount of resources usable by the Pod. Defining these parameters prevents a failing Pod from compromising the whole cluster by consuming all its resources.

If the Kubernetes cluster administrator has defined resource quotas on a Namespace, setting requests and limits becomes mandatory, otherwise the Pod won’t be scheduled. For cases where these values aren’t set, the administrator can also define default values in a K8s resource named LimitRange.
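
A minimal sketch of a container declaring both requests and limits (the values are illustrative and depend entirely on the application):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.15
      resources:
        requests:
          cpu: 100m        # minimum guaranteed: 0.1 CPU core
          memory: 128Mi
        limits:
          cpu: 500m        # hard ceiling: 0.5 CPU core
          memory: 256Mi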

To go further: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

VII. Think of the Pods lifecycle

It is possible to deploy a Pod by describing its configuration in a YAML/JSON file and injecting it into K8s with the kubectl client. Be careful: this method does not benefit from the Pod resilience that K8s offers by design. If the Pod crashes, it won’t be automatically replaced. It is recommended to use a Deployment instead. This K8s object lets you describe a Pod along with its configuration while hiding the complexity of resilience. Indeed, the Deployment generates a ReplicaSet, whose only goal is to make sure the number of running Pods matches the desired number of Pods. It also provides the ability to scale Pods at will. Finally, the Deployment lets you configure deployment strategies; it is for instance possible to define a rolling-update strategy for rolling out a new version of a Pod’s container.

The following command starts Pods (for instance, Nginx):

> kubectl run nginx --image=nginx --replicas=2

This command generates a Deployment with a Pod running the Nginx container. The same Deployment also generates the ReplicaSet, which ensures 2 Pods are running at all times.
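
The declarative equivalent, which is the form you would version in a file (see rule III), could look like this minimal sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2                # the generated ReplicaSet maintains 2 Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx           # must match the selector above
    spec:
      containers:
        - name: nginx
          image: nginx:1.15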

To go further: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

VIII. LivenessProbe to monitor

As we saw, the ReplicaSet ensures the number of running Pods matches the number of desired Pods, replacing any failing Pods. It is also possible to configure Pod resiliency at the functional level. For that there is the LivenessProbe option: it describes a check (an HTTP call, a TCP connection or a command) and automatically restarts the container if the check fails.

Just as the LivenessProbe monitors the health of a running application, the ReadinessProbe determines when an application is ready to receive traffic after startup. This is useful for an application that runs tasks before it actually starts serving (e.g. data injection).
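
A minimal sketch combining both probes (the /healthz and /ready endpoints are illustrative and must exist in your application):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.15
      livenessProbe:             # the container is restarted if this fails
        httpGet:
          path: /healthz         # illustrative endpoint
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 10
      readinessProbe:            # traffic is only sent once this passes
        httpGet:
          path: /ready           # illustrative endpoint
          port: 80
        periodSeconds: 5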

To go further: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

IX. Latest is not a version

K8s is a container orchestrator, and the image to deploy is specified in the Pod configuration. An image name is composed as follows:

<registry name>/<image name>:<tag or version>

It is common practice to increment the image version just like you increment the version of a code base, and to also assign the tag “latest” to the most recently built image. Configuring a Pod to deploy an image with the tag “latest” is not a good practice, for several reasons:

  • No control over the deployed version, and possible side effects from newer versions of the image’s components.
  • “Latest” may simply be buggy.
  • Its interaction with the image “pull” strategy. Pods can be configured with an image pull strategy through the “imagePullPolicy” option, which can take 3 values:
    • IfNotPresent: only pull the image if it isn’t already present locally on the node
    • Always: always pull the image
    • Never: never pull the image

IfNotPresent is the default option for version-tagged images (for the “latest” tag, K8s defaults to Always). So if we deploy an image with the tag “latest” under the IfNotPresent policy, Kubernetes fetches the “latest” image at the first deployment; then, as the image is locally present on the node, subsequent deployments won’t download it again from the registry, even if a new “latest” image was pushed.
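
In practice, the safe combination is an explicit, immutable version tag; a minimal sketch (registry and image names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: myregistry/web:1.4.2     # pinned version, never “latest”
      imagePullPolicy: IfNotPresent   # safe here because the tag never moves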

To go further: https://kubernetes.io/docs/concepts/configuration/overview/#container-images

X. Pods are stateless

Pods are short-lived and volatile: they can be moved to other nodes during maintenance operations, deployments or reboots. They can also scale on demand, which is a big perk of K8s-like systems. The inbound traffic to Pods is load-balanced by the Service in front of them. That’s why applications hosted on K8s must rely on a third-party service to store their data. For instance, an e-commerce website storing session information (say, a shopping cart) as files within a container will lose that data whenever the Pod scales or restarts.

The solutions to this issue vary depending on the use case. For instance, a key-value store (Redis, Memcached) can be used to hold session data. For a file hosting application, an object storage solution such as AWS S3 will be favored.
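
As a sketch, assuming the application reads a hypothetical SESSION_STORE_HOST variable, session state can be pushed to a Redis instance exposed by a Service instead of the container’s filesystem:

apiVersion: v1
kind: Pod
metadata:
  name: shop
spec:
  containers:
    - name: shop
      image: myshop:2.1               # illustrative application image
      env:
        - name: SESSION_STORE_HOST    # hypothetical variable read by the app
          value: redis                # DNS name of the Service fronting Redis
        - name: SESSION_STORE_PORT
          value: "6379"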

To go further: https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/

XI. Distributed Storage Volumes

As we have seen, your applications should be stateless. You may however need to deploy stateful components requiring a storage layer, such as a database. Kubernetes provides the ability to mount volumes within the Pods. It then becomes possible to mount a volume provided by AWS, Azure or Google’s storage services. The storage is then external to your cluster and remains attached to the Pod even if it is redeployed to a different node. It is also possible to mount a volume shared between the node where the Pod is deployed and the Pod itself (a hostPath volume), but this solution should be avoided: if the Pod is migrated to another node, it loses access to all the data stored on its previous host.
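
A minimal sketch using a PersistentVolumeClaim, so that the cluster’s storage provider (AWS EBS, GCE PD, Azure Disk…) backs the volume; names and sizes are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi            # provisioned by the cluster’s storage backend
---
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:10
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data     # the data survives rescheduling to another node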

To go further: https://kubernetes.io/docs/concepts/storage/volumes/

XII. My applications are twelve-factor apps

The application code that will eventually be deployed to a Kubernetes cluster has to respect a set of rules. The twelve-factor app methodology is a set of good practices created by Heroku, a PaaS provider hosting applications as containers; these principles describe how to best write and operate code meant to be containerized.

The main recommendations are:

  • Code versioning (Git)
  • Providing a health-check URL
  • Stateless application
  • Environment-variable-based configuration
  • Logs written to standard output or standard error
  • Degraded mode management
  • Graceful start/stop

To go further: https://blog.octo.com/applications-node-js-a-12-facteurs-partie-1-une-base-de-code-saine/