Helm, a package manager for Kubernetes, has evolved considerably over the past several years. Helm 2 reached end-of-life nearly a year ago, so writing about the move to Helm 3 now makes me rather late to the party. Without going too deep into Helm internals, I will mention just the main feature.
Kubernetes Helm 3 main differences
Helm tiller, goodbye we won’t miss you
Tiller, a component that acted as a middleman and caused many troubles (many helm commands required a reachable cluster; Tiller typically ran with the Kubernetes RBAC cluster-admin role, which was a security concern; etc.), is now gone. There are many more changes; if you are interested, the official pages provide a good summary of Helm 3. In this short blog post, I would like to give a quick overview of how Helm 3 works from a high-level perspective, what the potential Helm operational modes are, and the risks associated with them.
Helm 3 Architecture overview
The Helm 3 architecture is lightweight (compared to Helm 2) and is schematically described in the following picture.
- The Helm binary, installed on the client machine, interacting with the Kubernetes cluster API.
- Helm metadata objects, stored as either ConfigMap or Secret Kubernetes objects (depending on the configuration options).
The Helm binary provides the Helm CLI. Its main function is to render manifests from the Helm chart templates and apply them to the Kubernetes cluster, preserving the revision history for possible rollbacks via the Helm CLI. The Helm metadata objects carry, as a gzipped and base64-encoded payload, all the data needed to render the requested Kubernetes manifests: the Helm package specification and the provided values file, which acts as the set of variables for the package. You can list the history of a Helm release via (e.g. a prometheus deployment in the namespace prometheus):
$ helm history prometheus -n prometheus
REVISION  UPDATED                   STATUS      CHART              APP VERSION  DESCRIPTION
113       Fri Apr 23 12:55:11 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
114       Wed Apr 28 13:18:29 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
115       Wed Apr 28 13:49:13 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
116       Wed Apr 28 15:23:38 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
117       Wed Apr 28 17:03:15 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
118       Fri Apr 30 16:50:13 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
119       Mon May  3 16:10:01 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
120       Fri May  7 11:49:50 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
121       Fri May 14 15:06:13 2021  superseded  prometheus-11.0.0  2.16.0       Upgrade complete
122       Thu May 20 10:45:56 2021  deployed    prometheus-11.0.0  2.16.0       Upgrade complete
and find the corresponding secrets in the namespace where the package is applied:
$ kubectl get secrets -n prometheus
NAME                                 TYPE                 DATA  AGE
sh.helm.release.v1.prometheus.v113   helm.sh/release.v1   1     52d
sh.helm.release.v1.prometheus.v114   helm.sh/release.v1   1     47d
sh.helm.release.v1.prometheus.v115   helm.sh/release.v1   1     47d
sh.helm.release.v1.prometheus.v116   helm.sh/release.v1   1     47d
sh.helm.release.v1.prometheus.v117   helm.sh/release.v1   1     47d
sh.helm.release.v1.prometheus.v118   helm.sh/release.v1   1     45d
sh.helm.release.v1.prometheus.v119   helm.sh/release.v1   1     42d
sh.helm.release.v1.prometheus.v120   helm.sh/release.v1   1     38d
sh.helm.release.v1.prometheus.v121   helm.sh/release.v1   1     31d
sh.helm.release.v1.prometheus.v122   helm.sh/release.v1   1     25d
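These release records can be inspected directly. Below is a minimal sketch of the encoding layers in Python, using a made-up payload rather than a real cluster; the secret name in the comment and the record's fields are illustrative, and a real record also carries the chart, the values, and the rendered manifest:

```python
import base64
import gzip
import json

# Mock of what Helm 3 stores: the release record is a JSON document,
# gzip-compressed and then base64-encoded before being written into the
# Secret's `release` field. The record below is a made-up miniature.
release = {"name": "prometheus", "version": 122, "info": {"status": "deployed"}}
stored = base64.b64encode(gzip.compress(json.dumps(release).encode()))

# Against a real cluster, the payload could be fetched with something like:
#   kubectl get secret sh.helm.release.v1.prometheus.v122 -n prometheus \
#     -o jsonpath='{.data.release}' | base64 -d
# (the extra `base64 -d` strips Kubernetes' own encoding layer and yields
# the equivalent of `stored`). Decoding reverses the remaining layers:
decoded = json.loads(gzip.decompress(base64.b64decode(stored)))
print(decoded["name"], decoded["version"], decoded["info"]["status"])
# prometheus 122 deployed
```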
Helm operational modes
Helm package management mode
When you upgrade or install a package, Helm renders the manifests (performing manifest API validation) and applies them to the Kubernetes cluster API. The Kubernetes API performs the upgrades, fills in missing default options, and may silently ignore unrecognised or wrong settings (depending on configuration). Lower-level Kubernetes objects are derived from higher-level objects, e.g. ReplicaSets from Deployments. All this results in the manifest that is actually running in the Kubernetes cluster, as depicted in the picture. When you ask Helm for a manifest via
helm get manifest release-name -n namespace
it will provide the deployed Kubernetes manifest, i.e. the requested manifest that resides in the Helm Secret metadata. That is not exactly what is running in the cluster. To get the objects that are actually running in the cluster:
kubectl get all -n namespace
This command lists all Kubernetes objects running in the cluster, including the derived ones. By comparing the two outputs, you can see the differences. If you also consider that the Kubernetes cluster itself is upgraded over time, you will realise that what is actually running in the cluster can be surprising. That surprise usually manifests during disaster recovery, which is why it is highly preferable to eliminate Helm abstractions from the deployment chain. Kubernetes natively supports versioning via the revision history feature, which is specific to each Kubernetes object, e.g. a Deployment.
kubectl rollout history deployment deployment_name
This command captures only actual differences in the Deployment manifest, so the revision count might not match the number of revisions in Helm. Also, a Helm revision bundles all the Kubernetes objects of a release into a single revision.
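The gap between the requested manifest and the live object comes largely from API-server defaulting. The following toy sketch illustrates that effect; the defaulting rules here are a hand-picked, simplified subset written for illustration, not real API-server code:

```python
import json

requested = {  # what Helm rendered and submitted to the API server
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "spec": {
        "replicas": 2,
        "template": {"spec": {"containers": [
            {"name": "prometheus", "image": "prom/prometheus:v2.16.0"},
        ]}},
    },
}

def apply_server_defaults(manifest):
    """Very roughly mimic API-server defaulting for a Deployment."""
    live = json.loads(json.dumps(manifest))  # deep copy via JSON round-trip
    spec = live["spec"]
    spec.setdefault("revisionHistoryLimit", 10)
    spec.setdefault("strategy", {"type": "RollingUpdate"})
    for container in spec["template"]["spec"]["containers"]:
        container.setdefault("imagePullPolicy", "IfNotPresent")
    return live

live = apply_server_defaults(requested)
# Fields that exist in the cluster but were never in the chart:
added = sorted(k for k in live["spec"] if k not in requested["spec"])
print(added)  # ['revisionHistoryLimit', 'strategy']
```

In a real cluster the list of defaulted fields is typically much longer (dnsPolicy, serviceAccountName, terminationGracePeriodSeconds, and so on), which is exactly why the `helm get manifest` and `kubectl get` outputs diverge.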
Helm templating mode
To move to deployments of native Kubernetes manifests, Helm offers a feature to render the manifests from a package and values file via the helm template command. That completes the picture of how Helm operates. A simplified view can be summarised as:
1. render manifests
2. kubectl apply rendered manifests
3. store the Helm revision metadata
Helm 3 provides good flexibility for deployments and leaves important decisions to SREs while keeping access to community-maintained packages.
Do you use Helm at all? What is your preferred way? Share your experience here or reach me at @jak_sky and don’t forget to join.