CD with Spinnaker – evaluation

Spinnaker is one of the popular continuous delivery platforms, originally developed at Netflix. I am evaluating version 1.23.5. Spinnaker is a multi-cloud continuous delivery platform supporting VM- and Kubernetes-based deployments (serverless support is under development). It is an extensible platform, and an HA setup is possible. This post is part of a bigger series with a unified structure.

Overview:
Spinnaker Architecture
Spinnaker basic concepts (Spinnaker started with VM deployments; Kubernetes concepts are mapped onto them in the provider)
Pipeline stages
– Support for a manual judgement stage, though without a detailed permission model for actions (non-OSS plugins exist, e.g. from Armory); an illustrative stage definition is sketched after this list
– Pipeline nesting is supported (either fire-and-forget or wait for completion)
Custom stage development (REST call, Kubernetes job or Jenkins job, …)
– Development of new stages
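
For illustration, a manual judgement stage in Spinnaker's pipeline JSON could look roughly like the sketch below (field values are illustrative; the exact schema is documented on spinnaker.io):

{
  "type": "manualJudgment",
  "name": "Approve production rollout",
  "refId": "2",
  "requisiteStageRefIds": ["1"],
  "instructions": "Check the staging dashboards before approving.",
  "judgmentInputs": []
}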

Authentication & Authorisation (Spinnaker security concepts):
Spinnaker Authentication
Spinnaker Authorisation with Role Based Access
– Spinnaker can be accessed through the GCP Identity-Aware Proxy (or similar services on other cloud providers)
– Authentication via the G-Suite identity provider or GitHub teams. Other options exist as well; see the overview here.
– Authorisation with Google Groups (only a flat structure is supported; role = name of the group), GitHub teams, raw mapping, or others; a configuration sketch follows this list
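
As a sketch from memory of the Halyard commands for Google-based authentication and authorisation (double-check the exact flags against hal's built-in help; they are easy to misremember):

# OAuth2 authentication against the G-Suite identity provider
hal config security authn oauth2 edit --provider google \
  --client-id $CLIENT_ID --client-secret $CLIENT_SECRET
hal config security authn oauth2 enable

# role lookup via Google Groups (role = name of the group)
hal config security authz google edit --admin-username admin@example.com \
  --credential-path /path/to/service-account.json --domain example.com
hal config security authz edit --type google
hal config security authz enable
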
Pipelines are versioned automatically
Pipeline triggers
– Concept of providers which integrate pipelines with target platforms or cloud providers, e.g. the Kubernetes provider v2
– Support for complex deployment strategies
– Management CLIs – Halyard (Spinnaker configuration) and spin (pipeline management); see the sketch after this list
– Deployment to Kubernetes in the form of native manifests; Helm packages are transformed to native manifests in a Helm Bake stage (using Helm's native templating support)
– Terraform stage as a custom stage; an OSS implementation exists
– Wide variety of notification options
– Monitoring support via Prometheus
Backup configuration to storage
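
For pipeline management, the spin CLI covers the basic lifecycle; a quick sketch (application and pipeline names are illustrative):

# list the pipelines of an application
spin pipeline list --application my-app

# save (create or update) a pipeline from a JSON definition
spin pipeline save --file pipeline.json

# trigger an execution
spin pipeline execute --application my-app --name deploy-to-prod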

Pricing:
– There is no price for Spinnaker itself, only for the resources consumed when it is deployed
– Requires VMs and Redis or CloudSQL (Postgres)
– Load balancer
– Spinnaker for GCP if you are running on GCP, where you pay only for the resources needed

Resources:
https://spinnaker.io/
https://www.slideshare.net/Pivotal/modern-devops-with-spinnaker-olga-kundzich
https://spinnaker.io/concepts/ebook/

Summary:
A tool focused on CD with manual approval stages and a security model that can support SOC2 compliance. Good auditability is in place (it is possible to integrate with the GCP audit log). For scripted stages and the manual approval stage, it is possible to specify just a group; this is done at the application/pipeline level. The tool eliminates Helm from the Kubernetes cluster, as it works with native Kubernetes manifests. It promotes immutable infrastructure, as those artefacts are stored for possible rollbacks. Authentication/authorisation seems complex but flexible enough to integrate with a wide variety of systems. There is a pretty active user group offering help. Pricing is based on the resources used.

Kubernetes Helm operational modes: a complete guide (2021)

Helm, a package manager for Kubernetes, went through some evolution in the past several years. It evolved from Helm 2 to Helm 3, and Helm 2 reached end-of-life nearly a year ago, so I am pretty late to the party. Without going too deep into Helm internals, I will mention just the main change: Tiller, a component that acted as a middleman and caused many troubles (requiring cluster access for many helm commands, security issues since Tiller ran with the Kubernetes RBAC cluster-admin role, etc.), is now gone! There are many more changes; if you are interested, the official pages provide a good summary of Helm 3. In this short blog post, I would like to give a quick overview of how Helm 3 works from a high-level perspective, the potential Helm operational modes, and the risks associated with them.
The Helm 3 architecture is lightweight (compared to Helm 2) and is schematically described in the following picture.

  • Helm binary installed on the client machine, interacting with the Kubernetes cluster API.
  • Helm metadata objects stored either as ConfigMap or Secret Kubernetes objects (depending on the configuration options).

The Helm binary provides a CLI to Helm. Its main function is to render manifests based on the Helm manifest templates and apply them to the Kubernetes cluster, preserving the revision history for possible rollbacks via the Helm CLI. In addition, Helm metadata objects, whose payload is a serialized release record (gzipped and base64-encoded), contain all the data needed to render the requested Kubernetes manifests based on the Helm package specification and the provided values file, which acts as the variables of the Helm package. You can list the history of a Helm release via (e.g. a prometheus deployment in the prometheus namespace):

$ helm history prometheus -n prometheus
REVISION	UPDATED                 	STATUS    	CHART            	APP VERSION	DESCRIPTION
113     	Fri Apr 23 12:55:11 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
114     	Wed Apr 28 13:18:29 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
115     	Wed Apr 28 13:49:13 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
116     	Wed Apr 28 15:23:38 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
117     	Wed Apr 28 17:03:15 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
118     	Fri Apr 30 16:50:13 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
119     	Mon May  3 16:10:01 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
120     	Fri May  7 11:49:50 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
121     	Fri May 14 15:06:13 2021	superseded	prometheus-11.0.0	2.16.0     	Upgrade complete
122     	Thu May 20 10:45:56 2021	deployed  	prometheus-11.0.0	2.16.0     	Upgrade complete

and find the corresponding secrets in the namespace where the package is applied:

$ kubectl get secrets -n prometheus
NAME                                                  TYPE                                  DATA   AGE
sh.helm.release.v1.prometheus.v113                    helm.sh/release.v1                    1      52d
sh.helm.release.v1.prometheus.v114                    helm.sh/release.v1                    1      47d
sh.helm.release.v1.prometheus.v115                    helm.sh/release.v1                    1      47d
sh.helm.release.v1.prometheus.v116                    helm.sh/release.v1                    1      47d
sh.helm.release.v1.prometheus.v117                    helm.sh/release.v1                    1      47d
sh.helm.release.v1.prometheus.v118                    helm.sh/release.v1                    1      45d
sh.helm.release.v1.prometheus.v119                    helm.sh/release.v1                    1      42d
sh.helm.release.v1.prometheus.v120                    helm.sh/release.v1                    1      38d
sh.helm.release.v1.prometheus.v121                    helm.sh/release.v1                    1      31d
sh.helm.release.v1.prometheus.v122                    helm.sh/release.v1                    1      25d
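
As a side note, the payload of such a secret can be inspected directly. A sketch, assuming the usual encoding (the Kubernetes API base64-encodes the secret data, which itself is Helm's base64-encoded gzip payload):

kubectl get secret sh.helm.release.v1.prometheus.v122 -n prometheus \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip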

When you upgrade or install a package, Helm renders the manifests (performing manifest API validation) and applies them to the Kubernetes cluster API. The Kubernetes API performs upgrades, fills in missing default options, and may silently ignore unrecognised or wrong settings (depending on configuration). Lower-level Kubernetes objects are derived from higher-level ones, e.g. replica sets from deployments. All this results in the manifest that is actually running in the Kubernetes cluster, as depicted in the picture. When you ask Helm to provide a manifest via

helm get manifest release-name -n namespace

it will return the requested Kubernetes manifest that resides in the Helm secret metadata. That is not exactly what is running in the cluster. To get the objects that are actually running in the cluster:

kubectl get all -n namespace

This command lists all Kubernetes objects running in the cluster, including the derived ones. By comparing those two, you can see the differences (see the comparison sketch below). If you consider that the Kubernetes cluster itself is upgraded over time, you realize that what is actually running in the cluster can be quite surprising. That surprise usually manifests itself during disaster recovery. That is the reason why it is highly preferable to eliminate Helm abstractions from the deployment chain. Kubernetes natively supports versioning via the revision history feature, which is specific to a Kubernetes object, e.g. a deployment.

kubectl rollout history deployment deployment_name

This command captures only actual differences in the deployment manifest, so the revision count might not match the number of revisions from Helm. Also, the Helm revision history bundles all the Kubernetes objects into a single revision.
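
To see the drift concretely, you can compare the manifest Helm stored with the live object. A minimal sketch (the diff is noisy, and prometheus-server is an assumed deployment name from the chart):

# requested state: the manifest Helm stored at install/upgrade time
helm get manifest prometheus -n prometheus > requested.yaml

# live state: what the cluster actually runs, defaults and derived fields included
kubectl get deployment prometheus-server -n prometheus -o yaml > live.yaml

diff requested.yaml live.yaml
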
To move to deployments of native Kubernetes manifests, Helm offers a feature to render the manifests from a package and a values file via the helm template command. That completes the picture of how Helm operates. A simplified view can be summarised as:
1. render manifests
2. kubectl apply rendered manifests
3. store the helm revision metadata
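
A minimal sketch of the native-manifest mode described above, which keeps Helm only as a rendering engine (the chart reference is illustrative):

# the chart lives in a Helm repository; add it first if needed
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# 1. render manifests locally, no cluster access needed
helm template prometheus prometheus-community/prometheus \
  -n prometheus -f values.yaml > rendered.yaml

# 2. apply the rendered manifests natively
kubectl apply -n prometheus -f rendered.yaml

# 3. rollbacks now rely on native revision history instead of Helm metadata
kubectl rollout history deployment prometheus-server -n prometheus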

Helm 3 provides good flexibility for deployments and leaves important decisions to SREs while keeping access to community-maintained packages.

TIP: If you develop some Helm packages in-house, adding a helm lint command to your PR checks allows you to discover issues during the PR review process. Don’t forget to check Helm’s advanced features.
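
A minimal check could look like this (the chart path is illustrative):

# lint the chart with the same values you deploy with
helm lint charts/my-service -f values.yaml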

If you like the content or have some experiences or questions, don’t forget to leave a comment below and follow me on Twitter.


Kubernetes Helm features I would wish to know from day one

Kubernetes Helm is a package manager for Kubernetes deployments. It is one of the possible tools for deployment management on the Kubernetes platform. You can imagine it as RPM in the Linux world, with package management on top of it like the apt-get utility.

Helm release management, the ability to install or roll back to a previous revision, is one of the strongest selling points of Helm, and together with strong community support it makes Helm an exciting option. Especially the number of ready-made packages is amazing and makes it extremely easy to bootstrap a tech stack on a Kubernetes cluster. But this article is not supposed to be a comparison of Kubernetes tools; instead, it describes the experience I’ve made while working with Helm and the limitations I found, which for someone else might be quite acceptable, but having the ability to use the tool in those scenarios would be an additional benefit.
Helm is written in Go and uses Go templates, which brings some limitations to the tool. Helm works on the level of string literals, and you need to take care of quotation, indentation, etc. to form a valid Kubernetes deployment manifest. This is a strong design decision by the creators of Helm, and it is good to be aware of it. Secondly, Helm merges two responsibilities: rendering a template and providing a Kubernetes manifest. Though you can nest templates or create a template hierarchy, the rendered result must be a Kubernetes manifest, which somewhat limits the possible use cases for the tool. Having the ability to render any template would extend the tool’s capabilities, as quite often the Kubernetes deployment descriptors contain some configuration where you would appreciate type validation or the possibility to render it separately. At the time of writing this article, Helm version 2.11 didn’t allow that. To achieve this, you can combine Helm with other tools like Jsonnet and tie them together.
While combining different tools, I found the following Helm advanced features quite useful; they greatly simplified the resulting Helm templates and gave them some structure.

Template nesting (named templates)
Sub-templates can be structured in helper files starting with an underscore, e.g. `_configuration.tpl`, as files starting with an underscore don’t need to contain a valid Kubernetes manifest.

{{- define "template.to.include" -}}
The content of the template
{{- end -}}

To use the sub-template, you can write

{{ include "template.to.include" .  -}}

Where “.” passes the current context. When the current context is not what the sub-template needs (e.g. inside a range loop), the trick with $, which always refers to the root context, solves the issue (see the sketch below).
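
A minimal sketch, assuming a components list in the values file:

{{- range .Values.components }}
{{- /* inside range, "." is the current list item; "$" still points to the root context */}}
{{ include "template.to.include" $ }}
{{- end }}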

To include a file, all you need is to specify its path:

{{ $.Files.Get "application.conf" -}}

Embedding a file as configuration:

{{ (.Files.Glob "application.conf").AsConfig  }}

It generates the key-value pairs automatically, which is great when declaring a ConfigMap; see the sketch below.
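
For example, a ConfigMap declaration could look like this (a chart-level application.conf file is assumed):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
{{ (.Files.Glob "application.conf").AsConfig | indent 2 }}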

To provide a reasonable error message and make some config values mandatory, use the required function:

{{ required "Error message if value not specified" .Values.component.port }}

Accessing a key from the values file which contains a dot, e.g. application.conf:

{{- index .Values.configuration "application.conf" }}

If statements combining multiple values:

{{- if or (.Values.boolean1) (.Values.boolean2) (.Values.boolean3) }}

Wrapping each reference to a value in parentheses solves the issue.

I hope that you found these tips useful. If you have any suggestions, leave a comment below, or you can reach me with further questions on Twitter.
