Kubernetes Helm features I wish I had known from day one

Kubernetes Helm is a package manager for Kubernetes deployments and one of the possible tools for managing deployments on the Kubernetes platform. You can think of it as RPM in the Linux world, with package management on top of it like the apt-get utility.

Helm release management, the ability to install or roll back to a previous revision, is one of its strongest selling points, and together with strong community support it makes Helm an exciting option. The number of ready-made packages in particular is amazing and makes it extremely easy to bootstrap a tech stack on a Kubernetes cluster. This article is not meant to be a comparison of Kubernetes tools; instead it describes the experience I had while working with Helm and the limitations I ran into. For someone else those limitations might be perfectly acceptable, but being able to use the tool in those scenarios would be an additional benefit.
Helm is written in Go and uses Go templates, which brings some limitations to the tool. Helm works on the level of string literals, and you need to take care of quotation, indentation etc. to form a valid Kubernetes deployment manifest. This is a deliberate design decision by the creators of Helm, and it is good to be aware of it. Secondly, Helm merges two responsibilities: rendering a template and producing a Kubernetes manifest. Although you can nest templates or create a template hierarchy, the rendered result must be a Kubernetes manifest, which somewhat limits the possible use cases for the tool. The ability to render an arbitrary template would extend its capabilities, as Kubernetes deployment descriptors quite often contain configuration where you would appreciate type validation or the possibility to render it separately. At the time of writing, Helm 2.11 didn't allow that. To achieve it, you can combine Helm with other tools such as Jsonnet and tie them together.
While combining different tools, I found the following advanced Helm features quite useful; they greatly simplified and gave some structure to the resulting Helm templates.

Template nesting (named templates)
Sub-templates can be kept in helper files starting with an underscore, e.g. `_configuration.tpl`, because files starting with an underscore don't need to contain a valid Kubernetes manifest.

{{- define "template.to.include" -}}
The content of the template
{{- end -}}

To use the sub-template, call the include function:

{{ include "template.to.include" .  -}}

Where “.” passes the current context. When the context inside a block is not what you expect, the $ variable, which always refers to the root context, solves the issue, as in the sketch below.
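For example, inside a range block “.” is rebound to the current list item, so the root context has to be passed explicitly (a minimal sketch; .Values.components is a hypothetical values list):

{{- range .Values.components }}
{{ include "template.to.include" $ }}
{{- end }}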

To include a file, all you need to do is specify its path:

{{ $.Files.Get "application.conf" -}}

Embedding a file as configuration

{{ (.Files.Glob "application.conf").AsConfig  }}

It generates the key-value pairs automatically, which is great when declaring a ConfigMap, as in the sketch below.
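A minimal sketch of such a ConfigMap template (the resource name is illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
{{ (.Files.Glob "application.conf").AsConfig | indent 2 }}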

To provide a reasonable error message and make some configuration values mandatory, use the required function:

{{ required "Error message if value not specified" .Values.component.port }}

Accessing a key from the values file which contains a dot, e.g. application.conf:

{{- index .Values.configuration "application.conf" }}

If statements combining multiple values

{{- if or (.Values.boolean1) (.Values.boolean2) (.Values.boolean3) }}

Wrapping each value reference in parentheses solves the issue.

I hope you found these tips useful; if you have any suggestions, let me know in the comments section below.


Apache Kafka foundation of modern data stream processing

Working on my next project with the awesome Apache Kafka, I am again fighting a fundamental misunderstanding of the philosophy of this technology, which usually comes from previous experience with traditional messaging systems. This blog post aims to make the mindset switch as easy as possible, to show where this technology fits in, and to point out which pitfalls to be aware of and how to avoid them. On the other hand, it doesn't try to cover everything or go into much detail.

Apache Kafka is a system optimized for writes, essentially built to keep up with whatever speed or volume producers send. The technology can be configured to meet the required parameters. That is one of the motivations behind naming it after the famous writer Franz Kafka. If you want to understand the philosophy of this technology, you have to look at it with fresh eyes: forget what you know from JMS, RabbitMQ, ZeroMQ, AMQP and others. Even though the usage patterns are similar, the internal workings are completely different, essentially the opposite. The following table provides a quick comparison.

JMS, RabbitMQ, …                 | Apache Kafka
Push model                       | Pull model
Persistent message with TTL      | Retention policy
Guaranteed delivery              | Guaranteed “consumability”
Hard to scale                    | Scalable
Fault tolerance – active-passive | Fault tolerance – ISR (in-sync replicas)

Core ideas in Apache Kafka come from the RDBMS world. I wouldn't describe Kafka as a messaging system, but rather as a distributed database commit log which can be partitioned in order to scale. Once the information is written to the commit log, everybody interested can read it at their own pace and on their own responsibility. It is the consumer's responsibility to read it, not the system's responsibility to deliver the information to the consumer. This is the fundamental twist. The information stays in the commit log for a limited time given by the retention policy, and during this period it can be consumed multiple times by consumers. Because the system has a reduced set of responsibilities, it is much easier to scale. It is also really fast, since sequential reads from disk are comparable to random access memory reads thanks to effective file system caching.
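To make the pull model concrete, below is a minimal consumer sketch in Java (topic name, consumer group id and bootstrap address are illustrative): the client polls at its own pace, and Kafka merely tracks how far this consumer group has read.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class EventsConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-service");         // consumer group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // illustrative topic name
            while (true) {
                // pull model: the consumer asks for records at its own pace, nothing is pushed to it
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}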

[Figure: topic partition commit log and consumer offsets]

The topic partition is the basic unit of scalability when scaling Kafka out. A message in Kafka is a simple key-value pair, both represented as byte arrays. When a producer sends a message to a Kafka topic, the client-side partitioner decides which topic partition the message is persisted to, based on the message key. It is a best practice to send messages that belong to the same logical group to the same partition, as that guarantees clear ordering; see the producer sketch below. On the client side, the exact position of the client is maintained per topic partition for the assigned consumer group. Point-to-point communication is therefore achieved by clients reading from the topic partition with exactly the same consumer group id, while publish-subscribe is achieved by using a distinct consumer group id for each client. The offset is maintained per consumer group id and topic partition and can be reset if needed.
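A minimal producer sketch (topic name, key, value and bootstrap address are illustrative); records with the same key always land in the same partition, which is what gives per-key ordering:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class EventsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // the key ("customer-42") drives the partitioner: same key -> same partition -> ordered
            producer.send(new ProducerRecord<>("events", "customer-42", "order created"));
            producer.flush();
        }
    }
}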

[Figure: point-to-point vs publish-subscribe via consumer group ids]

Topic partitions can be replicated zero or n times and distributed across the Kafka cluster. Each topic partition has one leader and zero or n followers, depending on the replication factor. The leader maintains the set of so-called in-sync replicas (ISR), defined as the replicas whose delay behind the partition leader is lower than replica.lag.time.max.ms. Apache ZooKeeper is used for keeping metadata and offsets.

[Figure: Kafka cluster with replicated topic partitions]

Kafka defines fault tolerance in the following terms:
  • acknowledged – the broker confirms the message write to the producer
  • committed – the message is written to all ISR and consumers can read it
When a producer sends messages to Kafka, it can require different levels of consistency (see the sketch after this list):
  • 0 – the producer doesn't wait for confirmation
  • 1 – wait for acknowledgement from the leader
  • ALL – wait for acknowledgement from all ISR ~ message commit
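Building on the producer sketch above, the consistency level is set with the acks property of the producer configuration:

props.put(ProducerConfig.ACKS_CONFIG, "all"); // "0" = fire and forget, "1" = leader only, "all" = all in-sync replicas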

Apache Kafka is quite flexible in its configuration, and as such it can meet many different requirements in terms of throughput, consistency and scalability. Replication of topic partitions brings fault tolerance, but it also adds a level of complexity, and if you are unaware of the corner cases it might lead to nasty surprises, especially for newcomers. So let's take a closer look at the following scenario.

Assume a topic partition with three replicas. The producer requires the highest consistency level, acks = all. Replica 1 is currently the leader. Message 10 is committed and hence available to clients. Message 11 is neither acknowledged nor committed, due to the failure of replica 3. Once replica 3 is eliminated from the ISR (or taken offline), message 11 becomes acknowledged and committed.

[Figure: replica 3 fails and drops out of the ISR; message 11 becomes committed]

The next time, we lose replica 2: it is eliminated from the ISR and the same situation repeats for messages 12 and 13.
[Figure: replica 2 fails and drops out of the ISR; messages 12 and 13 become committed]
The situation can get a lot worse if the cluster loses the current partition leader, so suppose replica 1 is down now.
[Figure: the partition leader, replica 1, goes down]
What happens if replica 2 or replica 3 comes back online before replica 1? One of them becomes the new partition leader, and we have lost messages 12 and 13 for sure!
[Figure: an out-of-sync replica becomes the new leader; messages 12 and 13 are lost]

Is that a problem? The correct answer is: it depends. There are scenarios where this behaviour is perfectly fine, for example collecting logs from all machines by sending them through Kafka. On the other hand, if we implement event sourcing and we have just lost some events, we cannot recreate the application state correctly, and then we do have a problem. Unfortunately, unless that has changed in the latest releases, this is the default configuration of a freshly installed Kafka cluster; it is a setup that favours availability and throughput over other factors. But Kafka also allows you to configure it so that it meets your consistency requirements, sacrificing some availability in order to achieve that (CAP theorem). To avoid the described scenario, use the following configuration: the producer should require acknowledgement level ALL; do not allow Kafka to elect a new leader from dirty (out-of-sync) replicas (unclean.leader.election.enable = false); use a replication factor of three (default.replication.factor = 3); and require the minimal number of in-sync replicas to be higher than 1 (min.insync.replicas = 2).
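A minimal sketch of such a topic setup using the Java AdminClient (topic name, partition count and bootstrap address are illustrative; unclean leader election and min.insync.replicas can also be set broker-wide):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateConsistentTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address

        // 6 partitions, replication factor 3 (partition count is illustrative)
        NewTopic topic = new NewTopic("events", 6, (short) 3)
                .configs(Map.of(
                        "min.insync.replicas", "2",                  // a write needs at least 2 in-sync replicas
                        "unclean.leader.election.enable", "false")); // never elect an out-of-sync replica as leader

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}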

We have already briefly touched on message delivery to the consumer. Kafka doesn't guarantee that a message was delivered to all consumers; it is the responsibility of the consumers to read messages. So there is no semantics of a persistent message as known from traditional messaging systems. All messages sent to Kafka are persistent, meaning available for consumption by clients according to the retention policy. The retention policy essentially specifies how long messages will be available in Kafka. Currently, there are two basic limits: the space used for keeping messages and the time for which a message should be available at least. The one which is violated first wins.

When data needs to be cleaned from Kafka (triggered by the retention policy), there are two options. The simpler one just deletes the messages. Alternatively, messages can be compacted: compaction is a process after which, for each message key, only one message is kept, usually the latest one. That is actually the second semantics of the key used in the message. Both behaviours are per-topic settings, as in the sketch below.
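Continuing the AdminClient sketch above, the cleanup behaviour and retention limits are plain topic-level configs (topic names and values are illustrative):

// time/size-bounded topic: old messages are simply deleted
NewTopic logs = new NewTopic("app-logs", 6, (short) 3)
        .configs(Map.of(
                "cleanup.policy", "delete",
                "retention.ms", "604800000",        // keep messages for at least ~7 days...
                "retention.bytes", "1073741824"));  // ...or until a partition exceeds ~1 GiB

// compacted topic: only the latest message per key is retained
NewTopic profiles = new NewTopic("user-profiles", 6, (short) 3)
        .configs(Map.of("cleanup.policy", "compact"));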

What features can you not find in Apache Kafka compared to traditional messaging technologies? Probably the most significant is the absence of any selector combined with a blocking listener ("wake me up on receive"). It can certainly be emulated via a correlation id, but the efficiency is on a completely different level: you have to read all messages, deserialize them and filter them yourself, whereas a traditional selector uses a custom field in the message header and doesn't even need to deserialize the message payload. Monitoring Kafka in a production environment essentially boils down to one elementary question: are the consumers fast enough? Hence, monitor the consumer offsets (lag) with respect to the retention policy.

Kafka was created at LinkedIn to solve a specific problem of modern data-driven applications: to fill the gap in traditional ETL processes, which usually work with flat files and database dumps. It is essentially an enterprise service bus for data, for cases where software components need to exchange data heavily. It unifies and decouples data exchange among components. Typical uses are in "big data" pipelines together with Hadoop and Spark, in lambda or kappa architectures. It lays down the foundations of modern data stream processing.

This post just scratches the surface of the basic concepts in Apache Kafka. If you are interested in the details, I really suggest reading the sources I found most useful while learning Kafka.

Prague Java Developer Day 2012 Highlights

The Java philosophy from the very beginning has been "compile once and run everywhere", and this seems to be strengthened even more in the next version of Java. The key message for Java 8 is "write code once and run everywhere", which implies blurring the edge between Java SE and Java ME. The move of Java towards smartphones, tablets etc. is clear. The approach and its impact on language constructs are described briefly below, as presented at the conference. As presented, nothing is cast in stone at the moment, but the main objective is clear.
A huge effort is being spent on a new Java modularization system which would reduce the amount of memory consumed by the JVM, reduce the size of final archives etc. The solution should be backwards compatible, with some open questions about the current organization of the JDK and its potential reorganization. The solution relies on creating new logical units composed of existing packages, classes etc. Details can be found on the project pages – Project Jigsaw.
JavaFX, as a rich client platform, went through a huge rewrite with version 2.0 and now supports full interoperability with the Java Swing library. The JavaFX Scene Builder has been released for the major platforms.
Java 7 made the next step towards better parallelization with the fork-join framework, which helps you take advantage of multiple processors. Java 8 should take matters even further by embedding functional-style programming with lambda expressions – Project Lambda. A small illustration follows.
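For context, a minimal sketch of the lambda syntax as it eventually shipped in Java 8, next to the anonymous-class style it replaces (the list content is illustrative):

import java.util.Arrays;
import java.util.List;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Anna", "Petr", "Jana");

        // before Java 8: an anonymous inner class
        speakers.forEach(new java.util.function.Consumer<String>() {
            @Override
            public void accept(String name) {
                System.out.println(name);
            }
        });

        // Java 8: a lambda expression with the same behaviour and far less ceremony
        speakers.forEach(name -> System.out.println(name));
    }
}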
The last main feature presented for Java 8 was type annotations such as @Nullable, @NotNull etc. This feature is highly desired by the community, as it allows better static code analysis. More info can be found here.
The aforementioned list is neither an exhaustive list of features nor a final list of enhancements in Java 8, but rather a plan.