How to optimise code to be JIT friendly

In the previous blog post, we measured the effect of the simplest JVM JIT optimisation technique, method inlining. The code example was a bit unnatural – super simple Scala code written just to demonstrate method inlining. In this post, I would like to share a general approach I use when I want to check how JIT treats my code, or whether there is some possibility to improve its performance with regard to JIT. Even method inlining requires the code to meet certain criteria, such as the bytecode length of the inlined methods. For this purpose, I regularly use a great OpenJDK project called JITWatch, which comes with a bunch of handy JIT-related tools. I am pretty sure there are more tools out there, and I will be more than happy if you share your approaches to dealing with JIT in the comment section below the article.
Java HotSpot is able to produce a very detailed log of what the JIT compiler is doing and why. Unfortunately, the resulting log is very complex and difficult to read; reading it requires an understanding of the techniques and theory underlying JIT compilation. A free tool like JITWatch processes those logs and abstracts this complexity away from the user.

In order to produce a log suitable for JITWatch investigation, the tested application needs to be run with the following JVM flags:

-XX:+UnlockDiagnosticVMOptions
-XX:+LogCompilation
-XX:+TraceClassLoading

These settings produce a log file named hotspot_pidXXXXX.log. For the purposes of this article, I reused the code from the previous blog post, located on my GitHub account, with the JVM flags enabled in build.sbt.
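For reference, a minimal build.sbt fragment enabling those flags for a forked run could look like this (a sketch; the exact settings in the repository may differ):

fork := true

javaOptions ++= Seq(
  "-XX:+UnlockDiagnosticVMOptions",
  "-XX:+LogCompilation",
  "-XX:+TraceClassLoading"
)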
To look into the generated machine code in JITWatch, we need to install the HotSpot Disassembler (HSDIS) into $JAVA_HOME/jre/lib/server/. For Mac OS X a prebuilt binary can be downloaded from here; try renaming it to hsdis-amd64.dylib. To include machine code in the generated JIT log, we need to add the JVM flag -XX:+PrintAssembly.
[info] 0x0000000103e5473d: test %r13,%r13
[info] 0x0000000103e54740: jne 0x0000000103e5472a
[info] 0x0000000103e54742: mov $0xfffffff6,%esi
[info] 0x0000000103e54747: mov %r14d,%ebp
[info] 0x0000000103e5474a: nop
[info] 0x0000000103e5474b: callq 0x0000000103d431a0 ; OopMap{off=112}
[info] ;*invokevirtual inc
[info] ; - com.jaksky.jvm.tests.jit.IncWhile::testJit@12 (line 19)
[info] ; {runtime_call}
[info] 0x0000000103e54750: callq 0x0000000102e85c18 ;*invokevirtual inc
[info] ; - com.jaksky.jvm.tests.jit.IncWhile::testJit@12 (line 19)
[info] ; {runtime_call}
[info] 0x0000000103e54755: xor %r13d,%r13d
We run JITWatch via ./launchUI.sh, configure the source files and the target generated class files in the configuration dialog, and finally open the prepared JIT log and hit Start.

The most interesting view from our perspective is the TriView, where we can see the source code, JVM bytecode and native code side by side. For this particular example we disabled method inlining via the JVM flag -XX:CompileCommand=dontinline,com/jaksky/jvm/tests/jit/IncWhile.inc.

When we compare this with the case when the method body of IncWhile.inc is inlined, the native code is bigger – 216 bytes compared to 168 bytes – with the same bytecode size.

The Compile Chain also provides a great view of what is happening with the code, and the inlining report gives a clear overview of which calls were inlined and why.
Here we can see the effect of tiered compilation described in the JIT optimisation post: compilation starts with the client (C1) JIT optimisation and then switches to the server (C2) optimisation. The same or even better view can be found in the Compiler Thread activity, which provides a timeline view; to refresh your memory, check the overview of JVM threads. Note: standard Java library code is subject to JIT optimisation too, which is why there is so much compilation activity here.
JITWatch is a really awesome tool and provides many other views which it doesn't make sense to screenshot exhaustively, e.g. code cache allocation, nmethods etc. For detailed information, I really suggest reading the JITWatch wiki pages. Now the question is: how do we write JIT-friendly code? Here comes the pure jewel of JITWatch, the Suggestion Tool – that is why I like JITWatch so much. For the demonstration, I selected a somewhat more complex problem: the N-Queens problem.
The Suggestion Tool clearly describes why certain compilations failed and what the exact reason was. It is a coincidence that in this example we hit inlining again – there is definitely more going on in the JIT – but this window provides a clear view of how we can possibly help the JIT.
Another great tool which is part of JITWatch is the JarScan tool. This utility scans a list of jars and counts the bytecode size of every method and constructor. Its purpose is to highlight methods that are bigger than the HotSpot threshold for inlining hot methods (35 bytes by default), so it provides hints about where to focus benchmarking and whether decomposing code into smaller methods brings a performance gain. The hotness of a method is determined by a set of heuristics including call frequency etc., but what can exclude a method from inlining is its size. Of course, the mere fact that a method breaches the size limit for inlining doesn't automatically mean it is a performance bottleneck. JarScan is a static analysis tool with no knowledge of runtime statistics, hence no knowledge of real method hotness.
jakub@MBook ~/Development/GIT_REPO (master) $ ./jarScan.sh --mode=maxMethodSize --limit=35 ./chess-challenge/target/scala-2.12/classes/
"cz.jaksky.chesschallenge","ChessChallange$","delayedEndpoint$cz$jaksky$chesschallenge$ChessChallange$1","",1281
"cz.jaksky.chesschallenge.solver","ChessBoardSolver$","placeFigures$1","scala.collection.immutable.List,scala.collection.immutable.Set",110
"cz.jaksky.chesschallenge.solver","ChessBoardSolver$","visualizeSolution","scala.collection.immutable.Set,int,int",102
"cz.jaksky.chesschallenge.domain","Knight","check","cz.jaksky.chesschallenge.domain.Position,cz.jaksky.chesschallenge.domain.Position",81
"cz.jaksky.chesschallenge.domain","Queen","equals","java.lang.Object",73
"cz.jaksky.chesschallenge.domain","Rook","equals","java.lang.Object",73
"cz.jaksky.chesschallenge.domain","Bishop","equals","java.lang.Object",73
"cz.jaksky.chesschallenge.domain","Knight","equals","java.lang.Object",73
"cz.jaksky.chesschallenge.domain","King","equals","java.lang.Object",73
"cz.jaksky.chesschallenge.domain","Position","Position","int,int",73
"cz.jaksky.chesschallenge.domain","Position","equals","java.lang.Object",72
To wrap up, JITWatch is a great tool which provides insight into the HotSpot JIT compilations happening during program execution, and it can help you understand how decisions made at the source code level affect the performance of the program.

JVM JIT compilation as a way of performance optimisation

The previous article, Structure of JVM – Java memory model, briefly mentioned bytecode execution modes, and the article JVM internal threads provided additional insight into the internal architecture of JVM execution. In this article, we focus on Just In Time (JIT) compilation and some of its basic optimisation techniques. We also discuss the performance impact of one optimisation technique, namely method inlining. In the remainder of this article we focus solely on the HotSpot JVM; however, the principles are valid in general.
HotSpot JVM is a mixed-mode VM, which means that it starts off interpreting the bytecode but can compile code into very highly optimised native machine code for faster execution. This optimised code runs extremely fast, and its performance can be compared with C/C++ code. JIT compilation happens on a per-method basis at runtime, after the method has been run a number of times and is considered hot. The compilation into machine code happens on a separate JVM thread and does not interrupt the execution of the program: while the compiler thread is compiling a hot method, the JVM keeps using the interpreted version of the method until the compiled version is ready. Thanks to the runtime characteristics of the code, the HotSpot JVM can make sophisticated decisions about how to optimise it.
Java HotSpot VM is capable of running in two separate modes (C1 and C2), and each mode is preferred in a different situation:
  • C1 (-client) – used for applications where quick startup and solid optimisation are needed; GUI applications are typically good candidates.
  • C2 (-server) – for long-running server applications.
Those two compiler modes use different techniques for JIT compilation, so it is possible to get very different machine code for the same method. Modern Java applications can take advantage of both compilation modes: starting from Java SE 7, a feature called tiered compilation is available. An application starts with C1 compilation, which enables fast startup, and once the application is warmed up, the C2 compiler takes over. Since Java SE 8, tiered compilation is the default. Server optimisation is more aggressive and based on assumptions which may not always hold, so these optimisations are always protected with guard conditions that check whether the assumption is still correct; if an assumption is not valid, the JVM reverts the optimisation and drops back to interpreted mode. In server mode, the HotSpot VM runs a method in interpreted mode 10,000 times before compiling it (adjustable via -XX:CompileThreshold=5000). Changing this threshold should be considered thoroughly, as the HotSpot VM works best when it can accumulate enough statistics to make an intelligent decision about what to compile. If you want to inspect what is being compiled, use -XX:+PrintCompilation.
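As a sketch, the compilation log can be switched on for any application straight from the command line (the jar name is illustrative):

java -XX:+PrintCompilation -jar application.jar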
Among the most common JIT compilation techniques used by the HotSpot VM is method inlining, the practice of substituting the body of a method into the places where the method is called. This technique saves the cost of calling the method; in HotSpot there is a limit on the size of methods that can be substituted. The next commonly used technique is monomorphic dispatch, which relies on the observation that, most of the time, only one receiver type flows through a given call site. Thanks to this observation the exact method definition is known without checking, the overhead of virtual method lookup can be eliminated, and the JIT compiler can emit faster, optimised machine code (see the sketch below). There are many other optimisation techniques, such as loop optimisation, dead code elimination, intrinsics and others.
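A minimal Scala sketch of a call site that stays monomorphic (the types here are made up for illustration):

trait Codec {
  def encode(value: Int): Int
}

class ShiftCodec extends Codec {
  def encode(value: Int): Int = value << 1
}

class Pipeline {
  // If only ShiftCodec instances ever reach this call site, the JIT can
  // devirtualize codec.encode and potentially inline it, guarded by a
  // cheap type check in case another Codec implementation shows up later.
  def run(codec: Codec, values: Array[Int]): Int = {
    var sum = 0
    var i = 0
    while (i < values.length) {
      sum += codec.encode(values(i))
      i += 1
    }
    sum
  }
}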
The performance gain from the inlining optimisation can be demonstrated with simple Scala code:
class IncWhile {

  def main(): Int = {
    var i: Int = 0
    var limit = 0

    while (limit < 1000000000) {
      i = inc(i)
      limit = limit + 1
    }
    i
  }

  def inc(i: Int): Int = i + 1
}

The method inc is eligible for inlining, as its body is smaller than 35 bytes of JVM bytecode (the actual size of the inc method is 9 bytes). The inlining optimisation can be verified by looking into the JIT-optimised machine code.
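The bytecode of inc can be inspected with javap against the compiled classes (the path below assumes a default sbt layout and is illustrative):

javap -c target/scala-2.12/classes/com/jaksky/jvm/tests/jit/IncWhile.class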

The difference is obvious when compared to the machine code produced when inlining is disabled via -XX:CompileCommand=dontinline,com/jaksky/jvm/tests/jit/IncWhile.inc.

The difference in runtime characteristics is also significant, as the benchmark results show. With inlining disabled:

[info] Result "com.jaksky.jvm.tests.jit.IncWhile.main":
[info] 2112778741.540 ±(99.9%) 9778298.985 ns/op [Average]
[info] (min, avg, max) = (2040573480.000, 2112778741.540, 2192003946.000), stdev = 28831537.237
[info] CI (99.9%): [2103000442.555, 2122557040.525] (assumes normal distribution)
[info] # Run complete. Total time: 00:08:03
[info] Benchmark Mode Cnt Score Error Units
[info] IncWhile.main avgt 100 2112778741.540 ± 9778298.985 ns/op

When inlining is enabled, the JVM JIT is also able to apply further optimisations, such as loop optimisations, which might cause our whole loop to be eliminated because it is easily predictable. We would then get a time of around 3 ns, which is unreal for performing a billion operations on a ~1 GHz processor – a clear sign the loop was optimised away. To disable most loop optimisations, use the -XX:LoopOptsCount=0 JVM option.

[info] Result "com.jaksky.jvm.tests.jit.IncWhile.main":
[info] 332699064.778 ±(99.9%) 3485503.823 ns/op [Average]
[info] (min, avg, max) = (316312877.000, 332699064.778, 358738827.000), stdev = 10277087.396
[info] CI (99.9%): [329213560.955, 336184568.600] (assumes normal distribution)
[info] # Run complete. Total time: 00:04:55
[info] Benchmark Mode Cnt Score Error Units
[info] IncWhile.main avgt 100 332699064.778 ± 3485503.823 ns/op
So the performance gain from inlining a method body can be quite significant: roughly 2 seconds vs 330 milliseconds.
In this post, we discussed the mechanics of Java JIT compilation and some of the optimisation techniques it uses. We particularly focused on one of the simplest optimisation techniques, method inlining, and demonstrated the performance gain brought by eliminating a method call represented by the invokevirtual bytecode instruction. Scala also offers a special annotation, @inline, which can help with the performance aspects of code under development. All the code for running the experiments is available online in my GitHub account.
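For completeness, the annotation usage looks as follows (a hint to the Scala compiler, independent of the JVM JIT; the class name is made up):

class IncWhileHinted {
  @inline final def inc(i: Int): Int = i + 1
}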


HotSpot JVM internal threads

In Structure of Java Virtual Machine we scratched the surface of the class file structure and how it is connected to the Java memory model via the class loading process. We also briefly discussed the bytecode structure and its execution, including a short introduction to Just In Time runtime optimisation. In this post we look more at the internals of the execution engine; however, there is no ambition to substitute the detailed VM implementation documentation for the HotSpot JVM, just to provide enough detail to gain the bigger picture.

The basic threading model in the HotSpot JVM is a one-to-one mapping between Java threads (instances of java.lang.Thread) and native operating system threads. The native thread is created when the Java thread is started and reclaimed once it terminates. The operating system is responsible for scheduling all threads and dispatching them to any available CPU. The relationship between Java thread priorities and operating system thread priorities varies across operating systems.

HotSpot provides monitors by which threads running application code may participate in a mutual exclusion (mutex) protocol. A monitor is either locked or unlocked, and only one thread may own it at any time. Only after acquiring ownership of the monitor may a thread enter a critical section protected by it. In Java, critical sections are referred to as synchronised blocks, delineated by the synchronized keyword.
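As a small illustration (a hypothetical counter class), every Java object can act as a monitor:

class Counter {
  private var value = 0

  // The executing thread acquires the monitor of `this` on entry and
  // releases it on exit, so only one thread at a time runs the body.
  def increment(): Int = this.synchronized {
    value += 1
    value
  }
}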

Apart from application threads, JVM contains also internal threads which can be categorised into following groups:

  • VM thread – responsible for executing VM operations
  • Periodic task thread – executes periodic operations within the VM (singleton instance of WatcherThread)
  • GC threads – threads of different types supporting parallel and concurrent garbage collection
  • Compiler threads – perform the compilation of bytecode to native code at runtime (C1 and C2 JIT compiler threads)
  • Signal dispatcher thread – waits for directed signals and dispatches them to a Java-level signal handling method


The VM thread spends its time waiting for requested operations to appear in the operation queue (VMOperationQueue). Operations are typically passed to the VM thread because they require the VM to reach a safepoint before they can be executed. When the VM is at a safepoint, all threads inside the VM are blocked, and any threads executing native code are prevented from returning into the VM while the safepoint is in progress. This means that a VM operation can be executed knowing that no thread is in the middle of modifying the heap, and all threads are in a state such that their Java stacks are unchanging and can be examined.

The most familiar VM operations are related to garbage collection, particularly the stop-the-world phase common to many garbage collection algorithms. Other VM operations include thread stack dumps, thread suspension or stopping, inspection or modification via JVMTI etc. VM operations can be synchronous or asynchronous.

Safepoints are initiated using a cooperative, polling-based mechanism: each thread effectively asks, "Should I block for a safepoint?" A common moment when this happens is during a thread state transition. Threads executing interpreted code don't usually ask the question; instead, when a safepoint is requested, the interpreter switches to a different dispatch table which includes that check, and when the safepoint is over, the dispatch table is switched back. Once a safepoint has been requested, the VM thread must wait until all threads are known to be in a safepoint-safe state before proceeding with the operation. During the safepoint, a global thread lock is used to block any threads that were running; it is released when the operation completes.

Structure of Java Virtual Machine (JVM)

Java-based applications run in the Java Runtime Environment (JRE), which consists of a set of Java APIs and the Java Virtual Machine (JVM). The JVM loads an application via class loaders and runs it with the execution engine.

The JVM runs on all kinds of hardware, executing Java bytecode without any change to the executable code. The VM implements the Write Once, Run Anywhere principle, also called the platform independence principle. Just to sum up the key JVM design principles:
  • Platform independence
    • Clearly defined primitive data types – languages like C or C++ have primitive data type sizes that depend on the platform; Java is unified in that matter.
    • Fixed byte order, big endian (network byte order) – Intel x86 uses little endian while RISC processors typically use big endian; Java uses big endian.
  • Automatic memory management – class instances are created by the user and automatically reclaimed by garbage collection.
  • Stack-based VM – typical computer architectures such as Intel x86 are register-based, whereas the JVM is based on a stack.
  • Symbolic references – all types except primitive ones are referred to via symbolic names instead of direct memory addresses.

Java uses bytecode as an intermediate representation between source code and the machine code which runs on hardware. A bytecode instruction is represented by a one-byte opcode, e.g. getfield 0xb4, invokevirtual 0xb6, hence there is a maximum of 256 instructions. If an instruction doesn't need operands, the next instruction immediately follows; otherwise operands follow the instruction according to the instruction set specification. These instructions are contained in class files produced by Java compilation. The exact structure of a class file is defined in the Java Virtual Machine Specification, section 4 – the class file format. After some version information there are sections like the constant pool, access flags, field info, this and super info, method info etc. See the spec for details.
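As a small illustration of this layout, the following sketch reads the class file header fields mentioned above (the file path is made up):

import java.io.{DataInputStream, FileInputStream}

object ClassFileHeader extends App {
  val in = new DataInputStream(new FileInputStream("IncWhile.class"))
  try {
    val magic = in.readInt()            // always 0xCAFEBABE
    val minor = in.readUnsignedShort()  // minor version
    val major = in.readUnsignedShort()  // major version, e.g. 52 for Java 8
    println(f"magic=0x$magic%08X major=$major minor=$minor")
  } finally in.close()
}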

A class loader loads compiled Java bytecode into the runtime data areas, and the execution engine executes the bytecode. A class is loaded when it is used for the first time in the JVM. Class loading works in a dynamic fashion on the parent-child (hierarchical) delegation principle; class unloading is not allowed. Some time ago I wrote an article about class loading on an application server. The detailed mechanics of class loading are out of scope for this article.

Runtime data areas are used during execution of the program. Some of these areas are created when the JVM starts and destroyed when it exits; other data areas are per-thread, created on thread creation and destroyed on thread exit. The following picture is based mainly on JVM 8 internals (it doesn't include the segmented code cache and the dynamic linking of languages introduced in JVM 9).

[diagram: JVM runtime data areas]

  • Program counter – exists per thread and holds the address of the currently executed instruction; if the current method is native, the PC is undefined. The PC in fact points at a memory address in the method area.
  • Stack – a JVM stack exists per thread and holds one frame for each method executing on that thread. It is a LIFO data structure. Each stack frame has references to the local variable array, the operand stack and the runtime constant pool of the class whose code is being executed.
  • Native stack – not supported by all JVMs. If the JVM is implemented using the C-linkage model for JNI, then the stack will be a C stack (the order of arguments and return values will be identical to a typical C program). Native methods can call back into the JVM and invoke Java methods.
  • Stack frame – stores references that point to the objects or arrays on the heap.
    • Local variable array – all the variables used during execution of the method: all method parameters and locally defined variables.
    • Operand stack – used during execution of bytecode instructions; most of the bytecode manipulates the operand stack, moving values to and from the local variable array.
  • Heap – an area shared by all threads, used to allocate class instances and arrays at runtime. The heap is the subject of garbage collection, Java's way of automatic memory management. This space is the one most often mentioned in JVM performance tuning.
  • Non-heap memory areas
    • Method area – shared by all threads. It stores runtime constant pool information, field and method information, static variables and method bytecode for each class loaded by the JVM. The details of this area depend on the JVM implementation.
      • Runtime constant pool – corresponds to the constant pool table in the class file format. It contains all references to methods and fields; when a method or field is referred to, the JVM searches for its actual address in memory using the constant pool.
      • Method and constructor code
    • Code cache – used for the storage of methods compiled to native code by JIT compilation.
The bytecode assigned to the runtime data areas via the class loader is executed by the execution engine. The engine reads the bytecode in units of instructions and must translate it into a language that can be executed by the machine. This can happen in one of two ways:
  1. Interpreter – reads, interprets and executes bytecode instructions one by one.
  2. JIT (Just In Time) compiler – compensates for the disadvantages of interpretation. Execution starts in interpreted mode, and the JIT compiler compiles hot bytecode to native code; execution is then switched from interpretation to the native code, which is much faster. The native code is stored in the code cache. Compilation to native code takes time, so the JVM uses various metrics to decide whether to JIT-compile the bytecode.

How the JVM execution engine runs is not defined by the JVM specification, so vendors are free to improve their engines with various techniques. The HotSpot JVM is described in more detail above, and JITWatch gives insight into its runtime characteristics.

More details can be found in The Java® Virtual Machine Specification – Java SE 9 Edition


Hadoop IO and file formats

In this post dedicated to Big Data I would like to summarise Hadoop file formats and provide a brief introduction to this topic. As things are constantly evolving, especially in the big data area, I will be glad for comments in case I missed something important. Big Data frameworks change, but InputFormat and OutputFormat stay the same; it doesn't matter which big data technology is in use – Hadoop, Spark or anything else.

Let's start with some basic terminology and general principles. The key term in the mapreduce paradigm is the split, which defines a chunk of data processed by a single map task. A split is further divided into records, where every record is represented as a key-value pair – that is what you actually see in the mapper API as your input. The number of splits essentially gives you the number of map tasks necessary to process the data, which is not in conflict with the number of available mapreduce slots: it just means that some map tasks need to wait until a map slot is available. This abstraction is hidden in the IO layer, in particular in the InputFormat or OutputFormat class, which contains the RecordReader or RecordWriter class responsible for the further division into records. Hadoop comes with a bunch of pre-defined file format classes, e.g. TextInputFormat, DBInputFormat, CombineFileInputFormat and many others. Needless to say, nothing prevents you from coming up with your own custom file formats.
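As a small sketch of where these classes plug in, a classic mapreduce job wires the input format in its driver (class names come from the standard Hadoop API; the paths are illustrative):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.{FileInputFormat, TextInputFormat}
import org.apache.hadoop.mapreduce.lib.output.{FileOutputFormat, TextOutputFormat}

object Driver extends App {
  val job = Job.getInstance(new Configuration(), "sample job")
  // The InputFormat decides how the dataset is split and how records
  // (key-value pairs) are read from each split by its RecordReader.
  job.setInputFormatClass(classOf[TextInputFormat])
  job.setOutputFormatClass(classOf[TextOutputFormat[_, _]])
  FileInputFormat.addInputPath(job, new Path("/data/input"))
  FileOutputFormat.setOutputPath(job, new Path("/data/output"))
  // mapper/reducer classes and job.waitForCompletion(true) omitted
}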

The described abstraction model is closely related to the mapreduce paradigm, but what is its relation to underlying storage like HDFS? First of all, mapreduce and the distributed file system (DFS) are two core Hadoop concepts which are "independent": the relation is defined just through the API between those components. The best-known DFS implementation is HDFS, but there are several other possibilities (S3, Azure blob storage, …). A DFS is constructed for large datasets, and its core concept is the block, which represents a basic unit of the original dataset for manipulation and processing, e.g. replication. This fact puts an additional requirement on the dataset file format: it has to be splittable, meaning that you can process a given block independently from the rest of the dataset. If the file format is not splittable and you run a mapreduce job, you won't get any parallelism and the dataset will be processed by a single mapper. The splittability requirement also applies when compression is desired.

What is the relation between a DFS block and a mapreduce split? Both of them are essentially key abstractions for parallelisation, just in different frameworks, and in the ideal case they are aligned. If they are perfectly aligned, Hadoop can take full advantage of the so-called data locality feature, which runs the map or reduce tasks on the cluster node where the data resides, minimising additional network traffic. In the case of imprecise alignment, remote reads happen for the records missing from a given split. For those reasons file formats include sync markers or points.

To take advantage of the full power of Hadoop, you design your system for big files. Typically the DFS block size is 64 MB, but it can be bigger. That means Hadoop's biggest enemy is the small file: the number of files which live in the DFS is limited by the size of the NameNode memory, as all the dataset metadata are kept in memory. Hadoop offers several strategies for avoiding this bad scenario. Let's go through the file formats.

HAR file (stands for Hadoop ARchive) – a specific file format which essentially packs a bunch of files into a single logical unit kept on the NameNode. HAR files don't support additional compression and, as far as I know, are transparent to mapreduce. They can help if NameNodes are running out of memory.

Sequence file – a kind of file-based data structure. This file format is splittable, as it contains a sync point after every several records. A record consists of a key, a value and metadata, where the key and value are serialised via a class whose name is kept in the metadata. Classes used for serialisation need to be on the CLASSPATH.
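A minimal sketch of writing a sequence file with the classic API (the path and key/value types are illustrative; newer Hadoop versions prefer a builder-style createWriter):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.io.{IntWritable, SequenceFile, Text}

object SeqFileWrite extends App {
  val conf = new Configuration()
  val fs = FileSystem.get(conf)
  // The key and value class names end up in the file metadata.
  val writer = SequenceFile.createWriter(fs, conf, new Path("/tmp/sample.seq"),
    classOf[IntWritable], classOf[Text])
  try {
    writer.append(new IntWritable(1), new Text("first record"))
    writer.append(new IntWritable(2), new Text("second record"))
  } finally writer.close()
}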

Map file – again a kind of Hadoop file-based data structure, differing from a sequence file in the matter of order. A map file is sorted by key and you can perform a lookup on it; the behaviour is pretty similar to the java.util.Map interface.

Avro data file – based on the Avro serialisation framework, which was primarily created for Hadoop. It is a splittable file format with a metadata section at the beginning followed by a sequence of Avro-serialised objects. The metadata section contains the schema for the Avro serialisation. This format allows a comparison of data without deserialisation.

Google Protocol Buffers – not natively supported by Hadoop, but you can plug in support via libraries such as elephant-bird from Twitter.

So what about file formats such as XML and JSON? They are not natively splittable and are therefore "hard" to deal with. A common practice is to store them in a text file, a single message per line.

As for textual files, needless to say those are first-class citizens in Hadoop. TextInputFormat and TextOutputFormat deal with them; the byte offset is used as the key and the line content is the value.

This blog post just scratches the surface of Hadoop file formats, but I hope it provides a good introduction and explains the connection between two essential concepts: mapreduce and DFS. For further reference, the book Hadoop: The Definitive Guide goes into great detail.

Software Deployment – java applications as a RPM linux package

Java application archives such as jar, war and ear files are the elementary distribution blocks of the Java world. In the beginning, managing all of these libraries and components was a bit cumbersome and error prone, as project dependencies depend on other libraries and all those transitive dependencies create the so-called dependency hell. In order to ease this burden on developers, Apache Maven (and Maven-like tools) was developed. Every artefact has so-called coordinates which uniquely identify it, and all dependencies are driven by those coordinates in a recursive fashion.

Maven eases the management at the artefact development stage but doesn't help that much when we want to deploy the application component. Often that's not such a big deal if your runtime environment is a clustered J2EE application server, e.g. a Weblogic cluster: you hand the ear or war over to your ops team and they deploy it to all nodes of the cluster at once via the cluster management console. They need to maintain an archive of deployed components in case of a rollback etc. This is the simplest case (an isolated component; it doesn't solve dependencies such as libraries provided in the cluster), where management is relatively clean but relies heavily on the process. When we consider a different runtime environment, such as running the application as a standalone Java process (as opposed to a J2EE cluster), things get a bit more complicated even in the simplest case. Your Java application is typically distributed as a jar file, and you need to distribute it to every single Linux server where an instance of the process runs. Apart from that, a standard jar file doesn't contain its dependencies. One possible solution would be to create a shaded (fat) jar file with all dependencies embedded. I suppose you have a repository where all builds are archived – does it make sense to store those big archives whose major part is 3rd party libraries? This is probably not the right way to go.

Another aspect of the rollout process is the ability to automate it. For J2EE clusters like Weblogic there is often a scripting tool provided (WLST – the WebLogic Scripting Tool); the land of pure jars is a lot worse off. You can take some advantage of Maven, but that doesn't solve all the problems. The majority of production environments in the Java world run on the Linux operating system, so why not take advantage of standard Linux distribution management tools like yum, apt etc. for distributing RPM packages? Such a system provides atomicity, dependency management between packages, an easy way to roll back (it keeps track of versions) and native auditing, and it minimises the number of manual steps, reducing the potential for human error. It is also pretty easy to get information about the installation history.

To pack your jar-based Java application you need a tool called rpmbuild, which creates a Linux package from a SPEC file. A SPEC file is something like a pom in the Maven world, plus it contains instructions for how to install, uninstall etc. Packages containing the required and handy tools are rpmdevtools and rpmlint. On a Linux OS it is simple to install them; on Windows you need Cygwin installed with the same tool set. In order to set up your rpm workspace, run the following command (it is highly recommended not to run it under the root account unless there is a special need for it):

rpmdev-setuptree

This command creates the rpmbuild folder – the place where all Linux RPM packaging will happen. It contains sub-folders: BUILD, RPMS, SOURCES, SPECS, SRPMS. For us the important ones are RPMS (which will contain the final rpm) and SPECS (where we need to put the SPEC file describing the installation and content of our application).
The SPEC file is the core of Linux rpm packaging. It contains all the information about the version, dependencies, installation, un-installation, upgrade etc. We can create a skeleton SPEC file by running the following command:

rpmdev-newspec

The majority of directives in this file are clear from their names, e.g. Name, Version, Summary, BuildArch etc. BuildRoot requires special attention: it is a sort of proxy which mimics the root of the system under construction. E.g. if I want to install my [application_name] (replace this placeholder with the actual name) to the /usr/local/[application_name] location, I have to create this structure under BuildRoot during the installation. Then there are important sections which correspond to the various phases of installation: %prep, %build and %install – the last being the most important for us, as we do not build from sources but just pack an already-built jar file into the rpm package. The last very important section of this file is %files, which lists all files that will be in the final rpm package and hence installed on the target machine. Apart from that, there can be additional hooks into the installation and un-installation process, such as %post, %preun, %postun etc., which allow you to customise the process as you need. A sample SPEC file follows:

%define _tmppath /home/virtual/rpmbuild/tmp
Name: [application_name]
Version: 1.0.2
Release: 1%{?dist}
Summary: Processor component which feeds data into DB
Group: Applications/System
License: GPL
URL: https://jaksky.wordpress.com/
BuildRoot: %{_topdir}/%{name}-%{version}-%{release}-root
BuildArch: noarch
Requires: jdk >= 7
%description
Component which process incoming messages and store them to DB.

%prep

%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local
cp -r %{_tmppath}/[application_name] $RPM_BUILD_ROOT/usr/local
mkdir -p $RPM_BUILD_ROOT/usr/local/[application_name]/logs
mkdir -p $RPM_BUILD_ROOT/etc/init.d
cp -r %{_tmppath}/[application_name]/bin/[application_name] $RPM_BUILD_ROOT/etc/init.d
mkdir -p $RPM_BUILD_ROOT/var/run/[application_name]

%files
%defattr(644,[application_name],[application_name])
%dir %attr(755,[application_name],[application_name]) /usr/local/[application_name]
%dir %attr(755,[application_name],[application_name]) /usr/local/[application_name]/lib
/usr/local/[application_name]/lib/*
%attr(755,[application_name],[application_name]) /usr/local/[application_name]/logs
%dir %attr(755,[application_name],[application_name]) /usr/local/[application_name]/conf
%config /usr/local/[application_name]/conf/[application_name]-config.xml
%config /usr/local/[application_name]/conf/log4j.properties
%dir %attr(755,[application_name],[application_name]) /usr/local/[application_name]/deploy
/usr/local/[application_name]/deploy/*
%doc /usr/local/[application_name]/README.txt
%dir %attr(755,[application_name],[application_name]) /usr/local/[application_name]/bin
%attr(755,[application_name],[application_name]) /usr/local/[application_name]/bin/*
%attr(755,root,root) /etc/init.d/[application_name]
%dir %attr(755,[application_name],[application_name]) /var/run/[application_name]

%changelog
* Wed Nov 13 2013 Jakub Stransky <Jakub.Stransky@jaksky.com> 1.0.2-1
- Bug fixing wrong messages format
* Wed Nov 13 2013 Jakub Stransky <Jakub.Stransky@jaksky.com> 1.0.1-1
- Bug fixing wrong messages format
* Mon Nov 11 2013 Jakub Stransky <Jakub.Stransky@jaksky.com> 1.0.0-1
- First release of [application_name]

Several things to highlight in the SPEC file example: _tmppath points to the location where the installed application is prepared – that is essentially what is going to be packed into the rpm package. %defattr sets the standard attributes for files where special ones are not specified. %config denotes configuration files, which means that on first installation the standard ones are provided, but in case of an upgrade those files will not be overwritten, as they have probably been customised for that particular instance.
Now we are ready to create the linux rpm package just the last step is pending:

rpmbuild -v -bb --clean SPECS/nameOfTheSpecFile.spec

The created package can be found in the RPMS subfolder. We can test the package locally with:

rpm -i nameOfTheRpmPackage.rpm

To complete the smoke test, let's remove the package with:

rpm -e nameOfTheApplication

Creating a SPEC file should be a pretty straightforward process, and once you have created the SPEC file for your application, building the rpm package is a one-minute job. If you want to automate it, there is a Maven plugin which generates a SPEC file for you. It is essentially a wrapper around the rpmbuild utility, which means the plugin works fine on Linux with the tool set installed, but on a Windows machine you need Cygwin installed and a wrapper bat file mimicking the rpmbuild utility for the plugin. A detailed manual can be found, for example, here.

A couple of things to highlight when creating a SPEC file: prepare the package for all scenarios – install, remove, upgrade and configuration management – right from the beginning, and test it properly. It can save you a lot of trouble and manual work in large installations. Creating a new version of a Java application is then only about replacing the jar file and re-packaging the rpm bundle.

In this quick walkthrough I tried to show that creating a Linux rpm package as a unit of software deployment for a Java application is not that difficult and can neaten the rollout process. I just scratched the surface of rpm packaging and was far from showing all the capabilities of this approach. I will conclude this post with several links which I found really useful.

Great tutorial on RPM packaging in general
Good rpmbuld manual pages
Maven rpm plugin
Maximum RPM book

Java application as a Linux service

Using standard J2EE containers for application deployment is not always a suitable option. From time to time you need to run a Java application (jar file) as a standalone, more lightweight Linux process. Using a plain java -cp … MainClass is feasible, but sooner or later you will find that something important is missing, especially if you are supposed to run multiple components this way – it becomes really messy and hard to manage pretty soon. On a Linux system there is a solution which is a lot better: run the component as a Linux service.
Let's make it simple and easy to understand. A Linux service is essentially a "process" driven by an init script with a defined API – a set of standard commands for managing the underlying Linux process. The service commands look as follows (processor represents the actual name defined in the init script, see later):

service processor start
service processor status
service processor stop
service processor restart

That's a lot simpler and easier to manage and monitor, right? You don't need to know where the particular jar file is located etc. Examples of init scripts can usually be found in /etc/init.d/samples, or you can simply read the scripts in /etc/init.d, which contains the init scripts of the various Linux services already present on the system.
For Java applications there is a bunch of projects which act as service wrappers. They enable you to quickly and easily turn a jar file into a regular Linux service running as a program daemon; there are wrappers even for Windows. For certain reasons I was directed to use just the standard Linux tools, so the remainder of this post is about making a program daemon the common way, via shell scripts.
First of all, it is necessary to create startup and shutdown scripts which properly manage the pid (process id) file. A good practice is to have a dedicated user to run a particular Linux service and to have it installed under /usr/local/xxx. The startup script follows:

#!/bin/sh
#
# Script parameters: [Installation_Folder]
#
# JAVA_HOME Must point at your Java Development Kit installation.
# Required to run the with the "debug" argument.
#
# JRE_HOME Must point at your Java Runtime installation.
# Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME
# are both set, JRE_HOME is used.
#
# JAVA_OPTS (Optional) Java runtime options used when any command
# is executed.

# Check the way the script has been called and set current directory as PROCESSOR_HOME
if [ "X$1" = "X" ]
then
  cd .. >/dev/null
  pwd >/dev/null
  PROCESSOR_HOME=$PWD
  SERVICE_INVOKE="no"
else 
  PROCESSOR_HOME=$1
  SERVICE_INVOKE="yes"
fi
echo PROCESSOR_HOME set to $PROCESSOR_HOME
# Load config
source $PROCESSOR_HOME/bin/config.sh
# Check if the invocation is according to configuration [asService | asProcess]
if [ ! "$SERVICE_INVOKE" == "$RUN_AS_SERVICE" ]
then
  echo "ERROR - Invocation is not according to configuration - run as a Lunux Service= $RUN_AS_SERVICE"
  exit 6
fi
# check installation
if [ ! -d "$PROCESSOR_HOME/bin" \
-o ! -f "$PROCESSOR_HOME/bin/config.sh" \
-o ! -d "$PROCESSOR_HOME/conf" \
-o ! -d "$PROCESSOR_HOME/deploy" \
-o ! -d "$PROCESSOR_HOME/lib" \
-o ! -f "$PROCESSOR_HOME/conf/log4j.properties" \
-o ! -f "$PROCESSOR_HOME/deploy/test1-1.0-SNAPSHOT.jar" ]; 
then
echo 
echo ERROR - Installation is not correct!
echo Expected installation package looks:
echo "$PROCESSOR_HOME/bin"
echo "$PROCESSOR_HOME/bin/config.sh"
echo "$PROCESSOR_HOME/conf"
echo "$PROCESSOR_HOME/conf/log4j.properties"
echo "$PROCESSOR_HOME/deploy"
echo "$PROCESSOR_HOME/deploy/test1-1.0-SNAPSHOT.jar"
echo "$PROCESSOR_HOME/lib"
exit 1
fi
# clean up
CLASSPATH=
JAVA_OPTS=
JAVA_PATH=
JAVA_EXEC=

# set JAVA
REQUIRED_JVM_VERSION=1.7
if [ -z "$JAVA_HOME" ]; 
then
  if [ -z "$JRE_HOME" ];
    then
      echo ERROR - either JAVA_HOME or JRE_HOME is not set!!!
      exit 1
    else
    echo Java JRE used $JRE_HOME
    JAVA_PATH=$JRE_HOME
  fi
else
  echo Java used $JAVA_HOME
  JAVA_PATH=$JAVA_HOME 
fi

# set JAVA_EXEC
JAVA_EXEC=$JAVA_PATH/bin/java
#check Java bin
if [ ! -x "$JAVA_EXEC" ];
then
  echo Java binaries not found $JAVA_EXEC
  exit 1
fi
# checkJavaVersion
JVM_VERSION=$("$JAVA_EXEC" -version 2>&1 | awk -F '"' '/version/ {print $2}')
#echo version "$JVM_VERSION"
if [[ "$JVM_VERSION" < "$REQUIRED_JVM_VERSION" ]]; 
then
  echo ERROR - $JAVA_EXEC does not point to proper java version $REQUIRED_JVM_VERSION
  exit 1
fi
# setBDHISTP_MAIN
BDHISTP_MAIN=cz.jaksky.PROCESSOR.PROCESSOR
# setClasspath
CLASSPATH=$PROCESSOR_HOME/deploy/*:$PROCESSOR_HOME/lib/*
# echo Classpath set to: $CLASSPATH

# setJAVA_OPTS
JAVA_OPTS=-Dbdconf=$PROCESSOR_HOME/conf
JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file:$PROCESSOR_HOME/conf/log4j.properties"
#echo JAVA_OPTS set to: $JAVA_OPTS
# This is nasty as in the code there is hardcoded location to actual config file for the process
cd $PROCESSOR_HOME
runProgram() {
echo $JAVA_EXEC $JAVA_OPTS -classpath $CLASSPATH $BDHISTP_MAIN
$JAVA_EXEC $JAVA_OPTS -classpath $CLASSPATH $BDHISTP_MAIN & PROCESS_PID=$!
echo $PROCESS_PID > $PIDDIR/$PID_FILENAME
echo "new application instance started as process $PROCESS_PID"
}

if [ ! -f "$PIDDIR/$PID_FILENAME" ]
then 
  echo "I will try to start new process ..."
  runProgram
else
  PID=$(cat $PIDDIR/$PID_FILENAME)
  if ps -p $PID >/dev/null
    then
      echo "WARNING $APP_NAME already running as process $PID"
    else
      echo "process $PID is not running - will try to start a new instance of the application"
      echo " "
      runProgram 
  fi
fi
exit 0

The shutdown script follows:

#!/bin/sh
# Script usage:
# this script can be invoked either directly in bin folder or from different location with passing information where to locate installation folder
#
# Check the way the script has been called and set current directory as PROCESSOR_HOME
if [ "X$1" = "X" ]
then
  cd .. >/dev/null
  pwd >/dev/null
  PROCESSOR_HOME=$PWD
  SERVICE_INVOKE="no"
else 
  PROCESSOR_HOME=$1
  SERVICE_INVOKE="yes"
fi
echo PROCESSOR_HOME set to $PROCESSOR_HOME
# Load config
source $PROCESSOR_HOME/bin/config.sh
if [ -z "$PIDDIR" ]
then
  echo "ERROR - Installation configuration file config.sh not found at $PROCESSOR_HOME/bin"
  exit 1
fi
# Check if the invocation is according to configuration [asService | asProcess]
if [ ! "$SERVICE_INVOKE" == "$RUN_AS_SERVICE" ]
then
  echo "ERROR - Invocation is not according to configuration - run as a Lunux Service= $RUN_AS_SERVICE"
  exit 6
fi
if [ -f "$PIDDIR/$PID_FILENAME" ]
then
  PID=$(cat $PIDDIR/$PID_FILENAME)
  kill $PID
  RC=$?
  rm $PIDDIR/$PID_FILENAME
  echo "Application $APP_NAME - process $PID shut down successfull"
  exit $RC
else
  echo "pid file not exist $PIDDIR/$PID_FILENAME, nothing to shut down"
  exit 0
fi

Those scripts rely on the existence of an installation configuration shell script – config.sh – located in the bin folder of the installation, as follows:

#!/bin/sh 
RUN_AS_SERVICE="yes"
APP_NAME="Processor"
APP_LONG_NAME="Processor instance" 
PIDDIR="/var/run/processor"
PID_FILENAME="processor.pid"

The startup script creates a pid file located in /var/run/processor – the user under which the installation runs needs to have the appropriate privileges.
Finally, the init script, which needs to be placed into the /etc/init.d folder:

### BEGIN INIT INFO
# Provides: processor
# Required-Start: 
# Required-Stop: 
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: processor daemon
# Description: processor daemon
# This provides example about how to
# write a Init script.
### END INIT INFO
# Config to edit if needed
INSTALL_HOME=/usr/local/Processor
JAVA_HOME=/usr/java/default
SERVICE_USER="processor"
# No modification allowed from here
# Using the lsb functions to perform the operations.
. /lib/lsb/init-functions
#
# If the daemon is not there, then exit.
test -x $INSTALL_HOME/bin/startUp.sh || exit 5
test -x $INSTALL_HOME/bin/shutDown.sh || exit 5
test -x $INSTALL_HOME/bin/config.sh || exit 5
# Load config
source $INSTALL_HOME/bin/config.sh
export JAVA_HOME
PIDFILE=$PIDDIR/$PID_FILENAME
# Process name ( For display )
NAME=$APP_NAME
CURRENT_USER=`id -nu`
start(){
  echo "Starting $NAME under $SERVICE_USER user..."
  if [ "$CURRENT_USER" == "$SERVICE_USER" ]
  then
    $INSTALL_HOME/bin/startUp.sh $INSTALL_HOME >/dev/null
    RC=$?
  else
    su --preserve-environment --command="$INSTALL_HOME/bin/startUp.sh $INSTALL_HOME >/dev/null" $SERVICE_USER
    RC=$?
  fi
}
stop(){
  echo "Stoping $NAME running under $SERVICE_USER user ..."
  if [ "$CURRENT_USER" == "$SERVICE_USER" ]
  then
    $INSTALL_HOME/bin/shutDown.sh $INSTALL_HOME >/dev/null
    RC=$?
  else
    su --preserve-environment --command="$INSTALL_HOME/bin/shutDown.sh $INSTALL_HOME >/dev/null" $SERVICE_USER
    RC=$?
  fi
}
case $1 in
start)
  start
  exit $RC
;;
stop)
  stop
  exit $RC
;;
restart)
  stop
  start
  exit $RC
;;
status)
  if [ ! -f "$PIDDIR/$PID_FILENAME" ]
  then 
    echo "$NAME is NOT RUNNING"
    exit 1
  else
    PID=$(cat $PIDDIR/$PID_FILENAME)
    if ps -p $PID >/dev/null
    then
      echo "$NAME is RUNNING $PID"
      exit 0
    else
      echo "$NAME is NOT RUNNING"
      exit 1
    fi
  fi
;;
*)
# For invalid arguments, print the usage message.
echo "Usage: $0 {start|stop|restart|status}"
exit 2
;;
esac

In the init script you need to set JAVA_HOME to the appropriate Java installation folder if it is not the default, and SERVICE_USER to the user which is supposed to run this service. The service can be started under the root account, under SERVICE_USER without password specification, or under any other user with knowledge of the credentials.

If you have production-like experience with the Java service wrappers mentioned at the beginning of the article, don't hesitate to share it! That way this post serves its purpose for the given situation.

How to search for jar file

I am pretty sure that every Java developer has been in the situation of searching for a Java archive file while knowing only a fully qualified class name.
A fully qualified name is enough information to get this kind of issue resolved. You can either take advantage of sites like http://www.findjar.com/ or of an IDE feature – search for class. The first approach works well when the missing class comes from open source or at least publicly available jar libraries; the second applies if the jar is already in your project but just a missing item on the classpath. But there is a vast number of cases where you are searching for a library from a vendor-specific product which consists of a huge number of jar files. One way to find the class is to import all those libs into the IDE and then look up the required class, but this approach is a bit awkward. A more straightforward approach is to search through the product's filesystem directly. A handy bash script follows – in this case searching for com.oracle.pitchfork.interfaces:

for i in $(find . -name "*.jar")
do
  # list the archive content and look for the class name
  result=$("$JAVA_HOME"/bin/jar -tvf "$i")
  echo "$result" | grep -i com.oracle.pitchfork.interfaces > /dev/null
  if [ $? -eq 0 ]; then
    echo "$i"
  fi
done

Run this script from the product's root folder – all jars containing the required class will be listed.

Weblogic classloading

Getting a java.lang.NoSuchMethodError is usually the beginning of a great exploration of your platform – in this case Weblogic. The Javadoc says:

Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method. Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed.

What the heck is going on here?! The libraries used are embedded in the final archive – I verified that! If you don't know what's happening, simply suspect classloaders, the publicly known enemies of Java developers 🙂 Rule no. 1 says: "Verify your assumptions". The fact that a class is in the archive doesn't necessarily mean that it gets loaded, so to verify that, simply pass the -verbose or -verbose:class argument to Weblogic's JVM in the startup script under bin, and you will get the origin of the loaded classes.
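A sketch of where that flag can go, assuming the domain start script honours the usual JAVA_OPTIONS variable (adjust to your installation):

JAVA_OPTIONS="$JAVA_OPTIONS -verbose:class"
export JAVA_OPTIONS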

A class loaded from WL_HOME/modules – how is that possible? To understand that, a general understanding of classloading is essential, as well as an understanding of your J2EE standard implementation, e.g. Weblogic, JBoss, … This post does not pretend to an expert level of detail on this topic, so I will rather stay with the general principles and refer to the detailed documentation.

Java has several class loaders (bootstrap, extension, …); the important fact is that they work in a hierarchy (parent-child relationship) with a delegation scheme which says when to load a class and from where. The elementary Java delegation principle says: delegate the finding of classes and resources to your parent before searching your own classpath; only if the parent cannot find it is the child allowed to load it. So far so good. To complicate matters a bit more, the Java servlet specification recommends looking at the child classloader before delegating to the parent (whether this recommendation was adopted needs to be checked in the documentation of the J2EE implementation you are using – as you can see, you know nothing based on those rules alone 🙂). So, to my case of the Weblogic J2EE implementation:

As you can see, the system classloader is the parent of all the application's classloaders; details can be found here. So how did the class get loaded from WL_HOME/modules? The framework library must be on the system classpath – but on the system classpath there is just weblogic.jar, not my framework library?
Weblogic 10, in order to improve modularity, included components under WL_HOME/modules, and weblogic.jar now refers to these components in the modules directory from its manifest classpath. That means another version of the library sits on the system classloader – the parent of all the application classloaders – so the libraries included in the application archives will be ignored, based on the delegation scheme. (That was probably the motivation for the child-first recommendation in the J2EE classloading delegation scheme.) However, Weblogic offers another way to solve this case: so-called classloader filters/interceptors, defined in the Weblogic-specific deployment descriptor at either ear level or war level.
weblogic-application.xml (fragment):

<prefer-application-packages>
  <package-name>org.apache.log4j.*</package-name>
  <package-name>antlr.*</package-name>
</prefer-application-packages>

weblogic.xml (fragment):

<container-descriptor>
  <prefer-web-inf-classes>true</prefer-web-inf-classes>
</container-descriptor>

Java class version

From time to time it might happen that you need to know which version the class files were compiled for – or, to be more specific, what target was specified when running the javac compiler, since target specifies the VM version the classes are generated for. In Maven this can be specified as follows:

               <plugin>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <configuration>
                         <target>1.6</target>
                    </configuration>
               </plugin>

It is not rocket science, right? To find out which version the code was generated for, we use javap (the Java class file disassembler). The following line does the trick:

javap -verbose -classpath versiontest-1.0.jar cz.test.string.StringPlaying

Compiled from "StringPlaying.java"
public class cz.test.string.StringPlaying extends java.lang.Object
  SourceFile: "StringPlaying.java"
  minor version: 0
  major version: 50
  Constant pool:
const #1 = Method       #12.#28;        //  java/lang/Object."<init>":()V
const #2 = String       #29;            //  beekeeper
const #3 = Method       #30.#31;        //  java/lang/String.substring:(II)Ljava/lang/String;

The major version matches the Java version according to the following table:

Major version  Java version
45             1.1
46             1.2
47             1.3
48             1.4
49             5
50             6
51             7
52             8

Table adapted from the Oracle blog.