Just-In-Time (JIT) Compiler

1. Overview

The Just-In-Time (JIT) compiler is a component of the Java Runtime Environment (JRE) that improves the performance of Java applications at run time. In computing, JIT compilation, also known as "dynamic translation", is compilation performed during the execution of a program.

The traditional Java compiler compiles high-level Java source code (.java files) into bytecode readable by the JVM (.class files), and the JVM then interprets that bytecode into machine instructions at runtime. The JIT compiler improves the performance of Java programs by compiling bytecode into native machine code at runtime. The goal is to surpass the performance of pure interpretation while keeping the portability advantages of bytecode.

2. JIT Compilation

JIT compilation is also referred to as "hotspot compilation". The name "HotSpot" comes from the approach the JVM uses to decide which code to compile. A famous observation about program execution is that "80% of execution time is spent executing 20% of the code". In other words, roughly 20% of the code is executed most frequently, while the remaining 80% runs only occasionally. A primary objective of optimization is to focus on that frequently executed 20%, because performance depends heavily on those sections of code. These critical sections are known as hotspots.
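
As a rough, hand-written illustration (the class and method names below are invented for this example), a small method called millions of times from a tight loop is exactly the kind of hotspot the JVM looks for, while the surrounding one-off code stays interpreted:

public class HotspotDemo {

    // A tiny method called millions of times: a typical hotspot candidate.
    static long square(long n) {
        return n * n;
    }

    public static void main(String[] args) {
        long sum = 0;
        // This tight loop dominates the program's execution time, so the
        // JVM will normally identify square() as hot and compile it to
        // native machine code.
        for (int i = 0; i < 10_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}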

There are two types of JIT compilers: one intended for client applications and one for servers. Programs written to run on a server typically run longer and consume more resources, so the server compiler collects more detailed statistics and takes more time to find the best-optimized code. As a result, the server JIT compiler takes longer to start because it optimizes code more aggressively up front. The client compiler starts faster because it does not optimize so aggressively at startup, but it is less effective at optimization than the server compiler.

The JIT compiler can use more than one compilation thread to optimize code faster and reduce start-up delay.
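
On HotSpot-based JVMs, for example, the compiler mode has historically been selected with the -client and -server flags (modern JVMs combine both through tiered compilation), and the number of compiler threads can be tuned with -XX:CICompilerCount. As a small sketch (class name invented for this example), the standard system properties below report which VM a program is running on:

public class VmInfo {
    public static void main(String[] args) {
        // Standard system properties describing the running VM, e.g.
        // "Java HotSpot(TM) 64-Bit Server VM" and "mixed mode".
        System.out.println(System.getProperty("java.vm.name"));
        System.out.println(System.getProperty("java.vm.info"));
    }
}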

2.1 Impact on Application Start-up

JIT compilation requires memory and processor time to optimize methods. When the JVM starts up, a very large number of methods used by an application are called, and compiling and optimizing all of them immediately would add significant start-up time.

In practice, not every method is compiled the first time it is called. The JVM maintains a call count to identify heavily used methods, incrementing the count each time a method is invoked. The JVM interprets a method until its count reaches the threshold for JIT compilation; once the threshold is reached, the JIT compiler compiles and optimizes that method to improve the performance of the application. The threshold value has to be chosen carefully to balance high peak performance against a short start-up time.
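
On HotSpot, for instance, this threshold can be adjusted with the -XX:CompileThreshold flag, and compilation activity can be observed with -XX:+PrintCompilation. The class below is a minimal, made-up sketch for watching this happen:

public class ThresholdDemo {

    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int total = 0;
        // Run with: java -XX:+PrintCompilation ThresholdDemo
        // Once add() has been called enough times to cross the JIT
        // compilation threshold, a compilation log line for it appears.
        for (int i = 0; i < 200_000; i++) {
            total = add(total, 1);
        }
        System.out.println(total);
    }
}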

2.2 Phases of Compilation

JIT compilation consists of a five-phase life cycle that produces optimized machine code:

Inlining: Replace calls to small methods with the body of the called method to speed up frequently executed method calls.

Local Optimization: Analyze and improve a small portion of code at a time to improve the method's performance.

Control Flow Optimization: Analyze and rearrange the control flow within a method so that it executes in a more efficient order.

Global Optimization: Analyze a significant portion of code to optimize methods. This phase is expensive in terms of memory and time, but it provides an excellent level of optimization.

Native Code Generation: The only platform-dependent step in this cycle. During this phase the code is translated into machine code, with some optimizations performed based on the characteristics of the target platform.

Each of these phases can be carried out in several ways, and each has several sub-phases of its own; a small hand-written sketch of the kind of rewrite they perform follows.
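
The sketch below is illustrative source code, not actual compiler output; it shows a classic transformation (hoisting a loop-invariant expression out of a loop) of the sort that local and control flow optimization can amount to:

public class LoopOptimizationSketch {

    // Before: the loop-invariant expression (limit * 2) is recomputed
    // on every iteration.
    static int before(int[] values, int limit) {
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i] * (limit * 2);
        }
        return total;
    }

    // After: the invariant expression is hoisted out of the loop, so it
    // is evaluated only once. The JIT can apply this kind of rewrite
    // itself on hot code.
    static int after(int[] values, int limit) {
        int doubled = limit * 2;
        int total = 0;
        for (int i = 0; i < values.length; i++) {
            total += values[i] * doubled;
        }
        return total;
    }
}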

2.3 Impact of Disabling JIT Compiler

The JIT compiler is enabled by default and is activated automatically when a Java method is called. Disabling it is not recommended unless we need to work around a JIT compilation problem.

Disabling the JIT compiler means the entire Java program is interpreted, which can make performance considerably worse.
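
One way to see this cost is to run the JVM in interpreted-only mode with the -Xint flag and compare timings. The class below is an invented micro-benchmark for that purpose (timings from such a simple loop are only a rough indication):

public class InterpretedVsJit {

    static long work() {
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = work();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Run normally:      java InterpretedVsJit
        // Interpreter only:  java -Xint InterpretedVsJit
        // The -Xint run is typically many times slower.
        System.out.println(result + " computed in " + elapsedMs + " ms");
    }
}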

2.4 Example of JIT Compilation in JRockit JVM

Let's look at how the JVM optimizes code to improve performance. For readability, the example is written as Java source code, but the JVM actually performs these optimizations on bytecode.

Code Before Optimization:

class A {
    B b;
    public void test(){
        int x = b.retrieve();
        // more code
        int y = b.retrieve();
        int sum = x + y;
    }
}
class B {
    int val;
    final int retrieve() {
        return val;
    }
}

Let's walk through the steps the JIT compiler takes to optimize the test() method:
1) Starting Point:

public void test(){
    int x = b.retrieve();
    // more code
    int y = b.retrieve();
    int sum = x + y;
}

2) Inline Final Method:
To reduce call overhead, replace the call b.retrieve() with its body, a direct read of the field b.val.

public void test(){
    int x = b.val;
    // more code
    int y = b.val;
    int sum = x + y;
}

3) Remove Redundant Loads:
We already loaded b.val once and assigned it to the local variable x. Instead of loading the field again, we can reduce latency by assigning that local variable to y.

public void test(){
    int x = b.val;
    // more code
    int y = x;
    int sum = x + y;
}

4) Copy Propagation:
Since the variables x and y now contain the same value, we can propagate the copy, use a single variable, and eliminate the extra one.

public void test(){
    int x = b.val;
    // more code
    x = x;
    int sum = x + x;
}

5) Dead Code Elimination:
After the previous steps we are left with some dead code, such as the assignment x = x;, which no longer contributes anything to the program. It is removed to optimize the method further.

public void test(){
    int x = b.val;
    // more code
    int sum = x + x;
}

Now let's look at the code after the optimization has been performed.

Code After Optimization:

class A {
    B b;
    public void test(){
        int x = b.val;
        // more code
        int sum = x + x;
    }
}
class B {
    int val;
    final int retrieve() {
        return val;
    }
}

3. Drawback of JIT Compilation

  • Because compilation happens at run time rather than ahead of time, the JIT compiler consumes extra memory and processor time at startup, which introduces a start-up delay for the application.
  • The JIT adds another complex layer to program execution, one that developers often do not fully understand and are sometimes not even aware of. This opaque layer can make it harder to reason about and improve performance in some cases.

4. Summary

One of the simplest tools for increasing the performance of a Java application is the "Just-In-Time" compiler, and it can enhance performance significantly.
The JIT compiler improves performance, but it introduces some start-up delay and can consume more memory. All in all, the JIT compiler is a brilliant component of the Java Virtual Machine that makes our code run faster.
