Microbenchmarks

Setup

The benchmark tests are located in the src/jmh/java folder; place any new benchmarks there as well. The configuration resources for the benchmarks are located under src/jmh/resources. To start the benchmarks, simply run ./gradlew jmh.

Configuration

The configuration of the benchmarks is managed through the build.gradle file. There you will find the following segment, which currently contains a minimal set of configured properties.

Example configuration:

jmh {
    include = ['io.ox.benchmark.loadbalancer.test.LBLargeSetupTest']
    //include = ['io.ox.benchmark.loadbalancer.test.LBLargeSetupTest', 'io.ox.benchmark.loadbalancer.test.LBMediumSetupTest', 'io.ox.benchmark.loadbalancer.test.LBSmallSetupTest', 'io.ox.benchmark.proxyrule.test.PRLargeSetupTest', 'io.ox.benchmark.proxyrule.test.PRMediumSetupTest', 'io.ox.benchmark.proxyrule.test.PRSmallSetupTest']
    resultFormat = 'JSON'
    jmhVersion = '1.21'
    jvmArgs = ['-Djmh.separateClasspathJAR=true']
}

Global configuration

The properties set in build.gradle override all local properties used in your test classes, so be careful if you want to run your local tests with a specific configuration. If you want different global properties for your tests, just add your tests to the include property; then only the tests you listed will run with that setup.

If you want to explicitly exclude tests, use the exclude property.
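For example, an include/exclude combination could look like this (the patterns are illustrative and should be adapted to your own package layout):

```groovy
jmh {
    // Run everything under io.ox.benchmark ...
    include = ['io.ox.benchmark..*']
    // ... except the proxy-rule tests
    exclude = ['io.ox.benchmark.proxyrule..*']
}
```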

Properties

To configure the tests you can use the following properties:

include

Include pattern (regular expression) for benchmarks to be executed. Pattern: ['some regular expression']

exclude

Exclude pattern (regular expression) for benchmarks to be executed. Pattern: ['some regular expression']

iterations

Number of measurement iterations to do.

benchmarkMode

Benchmark mode. Available modes are: [Throughput/thrpt, AverageTime/avgt, SampleTime/sample, SingleShotTime/ss, All/all]

batchSize

Batch size: number of benchmark method calls per operation. (some benchmark modes can ignore this setting)

fork

How many times to fork a single benchmark. Use 0 to disable forking altogether

failOnError

Should JMH fail immediately if any benchmark experiences an unrecoverable error? Value: true/false

forceGC

Should JMH force GC between iterations? Value: true/false

jvm

Custom JVM to use when forking. Pattern: 'myjvm'

jvmArgs

Custom JVM args to use when forking.

jvmArgsAppend

Custom JVM args to use when forking. Pattern: ['append these']

jvmArgsPrepend

Custom JVM args to use when forking. Pattern: ['prepend these']

humanOutputFile

Human-readable output file. Example: project.file("${project.buildDir}/reports/jmh/human.txt")

resultsFile

Results file. Example: project.file("${project.buildDir}/reports/jmh/results.txt")

operationsPerInvocation

Operations per invocation.

benchmarkParameters

Benchmark parameters. Pattern: [:]

profilers

Use profilers to collect additional data. Supported profilers: [cl, comp, gc, stack, perf, perfnorm, perfasm, xperf, xperfasm, hs_cl, hs_comp, hs_gc, hs_rt, hs_thr]

timeOnIteration

Time to spend at each measurement iteration.

resultFormat

Result format type (one of CSV, JSON, NONE, SCSV, TEXT)

synchronizeIterations

Synchronize iterations? Value: true/false

threads

Number of worker threads to run with.

threadGroups

Override thread group distribution for asymmetric benchmarks.

timeout

Timeout for benchmark iteration.

timeUnit

Output time unit. Available time units are: [m, s, ms, us, ns].

verbosity

Verbosity mode. Available modes are: [SILENT, NORMAL, EXTRA]

warmup

Time to spend at each warmup iteration.

warmupBatchSize

Warmup batch size: number of benchmark method calls per operation.

warmupForks

How many warmup forks to make for a single benchmark. 0 to disable warmup forks.

warmupIterations

Number of warmup iterations to do.

warmupMode

Warmup mode for warming up selected benchmarks. Warmup modes are: [INDI, BULK, BULK_INDI].

warmupBenchmarks

Warmup benchmarks to include in the run in addition to already selected. JMH will not measure these benchmarks, but only use them for the warmup.

zip64

Use ZIP64 format for bigger archives. Value: true/false

jmhVersion

Specifies JMH version. Example: '1.21'

includeTests

Allows including the test sources in the generated JMH jar, i.e. use it when benchmarks depend on the test classes. Value: true/false

duplicateClassesStrategy

Strategy to apply when encountering duplicate classes during creation of the fat jar (i.e. while executing the jmhJar task). Example: 'fail'
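Putting several of these properties together, a more complete jmh block might look like the following (the values are illustrative, not recommendations):

```groovy
jmh {
    include = ['io.ox.benchmark.loadbalancer.test.LBLargeSetupTest']
    benchmarkMode = ['avgt']   // measure average time per operation
    timeUnit = 'ns'            // report results in nanoseconds
    fork = 1
    warmupIterations = 3
    iterations = 5
    threads = 4
    profilers = ['gc']         // additionally collect GC statistics
    resultFormat = 'JSON'
    failOnError = true
}
```

Remember that any property set here globally overrides the annotation-based configuration in your test classes.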

Local configuration

It is recommended to configure your tests with local config annotations. Just create an abstract class which holds the annotations and extend it in your test classes, e.g.:

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
@Fork(value = 1, jvmArgs = {"-Xms2G", "-Xmx2G"})
@Warmup(iterations = 3, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 5, time = 500, timeUnit = TimeUnit.MILLISECONDS)
public class AbstractBenchmarkConfig {

}
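A concrete benchmark class then only needs to extend the abstract class to inherit its configuration. The class and method names below are hypothetical:

```java
public class RoundRobinBenchmark extends AbstractBenchmarkConfig {

    @Benchmark
    public void benchmarkRoundRobinRouting() {
        // inherits fork, warmup and measurement settings from the parent class
    }
}
```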

If you want a specific test in one class to run with an individual configuration, just annotate it as well:

@Benchmark
@Warmup(iterations = 1, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 1, time = 500, timeUnit = TimeUnit.MILLISECONDS)
public void benchmarkStickySessionRouting(StickySessionParams data) {
     manager.chooseServer("APP1");
}

Note

Log output can lead to higher measurement results than expected. If you want to remove the effect of logging on your microbenchmarks, configure a higher log level in logback.xml.
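For example, raising the root log level suppresses most log output during benchmark runs. This is a minimal sketch; the STDOUT appender name is an assumption and should match your own logback.xml:

```xml
<configuration>
    <!-- Keep only warnings and errors while benchmarking -->
    <root level="WARN">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
```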