- Throughput
- Continuously calls the benchmark method until the iteration time expires, counting the total number of calls across all worker threads.
- Measures the number of operations per second, i.e. how many times per second your benchmark method can be executed.
- Note: Throughput and AverageTime are inverses of each other, i.e. Throughput shows how many times the method can be called per unit of time, while AverageTime shows how long a single execution of the method takes on average.
- Output will look like below
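The inverse relationship between Throughput and AverageTime can be sketched with simple arithmetic. The 2 ms figure below is a made-up example, not a result from an actual JMH run:

```java
public class ThroughputVsAverageTime {
    public static void main(String[] args) {
        // Hypothetical AverageTime result: 2.0 ms per operation
        double avgTimeMs = 2.0;
        // The corresponding Throughput is simply the reciprocal,
        // converted from "per millisecond" to "per second"
        double opsPerSecond = 1000.0 / avgTimeMs;
        System.out.println(opsPerSecond + " ops/s"); // prints "500.0 ops/s"
    }
}
```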

- AverageTime
- Continuously calls the benchmark method until the iteration time expires, averaging the execution time across all worker threads.
- Measures the average time it takes for the benchmark method to execute (a single execution).
- Note: Throughput and AverageTime are inverses of each other, i.e. Throughput shows how many times the method can be called per unit of time, while AverageTime shows how long a single execution of the method takes on average.
- Output will look like below

- SampleTime
- Continuously calls the benchmark method until the iteration time expires, timing a random sample of invocations; the sampling frequency is adjusted automatically, but pauses that fall between samples may be missed.
- Measures how long the benchmark method takes to execute, reporting the distribution of sampled times including the min and max.
- Output will look like below

- SingleShotTime
- Calls the benchmark method only once per iteration, subject to the configured iteration count and timeout.
- Measures how long a single execution of the benchmark method takes to run. This is good for testing how it performs on a cold start (no JVM warm-up).
- Output will look like below

- All Modes
- Runs all of the above modes together.
- The lower the Score, the faster the method execution, but take the Error column into account as well.
- Output will look like below
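Instead of hard-coding a mode on the benchmark class with @BenchmarkMode, the mode can also be selected at launch time via the runner options. A minimal sketch, assuming the BenchmarkList class from the listing below is on the classpath (this is a JMH runner configuration, so it needs the JMH dependency to run):

```java
package com.demo;

import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class RunSingleMode {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(BenchmarkList.class.getSimpleName())
                .mode(Mode.AverageTime) // run only AverageTime instead of all modes
                .build();
        new Runner(opt).run();
    }
}
```

This is handy when you want to compare modes without editing and rebuilding the benchmark class each time.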

The code below was used to produce the outputs above. Note that we need to run a Maven clean and install after every change to the code:
package com.demo;

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Timeout;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@BenchmarkMode(Mode.All) // benchmark mode
@OutputTimeUnit(TimeUnit.MILLISECONDS) // time unit in which results are displayed
@State(Scope.Benchmark)
@Fork(value = 2, jvmArgs = {"-Xms2G", "-Xmx2G"}) // number of JVM instances, under which warmup and measurement iterations are executed
@Warmup(iterations = 3, time = 100, timeUnit = TimeUnit.MILLISECONDS) // warmup parameters
@Measurement(iterations = 3, time = 100, timeUnit = TimeUnit.MILLISECONDS) // measurement iteration parameters
@Timeout(time = 10, timeUnit = TimeUnit.MINUTES) // time limit for each iteration
public class BenchmarkList {

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(BenchmarkList.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }

    @State(Scope.Thread)
    public static class MyState {
        public int output = 10000;
        List<String> arrayList;
        List<String> linkedList;

        // Recreate the lists before every invocation so that each measurement
        // inserts into an empty list instead of one that keeps growing
        // across invocations and skews the results.
        @Setup(Level.Invocation)
        public void setUp() {
            arrayList = new ArrayList<>();
            linkedList = new LinkedList<>();
        }
    }

    @Benchmark
    public void arrayListAdd(MyState state) {
        for (int i = 0; i < state.output; i++) {
            state.arrayList.add("Hello" + i);
        }
    }

    @Benchmark
    public void linkedListAdd(MyState state) {
        for (int i = 0; i < state.output; i++) {
            state.linkedList.add("Hello" + i);
        }
    }
}
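To rebuild and run the benchmarks after a code change, the usual workflow looks like the following. This assumes the project was generated from the JMH Maven archetype, which packages the benchmarks into an uber-jar named benchmarks.jar; your artifact name may differ:

```shell
# rebuild the benchmark uber-jar after every code change
mvn clean install

# run all benchmarks in BenchmarkList from the packaged jar
java -jar target/benchmarks.jar BenchmarkList
```

Alternatively, running the main method of BenchmarkList directly from the IDE works too, as long as the code has been recompiled first.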