
sbt / sbt-jmh

"Trust no one, bench everything." - sbt plugin for JMH (Java Microbenchmark Harness)

798 stars
87 forks
29 issues
Scala

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing sbt/sbt-jmh in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/sbt/sbt-jmh)

Repository Overview (README excerpt)


sbt-jmh
=======

SBT plugin for running OpenJDK JMH benchmarks.

JMH about itself:
-----------------

JMH is a Java harness for building, running, and analysing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.

Please read "nanotrusting nanotime" and other blog posts on micro-benchmarking (or why most benchmarks are wrong), and make sure your benchmark is valid before you set out to implement it.

Versions
--------

| Plugin version | Default JMH version | Notes |
|----------------|---------------------|:-----:|
| (sbt 1.3.0/2.0) | | |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | JMH supports 2.x |
| (sbt 1.3.0+) | | |
| (sbt 1.3.0+) | | profilers now in JMH core |
| (sbt 0.13.17 / sbt 1.1.4) | | support JDK 11 |
| (sbt 0.13.17 / sbt 1.1.4) | | support JDK 11 |
| (sbt 0.13.17 / sbt 1.1.4) | | support of GraalVM |
| (sbt 0.13.17 / sbt 1.1.1) | | JMH bugfix release |
| (sbt 0.13.16 / sbt 1.0) | | minor bugfix release |
| (sbt 0.13.16 / sbt 1.0) | | minor bugfix release |
| (sbt 0.13.16 / sbt 1.0) | | async profiler, flame-graphs |
| ... | ... | |

Uninteresting versions are skipped in the listing above. Always use the newest release that ships the JMH version you need; in general, you should stick to the latest version anyway.

Adding to your project
----------------------

Since sbt-jmh is an **AutoPlugin**, all you need to do in order to activate it is to add the plugin to your build, and then enable it in the projects where you want it (useful in multi-project builds, as you can enable it only where you need it). If you define your project in Scala build code rather than a `.sbt` file, you also need the corresponding plugin import. You can read more about auto plugins on sbt's documentation pages.

Write your benchmarks in your benchmark project's main sources.
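The activation steps above can be sketched as follows (a sketch, assuming the plugin's usual `pl.project13.scala` coordinates; `x.y.z` is a placeholder, not a real release number; pick the version you need from the table):

```scala
// project/plugins.sbt -- add the sbt-jmh plugin to the build
// ("x.y.z" is a placeholder version)
addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "x.y.z")
```

```scala
// build.sbt -- enable the plugin only in the project that hosts benchmarks
lazy val benchmarks = (project in file("benchmarks"))
  .enablePlugins(JmhPlugin)
```

If the build is defined in Scala code instead of a `.sbt` file, the plugin object needs an explicit import as well (assumed here to be `import pl.project13.scala.sbt.JmhPlugin`).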
They will be picked up and instrumented by the plugin. JMH has a very specific way of working (it generates loads of code), so you should prepare a separate project for your benchmarks. In it, simply invoke the plugin's run task to run your benchmarks; all JMH options work as expected, and a help flag prints the full list. A typical invocation asks for "3 iterations", "3 warmup iterations", "1 fork" and "1 thread". Please note that benchmarks should usually be executed in at least 10 iterations (as a rule of thumb), but more is better. **For "real" results we recommend warming up with at least 10 to 20 iterations, and then measuring 10 to 20 iterations again. Forking the JVM is required to avoid falling into specific optimisations (no JVM optimisation is really "completely" predictable).**

If your benchmark should be a module in a multi-module project and needs access to another module's test classes, you might want to define your benchmarks under test sources as well (because IntelliJ does not support "compile->test" dependencies). While this is not directly supported, it can be achieved with some tweaks: assuming the benchmarks live in one module and need access to test classes from another, you have to declare that dependency in your build definition.

Options
-------

Please invoke the run task's help to get a full list of run options as well as output format options.

**Useful hint**: If you plan to aggregate the collected data, you should have a look at the available output formats. For example, it is possible to keep the benchmark results as CSV or JSON files for later regression analysis.

Using Java Flight Recorder / async-profiler
-------------------------------------------

**NOTE**: sbt-jmh's integration with async-profiler and Java Flight Recorder has been contributed to the JMH project as of JMH 1.25 and removed from this project. Please migrate to JMH's built-in `-prof jfr` / `-prof async` profilers, and use their help output to list the available options.

Examples
--------

The examples are Scala-fied versions of the examples from the original JMH repo; check them out, and run them!
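A minimal benchmark, as a sketch (the package and class names are illustrative; the annotation comes from JMH's `org.openjdk.jmh.annotations` package):

```scala
package bench // hypothetical package name

import org.openjdk.jmh.annotations.Benchmark

// JMH generates the measurement harness around this class;
// the plugin picks it up from the benchmark project's sources.
class StringConcatBenchmark {

  @Benchmark
  def concat: String = "a" + System.nanoTime() // avoid constant folding
}
```

An invocation along the lines of `sbt "Jmh/run -i 3 -wi 3 -f 1 -t 1"` would then correspond to the "3 iterations, 3 warmup iterations, 1 fork, 1 thread" example described above (treat the exact task name as an assumption here).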
Running the examples produces the usual JMH summary output.

Advanced: Using custom Runners
------------------------------

It is possible to hand the running of JMH over to a runner implemented by you, which allows you to programmatically access all test results and modify JMH arguments before you actually invoke it. To use a custom runner class, simply invoke it via the plugin's runMain task; an example is available in `plugin/src/sbt-test/sbt-jmh/runMain` (open the build file). To replace the runner class that is used by the default run task, you can set the class in your build file; an example is available in `plugin/src/sbt-test/sbt-jmh/custom-runner` (open the build file).

Contributing
============

Yes, pull requests and opening issues are very welcome! The plugin is maintained on a best-effort basis, so submitting a PR is the best way of getting something done :-)

You can publish the plugin locally with sbt's `publishLocal` task. Please test your changes by adding to the scripted test suite under `plugin/src/sbt-test/sbt-jmh/`, which can be run with the `scripted` task.

Special thanks
--------------

Special thanks for contributing async-profiler and flame-graphs support and other improvements go to @retronym of Lightbend's Scala team.
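A custom runner along the lines described in the "custom Runners" section might look like this sketch, using JMH's programmatic API (`Runner` and `OptionsBuilder` from `org.openjdk.jmh.runner`; the object name and include pattern are illustrative):

```scala
package bench // hypothetical package name

import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder
import scala.jdk.CollectionConverters._

object MyRunner {
  def main(args: Array[String]): Unit = {
    // Modify JMH arguments programmatically before invoking it.
    val opts = new OptionsBuilder()
      .include(".*Benchmark.*") // which benchmarks to run (illustrative regex)
      .forks(1)
      .build()

    // run() returns the collected results, giving programmatic access to them.
    val results = new Runner(opts).run()
    results.asScala.foreach { r =>
      println(s"${r.getPrimaryResult.getLabel}: ${r.getPrimaryResult.getScore}")
    }
  }
}
```

Such a runner could then be invoked through the plugin's runMain task, as the section above describes.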