MilesCranmer / AirspeedVelocity.jl
Easily benchmark a Julia package over its commit history
Repository Overview (README excerpt)
## AirspeedVelocity.jl

AirspeedVelocity.jl strives to make it easy to benchmark Julia packages over their lifetime. It is inspired by asv.

This package allows you to:

- Generate benchmarks directly from the terminal with an easy-to-use CLI.
- Compare many commits/tags/branches at once.
- Plot those benchmarks, automatically flattening your benchmark suite into a list of plots with generated titles.
- Run in CI with a one-line GitHub Action that comments benchmark results on every PR.

This package also freezes the benchmark script at a particular revision, so there is no worry about old history overwriting the benchmark.

Demo: https://github.com/MilesCranmer/AirspeedVelocity.jl/assets/7593028/f27b04ef-8491-4f49-a312-4df0fae00598

Contents:

- Installation
- Examples
- Using in CI
  - Option 1: PR Comments
  - Option 2: Job Summary
  - Multiple Julia versions
  - CI Parameters
- Further examples
- CLI Reference
- Related packages

### Installation

You can install the CLI through Julia's package manager. This installs two executables at `~/.julia/bin` - make sure to have that directory on your `PATH`.

### Examples

You may use the CLI to generate benchmarks for any package with, e.g., the `benchpkg` command. By default, this benchmarks the package defined in the current directory at its current dirty state, against the default branch (i.e., `main` or `master`), over all benchmarks defined in `benchmark/benchmarks.jl` using BenchmarkTools.jl. You should have a `SUITE` defined in this file, to which you have added benchmarks. This will then print a markdown table of the results while also saving the JSON results to the current directory. See the further examples for more details.

### Using in CI

AirspeedVelocity.jl provides two ways to display benchmark results in GitHub Actions:

**Option 1: PR Comments.** Posts benchmark results as comments on pull requests; add a workflow file to your package.

**Option 2: Job Summary.** Displays benchmark results in the GitHub Actions job summary (visible in the Actions tab).

Both workflows run AirspeedVelocity and display results with separate, collapsible tables for runtime and memory.
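As an illustration of the Option 1 setup described above, a minimal workflow sketch might look like the following. The action reference, version tag, and permissions shown here are assumptions for illustration, not taken from this excerpt; consult the repository's CI documentation for the real inputs.

```yaml
# .github/workflows/benchmark.yml - hypothetical sketch, not verbatim from the docs
name: Benchmark a pull request

on:
  pull_request:

permissions:
  pull-requests: write  # assumed: needed for the action to post a PR comment

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      # The action reference below is an assumption for illustration.
      - uses: MilesCranmer/AirspeedVelocity.jl@main
```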
### Multiple Julia versions

Each matrix leg writes its own comment (Option 1) or section in the job summary (Option 2).

### CI Parameters

| Input | Default | What it does |
|-------|---------|----------------------------------------------|
|       |         | AirspeedVelocity version to install          |
|       |         | Julia version to install                     |
|       |         | Output to job summary instead of PR comment  |
|       |         | Whether to tune benchmarks first             |
|       |         | Which tables to generate                     |
|       |         | Upload PNG plots as artifact                 |
|       |         | Custom benchmark script path                 |
|       |         | List passed to benchpkg (comma-separated)    |
|       |         | Commit at which to freeze the script         |
|       |         | Additional option list                       |
|       |         | Flags for the Julia runner                   |
|       |         | Extra packages (comma-separated)             |
|       |         | Force a time unit (excluding load time)      |

### Further examples

You can configure all options with the CLI flags. For example, you can benchmark a registered package at several revisions (tags, branches, or commit hashes) at once. This will use the benchmark script as it was defined at a fixed revision, and then save the JSON results in the current directory. We can explicitly view the results of the benchmark as a table with `benchpkgtable`. We can also generate plots of the revisions with `benchpkgplot`, which will generate a pdf file for each set of 5 plots, showing the change with each revision.

You can also provide a custom benchmark. For example, let's say you have a script file defining a benchmark suite (we always need to define the `SUITE` variable as a `BenchmarkGroup`). Inside this script, we also have access to a constant holding the package version, to allow for different behavior depending on the tag. We can run this benchmark over a package's history, also specifying the output directory and extra flags to pass to the Julia executable, and then visualize the results with `benchpkgplot`.

### CLI Reference

For running benchmarks, you can use the `benchpkg` command, which is installed to `~/.julia/bin`. You can also generate a table from stored JSON results with `benchpkgtable`. For plotting, you can use `benchpkgplot`. If you prefer to use the Julia API, there is a function for generating benchmark data programmatically; see the API documentation.
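To make the custom-benchmark requirement above concrete, here is a minimal sketch of such a script using the standard BenchmarkTools.jl API. The file name and benchmark contents are illustrative; the only requirement stated above is a top-level `SUITE` variable that is a `BenchmarkGroup`.

```julia
# benchmarks.jl - minimal sketch of a benchmark script (contents illustrative)
using BenchmarkTools

# AirspeedVelocity expects a top-level `SUITE::BenchmarkGroup`.
const SUITE = BenchmarkGroup()

# Nested groups are flattened into generated plot and table titles.
SUITE["sorting"] = BenchmarkGroup()
SUITE["sorting"]["small"] = @benchmarkable sort(x) setup = (x = rand(100))
SUITE["sorting"]["large"] = @benchmarkable sort(x) setup = (x = rand(10_000))
```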
### Related packages

Also be sure to check out PkgBenchmark.jl. PkgBenchmark.jl is a simple wrapper of BenchmarkTools.jl that interfaces it with Git, and is a good choice for building custom analysis workflows. However, for me this wrapper is a bit too thin, which is why I created this package. AirspeedVelocity.jl tries to have more features and workflows readily available. It also emphasizes a CLI (though there is a Julia API), as my subjective view is that a CLI is better suited to this kind of side-by-side workflow.