
aeron-io / benchmarks

Latency benchmarks for messaging

126 stars
53 forks
0 issues
Java · Shell · C++

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing aeron-io/benchmarks in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/aeron-io/benchmarks)

Repository Overview (README excerpt)


Benchmarks

This project is a collection of benchmarks primarily targeting the Aeron project. The benchmarks can be divided into two major categories:

• Messaging (remote) benchmarks. The core of the remote benchmarks is a benchmarking harness class that sends messages to the remote node(s) and times the responses as they are received. During a test run the harness sends messages at the specified fixed rate with the specified payload size and burst size, and at the end it produces a latency histogram for the entire test run. The harness relies on an implementation of an abstract message-transceiver class which is responsible for sending messages to, and receiving messages from, the remote node. _NB: These benchmarks are written in Java, but they can target systems in other languages provided there is a Java client for them._
• Other benchmarks. A collection of benchmarks that run on a single machine (e.g. the Agrona ring buffer, Aeron IPC, Aeron C++ benchmarks, JDK queues, etc.).

Systems under test

This section lists the systems under test which implement the remote benchmarks and the corresponding test scenarios.

Aeron

For Aeron the following test scenarios were implemented:

• Echo benchmark. An Aeron transport benchmark consisting of a client process that sends messages over UDP via an exclusive publication, using either a zero-copy or a copying API (a configuration option controls which API is used; a default is applied if no value is specified), and a server process which echoes the complete (re-assembled) received messages back.
• Live replay from a remote Archive. The client publishes messages to the server over UDP. The server pipes those messages into a local IPC publication which records them into an Archive. Finally, the client subscribes to the replay from that Archive over UDP and receives the persisted messages.
• Live recording to a local Archive. The client publishes messages over UDP to the server.
It also has a recording running on that publication using a local Archive. The server simply pipes the messages back. Finally, the client performs a controlled poll on the subscription from the server, limited by the "recording progress" which it receives via the recording events. The biggest difference between scenario 2 and this scenario is that there is no replay of recorded messages, and hence no reading from disk, while still allowing consumption of only those messages that were successfully persisted.
• Cluster benchmark. The client sends messages to the Aeron Cluster over UDP. The Cluster sequences the messages into a log, reaches consensus on the received messages, processes them and then replies to the client over UDP.
• Aeron Echo MDC benchmark. An extension of the Aeron Echo benchmark which uses an MDC (or a multicast) channel to send the same data to multiple receivers. Only one receiver at a time responds to a given incoming message, ensuring that the number of replies matches the number of messages sent.
• Aeron Archive Replay MDC benchmark. An Aeron Archive benchmark that performs multiple replays. The benchmark consists of at least three nodes:
  • the client node, sending the data;
  • the Archive node, recording the data stream to disk;
  • the replay nodes, requesting replay of the recording from the Archive.
Similar to the Aeron Echo MDC benchmark, only one replay node at a time sends a response message back to the client node, thus ensuring that the number of messages sent and the number of replies match. Please read the documentation in the scripts/aeron directory for more information.

gRPC

For gRPC there is only an echo benchmark with a single implementation:

• Streaming client. The client uses the streaming API to send and receive messages.

Please read the documentation in the scripts/grpc directory for more information.

Kafka

Unlike gRPC, which simply echoes messages, Kafka will persist them, so the benchmark is similar to Aeron's replay from a remote Archive.
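All of the scenarios above share the same round-trip shape described at the top: pace sends at a fixed rate, time each echoed response, and summarize the round-trip times. The toy bash sketch below is entirely hypothetical and unrelated to the project's actual harness; it uses a `cat` coprocess as a stand-in for the remote echo node:

```shell
#!/usr/bin/env bash
# Toy echo round-trip timer: NOT the project's harness, just its shape.
set -euo pipefail

coproc ECHO { cat; }           # stand-in for the remote echo node

n=100                          # number of round trips to measure
rtts=()
for ((i = 0; i < n; i++)); do
    sleep 0.001                               # crude fixed-rate pacing (~1000 msgs/sec)
    start=$(date +%s%N)                       # nanosecond timestamp (GNU date)
    printf 'msg-%d\n' "$i" >&"${ECHO[1]}"     # "send" one message
    IFS= read -r _ <&"${ECHO[0]}"             # wait for the echo
    rtts+=( $(( $(date +%s%N) - start )) )    # record the round-trip time
done

# Sort and report simple percentiles (a real harness records a full histogram).
sorted=( $(printf '%s\n' "${rtts[@]}" | sort -n) )
echo "p50=${sorted[n / 2]}ns p99=${sorted[n * 99 / 100]}ns"
```

The real harness additionally re-assembles fragmented messages and accounts for bursts, but the measure-at-a-fixed-rate loop is the essential idea.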
Please read the documentation in the corresponding scripts directory for more information.

Remote benchmarks (multiple machines)

The scripts directory contains the scripts to run the _remote benchmarks_, i.e. the benchmarks that involve multiple machines, where one machine is the _client_ (the benchmarking harness) and the rest are the _server nodes_. The benchmarking harness and its configuration are each implemented by a dedicated class.

Before the benchmarks can be executed they have to be built by running the build command in the root directory of this project. Once complete, the build will create a tar file that should be deployed to the remote machines.

Running benchmarks via SSH (i.e. the automated way)

The easiest way to run the benchmarks is via the wrapper scripts, which invoke the benchmark scripts remotely using the SSH protocol. When a script finishes its execution it downloads an archive with the results (histograms). The following steps are required to run the benchmarks:

• Build the tar file (see above).
• Copy the tar file to the destination machines and unpack it.
• On the local machine, create a wrapper script that sets all the necessary configuration parameters for the target benchmark (see the example below).
• Run the wrapper script from step 3.
• Once the execution has finished, an archive file with the results will be downloaded to the local machine; by default it is placed in a results directory under the project folder.

Here is an example of a wrapper script for the Aeron echo benchmarks. _NB: All the values in angle brackets will have to be replaced with the actual values._

Running benchmarks manually (single-shot execution)

The following steps are required to run the benchmarks:

• Build the tar file (see above).
• Copy the tar file to the destination machines and unpack it.
• Follow the documentation for the particular benchmark to know which scripts to run and in which order.
• Run the script, specifying the _benchmark client script_ to execute.…
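The wrapper-script example referenced in the SSH section did not survive extraction from the README. Purely as an illustration of the shape such a wrapper takes, here is a sketch; every variable name and script path below is hypothetical, so consult the scripts shipped in the tar file for the real ones:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper script: all variable and script names here are
# illustrative, not the repository's actual ones.
set -euo pipefail

export SSH_USER='<user>'                      # replace angle-bracket values
export SSH_CLIENT_NODE='<client-host>'        # machine running the harness
export SSH_SERVER_NODE='<server-host>'        # machine running the echo node
export MESSAGE_RATE='100000'                  # fixed send rate, messages/sec
export MESSAGE_LENGTH='288'                   # payload size in bytes
export BURST_SIZE='1'                         # messages per burst
export REMOTE_DIR='<path-where-tar-was-unpacked>'

# Delegate to the (hypothetical) SSH driver for the Aeron echo benchmark.
exec "${REMOTE_DIR}/scripts/aeron/remote-echo-benchmark" "$@"
```

The pattern is the point: the wrapper carries only configuration and delegates the actual orchestration to the driver script that ships with the benchmarks.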