
jrswab / axe

A lightweight CLI for running single-purpose AI agents. Define focused agents in TOML and trigger them from anywhere: pipes, git hooks, cron, or the terminal.

637 stars
13 forks
9 issues
Languages: Go, Dockerfile, Makefile

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing jrswab/axe in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/jrswab/axe)

Repository Overview (README excerpt)


axe

A CLI tool for managing and running LLM-powered agents.

Why Axe?

Most AI tooling assumes you want a chatbot: a long-running session with a massive context window doing everything at once. But that's not how good software works. Good software is small, focused, and composable.

Axe treats LLM agents the same way Unix treats programs. Each agent does one thing well. You define it in a TOML file, give it a focused skill, and run it from the command line. Pipe data in, get results out. Chain agents together. Trigger them from cron, git hooks, or CI, whatever you already use. No daemon, no GUI, no framework to buy into. Just a binary and your configs.

Overview

Axe orchestrates LLM-powered agents defined via TOML configuration files. Each agent has its own system prompt, model selection, skill files, context files, working directory, persistent memory, and the ability to delegate to sub-agents. Axe is the executor, not the scheduler: it is designed to be composed with standard Unix tools (cron, git hooks, pipes, file watchers) rather than reinventing scheduling or workflow orchestration.
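To make the overview concrete, an agent definition of roughly this shape is what the description implies. All field names below are illustrative assumptions, not axe's documented configuration schema:

```toml
# Hypothetical agent definition. Field names are illustrative only and
# are NOT taken from axe's documented schema; they mirror the concepts
# listed in the overview (system prompt, model, skills, context files,
# working directory, memory, sub-agents).
name = "code-reviewer"
provider = "anthropic"                # one of: anthropic, openai, ollama
model = "claude-sonnet"               # model selection
system_prompt = "You are a focused code reviewer."
skills = ["skills/review/SKILL.md"]   # reusable instruction sets
context_files = ["CONTRIBUTING.md"]   # extra files loaded into context
working_dir = "."                     # file tools are sandboxed here
memory = true                         # persistent timestamped markdown log
subagents = ["summarizer"]            # agents this agent may delegate to
```

Because the definition is a plain TOML file, it can live in version control next to the code it operates on.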
Features

• **Multi-provider support** — Anthropic, OpenAI, and Ollama (local models)
• **TOML-based agent configuration** — declarative, version-controllable agent definitions
• **Sub-agent delegation** — agents can call other agents via LLM tool use, with depth limiting and parallel execution
• **Persistent memory** — timestamped markdown logs that carry context across runs
• **Memory garbage collection** — LLM-assisted pattern analysis and trimming
• **Skill system** — reusable instruction sets that can be shared across agents
• **Stdin piping** — pipe any output directly into an agent
• **Dry-run mode** — inspect resolved context without calling the LLM
• **JSON output** — structured output with metadata for scripting
• **Built-in tools** — file operations (read, write, edit, list) and shell command execution, all sandboxed to the agent's working directory
• **MCP tool support** — connect to external MCP servers for additional tools via SSE or streamable-HTTP transport
• **Minimal dependencies** — four direct dependencies (cobra, toml, mcp-go-sdk, x/net); all LLM calls use the standard library

Installation

Requires Go 1.24+; axe can also be built from source.

Quick Start

1. Initialize the configuration directory. This creates the directory structure with a sample skill and a default file for provider credentials.
2. Scaffold a new agent.
3. Edit its configuration.
4. Run the agent.
5. Pipe input from other tools.

Examples

A directory of ready-to-run agents ships with the repository; copy them into your config and use them immediately. It includes a code reviewer, a commit message generator, and a text summarizer, each with a focused SKILL.md, along with full setup instructions.

Docker

Axe provides a Docker image for running agents in an isolated, hardened container.
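Before the Docker setup, here is a sketch of the pipe-driven composition described in the Quick Start. The invocation `axe run summarizer` is an assumption; the real subcommand names are elided in this page's CLI reference table:

```shell
# Sketch of composing axe with ordinary shell tools.
# "axe run summarizer" is an assumed invocation, NOT the documented CLI.
if command -v axe >/dev/null 2>&1; then
  # Pipe recent commit subjects into a summarizer agent.
  summary=$(git log --oneline -n 20 | axe run summarizer)
else
  # Degrade gracefully when axe is not on PATH.
  summary="axe not on PATH; skipping"
fi
echo "$summary"
```

The same pattern works from a cron entry or a git hook, since the agent is just a command that reads stdin and writes stdout.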
Build the Image

Multi-architecture builds (linux/amd64, linux/arm64) are supported via buildx.

Run an Agent

Mount your config directory and pass API keys as environment variables; stdin is piped in with a dedicated flag. Without a config volume mounted, axe exits with code 2 (config error) because no agent TOML files exist.

Running a Single Agent

The examples above mount the entire config directory. If you only need to run one agent with one skill, mount just those files to their expected XDG paths inside the container. No credentials file is needed when API keys are passed via environment variables, and paths declared in the agent's TOML resolve automatically against the XDG config path inside the container, so no extra flag is needed.

To use a **different skill** than the one declared in the agent's TOML, override it with the skill-override flag. In this case you only mount the replacement skill; the skill declared in the TOML is ignored entirely. If the agent declares sub-agents, all referenced agent TOMLs and their skills must also be mounted.

Persistent Data

Agent memory persists across runs when you mount a data volume.

Docker Compose

A compose file is included for running axe alongside a local Ollama instance, covering cloud-provider-only runs (no Ollama), runs with an Ollama sidecar, and pulling an Ollama model.

> **Note:** The compose service declares a dependency on the Ollama service, so
> Docker Compose will attempt to start Ollama whenever axe is started via
> compose, even for cloud-only runs. For cloud-only usage without Ollama, invoke
> docker directly rather than going through compose.

Ollama on the Host

If Ollama runs directly on the host (not via compose), point axe at the host's address; the correct address differs between Linux and Docker Desktop on macOS / Windows.

Security

The container runs with the following hardening by default (via compose):

• **Non-root user** — UID 10001
• **Read-only root filesystem** — writable locations are the config mount, data mount, and tmpfs
• **All capabilities dropped**
• **No privilege escalation**

These settings do not restrict outbound network access.
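The hardening defaults listed above map onto standard Compose options. The sketch below assumes a service name, image tag, and mount paths; only the hardening settings themselves (non-root UID 10001, read-only root filesystem, tmpfs, dropped capabilities, no privilege escalation) come from this page:

```yaml
# Sketch only: service name, image tag, and host paths are assumptions,
# not taken from the project's actual compose file.
services:
  axe:
    image: axe:latest
    user: "10001"                  # non-root user
    read_only: true                # read-only root filesystem
    cap_drop: [ALL]                # all capabilities dropped
    security_opt:
      - no-new-privileges:true     # no privilege escalation
    tmpfs:
      - /tmp                       # one of the writable locations
    volumes:
      - ./config:/config           # config mount (read-write)
      - ./data:/data               # data mount (persistent memory)
```

Note that none of these options limit outbound network access, which matches the caveat above.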
To isolate an agent that only talks to a local Ollama instance, restrict the container's network access and connect it to the shared Docker network manually.

Volume Mounts

| Container Path | Purpose | Default Access |
|---|---|---|
| (config mount) | Agent TOML files, skills, and the credentials file | Read-write |
| (data mount) | Persistent memory files | Read-write |

Config is read-write because the scaffolding and editing commands write into it; mount it read-only if you only run agents.

Environment Variables

| Variable | Required | Purpose |
|---|---|---|
| | If using Anthropic | API authentication |
| | If using OpenAI | API authentication |
| | If using Ollama | Ollama endpoint (a default is set in compose) |
| | No | Override Anthropic API endpoint |
| | No | Override OpenAI API endpoint |

CLI Reference

Commands

| Command | Description |
|---|---|
| | Run an agent |
| | List all configured agents |
| | Display an agent's full configuration |
| | Scaffold a new agent TOML file |
| | Open an agent TOML in the configured editor |
| | Print the configuration directory path |
| | Initialize the config directory with defaults |
| | Run memory garbage collection for an agent |
| | Run GC on all memory-enabled agents |
| | Print the current version |

Run Flags

| Flag | Default | Descript…