OpenPipe / ART
Agent Reinforcement Trainer: train multi-step agents for real-world tasks using GRPO. Give your agents on-the-job training. Reinforcement learning for Qwen 2.5, Qwen 3, GPT-OSS, Llama, and more!
# Agent Reinforcement Trainer

Train multi-step agents for real-world tasks using GRPO.

## 🚀 W&B Training: Serverless RL

**W&B Training (Serverless RL)** is the first publicly available service for flexibly training models with reinforcement learning. It manages your training and inference infrastructure automatically, letting you focus on defining your data, environment, and reward function, leading to faster feedback cycles, lower costs, and far less DevOps.

✨ **Key Benefits:**

- **40% lower cost** - Multiplexing on a shared production-grade inference cluster
- **28% faster training** - Scale to 2000+ concurrent requests across many GPUs
- **Zero infra headaches** - Fully managed infrastructure that stays healthy
- **Instant deployment** - Every checkpoint instantly available via W&B Inference

📖 Learn more about W&B Training →

## ART Overview

ART is an open-source RL framework that improves agent reliability by allowing LLMs to **learn from experience**. ART provides an ergonomic harness for integrating GRPO into any Python application. For a quick hands-on introduction, run one of the notebooks below. When you're ready to learn more, check out the docs.
## 📒 Notebooks

| Agent Task | Example Notebook | Description | Comparative Performance |
| --- | --- | --- | --- |
| **ART•E [Serverless]** | 🏋️ Train agent | Qwen3 14B learns to search emails using RULER | benchmarks |
| **2048 [Serverless]** | 🏋️ Train agent | Qwen3 14B learns to play 2048 | benchmarks |
| **ART•E LangGraph** | 🏋️ Train agent | Qwen 2.5 7B learns to search emails using LangGraph | [Link coming soon] |
| **MCP•RL** | 🏋️ Train agent | Qwen 2.5 3B masters the NWS MCP server | [Link coming soon] |
| **Temporal Clue** | 🏋️ Train agent | Qwen 2.5 7B learns to solve Temporal Clue | [Link coming soon] |
| **Tic Tac Toe** | 🏋️ Train agent | Qwen 2.5 3B learns to play Tic Tac Toe | benchmarks |
| **Codenames** | 🏋️ Train agent | Qwen 2.5 3B learns to play Codenames | benchmarks |
| **AutoRL [RULER]** | 🏋️ Train agent | Train Qwen 2.5 7B to master any task | [Link coming soon] |
| **Distillation (SFT)** | 🏋️ Train model | Distill text-to-SQL from Qwen 3 235B to Qwen 3 30B | [Link coming soon] |
| **Summarizer (SFT + RL)** | 🏋️ Train model | Train a document summarizer with SFT warmup then RL | [Link coming soon] |
| **SFT from a dataset** | 🏋️ Train model | Fine-tune Qwen 3 30B on text-to-SQL from a dataset | [Link coming soon] |

## 📰 ART News

Explore our latest research and updates on building SOTA agents.

- 🗞️ **ART now integrates seamlessly with LangGraph** - Train your LangGraph agents with reinforcement learning for smarter multi-step reasoning and improved tool usage.
- 🗞️ **MCP•RL: Teach Your Model to Master Any MCP Server** - Automatically train models to effectively use MCP server tools through reinforcement learning.
- 🗞️ **AutoRL: Zero-Data Training for Any Task** - Train custom AI models without labeled data using automatic input generation and RULER evaluation.
- 🗞️ **RULER: Easy Mode for RL Rewards** is now available for automatic reward generation in reinforcement learning.
- 🗞️ **ART·E: How We Built an Email Research Agent That Beats o3** demonstrates a Qwen 2.5 14B email agent outperforming OpenAI's o3.
- 🗞️ **ART Trainer: A New RL Trainer for Agents** enables easy training of LLM-based agents using GRPO.

📖 See all blog posts →

## Why ART?

- ART provides convenient wrappers for introducing RL training into **existing applications**. We abstract the training server into a modular service that your code doesn't need to interface with.
- **Train from anywhere.** Run the ART client on your laptop and let the ART server kick off an ephemeral GPU-enabled environment, or run on a local GPU.
- Integrations with hosted platforms like W&B, Langfuse, and OpenPipe provide flexible observability and **simplify debugging**.
- ART is customizable with **intelligent defaults**. You can configure training parameters and inference engine configurations to meet specific needs, or take advantage of the defaults, which have been optimized for training efficiency and stability.

## Installation

ART agents can be trained from any client machine that runs Python. To add ART to an existing project, install the `openpipe-art` package from PyPI.

## 🤖 ART•E Agent

Curious about how to use ART for a real-world task? Check out the ART•E Agent blog post, where we detail how we trained Qwen 2.5 14B to beat o3 at email retrieval!

## 🔁 Training Loop Overview

ART's functionality is divided into a **client** and a **server**. The OpenAI-compatible client is responsible for interfacing between ART and your codebase.
Using the client, you can pass messages and get completions from your LLM as it improves. The server runs independently on any machine with a GPU. It abstracts away the complexity of the inference and training portions of the RL loop while allowing for some custom configuration. An outline of the training loop is shown below:

1. **Inference**
   - Your code uses the ART client to perform an agentic workflow (usually executing several rollouts in parallel to gather data faster).
   - Completion requests are routed to the ART server, which runs the model's latest LoRA in vLLM.
   - As the agent executes, each `system`, `user`, and `assistant` message is stored in a Trajectory.
   - When a rollout finishes, your code assigns a `reward` to its Trajectory, indicating the performance of the LLM.
2. **Training**
   - When each rollout has finished, Trajectories are grouped and sent to the server. Inference is blocked while training executes.
   - The…
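The shape of this loop can be sketched in plain Python. This is an illustrative mock, not the real ART API: the `Trajectory` class, `rollout` function, and reward values below are stand-ins, and `group_advantages` shows the group-relative normalization that GRPO applies to the rewards of grouped trajectories (see the ART docs for the actual client interface).

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev

# Stand-in for ART's Trajectory: an ordered message log plus a scalar reward.
@dataclass
class Trajectory:
    messages: list = field(default_factory=list)
    reward: float = 0.0

def rollout(task: str) -> Trajectory:
    """Mock agentic rollout: records system/user/assistant messages."""
    traj = Trajectory()
    traj.messages.append({"role": "system", "content": "You are a helpful agent."})
    traj.messages.append({"role": "user", "content": task})
    # In a real rollout this completion would come from the ART server,
    # which serves the model's latest LoRA via vLLM.
    traj.messages.append({"role": "assistant", "content": f"(completion for: {task})"})
    return traj

def group_advantages(group: list[Trajectory]) -> list[float]:
    """GRPO-style normalization: a trajectory's advantage is its reward
    relative to the mean (and spread) of the rewards in its group."""
    rewards = [t.reward for t in group]
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma or 1.0) for r in rewards]

# Gather a group of rollouts for the same task; your code would score each
# Trajectory as it finishes. Here we fake increasing rewards for the demo.
group = [rollout("find the billing email") for _ in range(4)]
for i, traj in enumerate(group):
    traj.reward = float(i)

advantages = group_advantages(group)
print(advantages)  # positive values mark rollouts that beat the group average
```

Grouping is what makes GRPO work without a learned value function: because advantages are computed relative to sibling rollouts of the same task, only the *relative* ordering of your rewards matters, not their absolute scale.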