
Arize-ai / phoenix

AI Observability & Evaluation

8,896 stars
764 forks
463 issues
Jupyter Notebook · Python · TypeScript

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing Arize-ai/phoenix in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/Arize-ai/phoenix)
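For reference, the shields.io URL in the badge above is just a static-badge path with the label text percent-encoded. A minimal sketch of assembling such a URL (note: shields.io also requires escaping `-` as `--` and `_` as `__` inside segments, which this label happens not to contain):

```python
from urllib.parse import quote

label, message, color = "Analyzed by", "RepoMind", "4F46E5"

# Percent-encode each segment; spaces become %20.
badge = (
    f"https://img.shields.io/badge/{quote(label)}-{quote(message)}-{color}"
    "?style=for-the-badge"
)
# badge matches the image URL used in the markdown snippet above.
```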

Repository Overview (README excerpt)


Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting. It provides:

- **_Tracing_** - Trace your LLM application's runtime using OpenTelemetry-based instrumentation.
- **_Evaluation_** - Leverage LLMs to benchmark your application's performance using response and retrieval evals.
- **_Datasets_** - Create versioned datasets of examples for experimentation, evaluation, and fine-tuning.
- **_Experiments_** - Track and evaluate changes to prompts, LLMs, and retrieval.
- **_Playground_** - Optimize prompts, compare models, adjust parameters, and replay traced LLM calls.
- **_Prompt Management_** - Manage and test prompt changes systematically using version control, tagging, and experimentation.

Phoenix is vendor and language agnostic, with out-of-the-box support for popular frameworks (OpenAI Agents SDK, Claude Agent SDK, LangGraph, Vercel AI SDK, Mastra, CrewAI, LlamaIndex, DSPy) and LLM providers (OpenAI, Anthropic, Google GenAI, Google ADK, AWS Bedrock, OpenRouter, LiteLLM, and more). For details on auto-instrumentation, check out the OpenInference project.

Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.

**Installation**

Install Phoenix via `pip` or `conda`. Phoenix container images are available via Docker Hub and can be deployed using Docker or Kubernetes. Arize AI also provides cloud instances at app.phoenix.arize.com.

**Packages**

The main `arize-phoenix` package includes the entire Phoenix platform. However, if you have deployed the Phoenix platform, there are lightweight Python sub-packages and TypeScript packages that can be used in conjunction with the platform.
**Python Subpackages**

| Package | Description |
| --- | --- |
| arize-phoenix-otel | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| arize-phoenix-client | Lightweight client for interacting with the Phoenix server via its OpenAPI REST interface |
| arize-phoenix-evals | Tooling to evaluate LLM applications including RAG relevance, answer relevance, and more |

**TypeScript Subpackages**

| Package | Description |
| --- | --- |
| @arizeai/phoenix-otel | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| @arizeai/phoenix-client | Client for the Arize Phoenix API |
| @arizeai/phoenix-evals | TypeScript evaluation library for LLM applications (alpha release) |
| @arizeai/phoenix-mcp | MCP server implementation for Arize Phoenix providing a unified interface to Phoenix's capabilities |
| @arizeai/phoenix-cli | CLI for fetching traces, datasets, and experiments for use with Claude Code, Cursor, and other coding agents |

**Tracing Integrations**

Phoenix is built on top of OpenTelemetry and is vendor, language, and framework agnostic. For details about tracing integrations and example applications, see the OpenInference project.

**Python Integrations**: OpenAI, OpenAI Agents, LlamaIndex, DSPy, AWS Bedrock, LangChain, MistralAI, Google GenAI, Google ADK, Guardrails, VertexAI, CrewAI, Haystack, [LiteLLM](http _...truncated for preview_
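To make the tracing model concrete: OpenTelemetry-based instrumentation wraps each LLM call in a span that records timing plus semantic attributes. The sketch below is a hedged, stdlib-only illustration of what such a span carries; real Phoenix tracing uses the OpenTelemetry SDK and OpenInference instrumentors, and the `Span` class and `traced_llm_call` helper here are invented for illustration (attribute keys such as `llm.model_name`, `input.value`, and `output.value` follow OpenInference-style naming).

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy stand-in for an OpenTelemetry span."""
    name: str
    attributes: dict = field(default_factory=dict)
    start: float = 0.0
    end: float = 0.0

def traced_llm_call(prompt, model="gpt-4o-mini"):
    # Record the call's inputs as span attributes before invoking the model.
    span = Span(name="llm.completion",
                attributes={"llm.model_name": model,
                            "input.value": prompt})
    span.start = time.time()
    response = f"echo: {prompt}"   # stand-in for a real provider call
    span.end = time.time()
    # Attach the output so the trace shows the full request/response pair.
    span.attributes["output.value"] = response
    return response, span
```

An auto-instrumentor does essentially this transparently, patching the provider's client so application code needs no manual span management.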