kserve / kserve

Standardized Distributed Generative and Predictive AI Inference Platform for Scalable, Multi-Framework Deployment on Kubernetes

5,213 stars
1,407 forks
649 issues
Go · Python · Dockerfile

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing kserve/kserve in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/kserve/kserve)

Repository Overview (README excerpt)

KServe

KServe is a standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes. KServe is used by many organizations and is a Cloud Native Computing Foundation (CNCF) incubating project. For more details, visit the KServe website.

Why KServe?

Single platform that unifies Generative and Predictive AI inference on Kubernetes. Simple enough for quick deployments, yet powerful enough to handle enterprise-scale AI workloads with advanced features.

Features

**Generative AI**

• 🧮 **Optimized Backends**: Support for vLLM and llm-d for optimized LLM serving performance
• 📌 **Standardization**: OpenAI-compatible inference protocol for seamless integration with LLMs (see the client sketch after this excerpt)
• 🚅 **GPU Acceleration**: High-performance serving with GPU support and optimized memory management for large models
• 💾 **Model Caching**: Intelligent model caching to reduce loading times and improve response latency for frequently used models
• 🗂️ **KV Cache Offloading**: Advanced memory management with KV cache offloading to CPU/disk for handling longer sequences efficiently
• 📈 **Autoscaling**: Request-based autoscaling capabilities optimized for generative workload patterns
• 🔧 **Hugging Face Ready**: Native support for Hugging Face models with streamlined deployment workflows

**Predictive AI**

• 🧮 **Multi-Framework**: Support for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and more
• 🔀 **Intelligent Routing**: Seamless request routing between predictor, transformer, and explainer components with automatic traffic management
• 🔄 **Advanced Deployments**: Canary rollouts, inference pipelines, and ensembles with InferenceGraph
• ⚡ **Autoscaling**: Request-based autoscaling with scale-to-zero for predictive workloads
• 🔍 **Model Explainability**: Built-in support for model explanations and feature attribution to understand prediction reasoning
• 📊 **Advanced Monitoring**: Enables payload logging, outlier detection, adversarial detection, and drift detection
• 💰 **Cost Efficient**: Scale-to-zero on expensive resources when not in use, reducing infrastructure costs

Learn More

To learn more about KServe, how to use its supported features, and how to participate in the KServe community, please follow the KServe website documentation. We have also compiled a list of presentations and demos that dive into various details.

:hammer_and_wrench: Installation

Standalone Installation

• **Standard Kubernetes Installation**: Compared to the Serverless Installation, this is a more **lightweight** option; however, it does not support canary deployment or request-based autoscaling with scale-to-zero.
• **Knative Installation**: By default, KServe installs Knative for **serverless deployment** of InferenceServices.
• **ModelMesh Installation**: You can optionally install ModelMesh to enable **high-scale**, **high-density**, and **frequently-changing** model serving use cases.
• **Quick Installation**: Install KServe on your local machine.

Kubeflow Installation

KServe is an important addon component of Kubeflow; learn more from the Kubeflow KServe documentation. Check out the guides for running on AWS or on OpenShift Container Platform.

:flight_departure: Create your first InferenceService (a deployment sketch follows this excerpt)
:bulb: Roadmap
:blue_book: InferenceService API Reference
:toolbox: Developer Guide
:writing_hand: Contributor Guide
:handshake: Adopters

Star History

Contributors

Thanks to all of our amazing contributors!
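
To make the "Create your first InferenceService" step concrete, here is a minimal sketch using KServe's Python SDK (the `kserve` package), following the pattern in the project's documentation. The service name, namespace, and the public example `storage_uri` are illustrative, and exact constant and method names can vary between SDK versions.

```python
# Minimal sketch: deploy a scikit-learn InferenceService via the KServe Python SDK.
# Assumes `pip install kserve` and a kubeconfig pointing at a cluster with KServe.
from kubernetes import client
from kserve import (
    KServeClient,
    constants,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

# Declare a predictor backed by a model in object storage. The storage URI
# below is the public example model used in the KServe docs.
isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,  # "InferenceService"; name may differ by SDK version
    metadata=client.V1ObjectMeta(name="sklearn-iris", namespace="default"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://kfserving-examples/models/sklearn/1.0/model"
            )
        )
    ),
)

# Create the resource and block until the predictor reports Ready.
kserve_client = KServeClient()
kserve_client.create(isvc)
kserve_client.wait_isvc_ready("sklearn-iris", namespace="default")
```

An equivalent YAML manifest applied with kubectl achieves the same result; the SDK route is convenient from notebooks and pipelines.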
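To illustrate the OpenAI-compatible protocol mentioned under Generative AI, here is a hedged sketch of querying a KServe-hosted LLM with the standard `openai` client. The endpoint URL and model name are hypothetical placeholders for whatever your InferenceService actually exposes.

```python
# Sketch: call an LLM served by KServe through its OpenAI-compatible protocol.
# Assumes an LLM runtime (e.g. the Hugging Face/vLLM backend) exposing
# OpenAI-style routes on the InferenceService URL; the host is hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://llama-example.default.example.com/openai/v1",  # hypothetical URL
    api_key="not-needed",  # KServe does not require an OpenAI API key by default
)

response = client.chat.completions.create(
    model="llama-example",  # must match the served model's name
    messages=[{"role": "user", "content": "Summarize what KServe does."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the protocol is OpenAI-compatible, existing tooling built against the OpenAI API can be pointed at a KServe endpoint by swapping the base URL.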