OWASP / AISVS
The AI Security Verification Standard (AISVS) provides developers, architects, and security professionals with a structured, testable checklist for verifying the security of AI-driven applications.
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing OWASP/AISVS in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
# OWASP Artificial Intelligence Security Verification Standard (AISVS)

[![CC BY-SA 4.0][cc-by-sa-shield]][cc-by-sa]

This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][cc-by-sa].

[![CC BY-SA 4.0][cc-by-sa-image]][cc-by-sa]

[cc-by-sa]: http://creativecommons.org/licenses/by-sa/4.0/
[cc-by-sa-image]: https://licensebuttons.net/l/by-sa/4.0/88x31.png
[cc-by-sa-shield]: https://img.shields.io/badge/License-CC%20BY--SA%204.0-blue.svg

## What is AISVS?

The **Artificial Intelligence Security Verification Standard (AISVS)** is a community-driven catalogue of testable security requirements for AI-enabled systems. It gives developers, architects, security engineers, and auditors a structured framework to design, build, test, and verify the security of AI applications throughout their lifecycle, from data collection and model training to deployment, monitoring, and retirement.

AISVS is modeled after the OWASP Application Security Verification Standard (ASVS) and follows the same philosophy: every requirement should be **verifiable, testable, and implementable**.

## What AISVS is NOT

- **Not a governance framework.** Governance is well covered by NIST AI RMF, ISO/IEC 42001, and EU AI Act compliance guides.
- **Not a risk management framework.** AISVS provides the technical controls that risk frameworks point to, but does not define a risk assessment methodology.
- **Not a tool recommendation list.** AISVS is vendor-neutral and does not endorse specific products or frameworks.
## How AISVS complements other standards

| Standard | Focus | AISVS relationship |
|---|---|---|
| OWASP ASVS | Web application security | AISVS extends ASVS concepts to AI-specific threats |
| OWASP Top 10 for LLMs | Awareness of top LLM risks | AISVS provides the detailed controls to mitigate those risks |
| NIST AI RMF | AI risk governance | AISVS supplies the testable technical controls that AI RMF references |
| ISO/IEC 42001 | AI management systems | AISVS complements with implementation-level security verification |

## Verification Levels

Each AISVS requirement is assigned a verification level (1, 2, or 3) indicating the depth of security assurance:

| Level | Description | When to use |
|:---:|---|---|
| **1** | Essential baseline controls that every AI system should implement. | All AI applications, including internal tools and low-risk systems. |
| **2** | Standard controls for systems handling sensitive data or making consequential decisions. | Production systems, customer-facing AI, systems processing personal data. |
| **3** | Advanced controls for high-assurance environments requiring defense against sophisticated attacks. | Critical infrastructure, safety-critical AI, high-value targets, regulated industries. |

Organizations should select a target level based on the risk profile of their AI system. Most production systems should aim for at least Level 2.

## How to use AISVS

- **During design.** Use requirements as a security checklist when architecting AI systems.
- **During development.** Integrate requirements into CI/CD pipelines, code reviews, and testing.
- **During security assessments.** Use them as a verification framework for penetration testing and audits.
- **For procurement.** Reference specific requirements when evaluating AI vendors and third-party models.
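The level-based selection described above is cumulative: a system targeting Level N must satisfy every requirement assigned a level at or below N. A minimal sketch of that filtering logic, usable as the core of a CI/CD gate, is shown below. Note that the requirement IDs and texts here are purely illustrative placeholders, not actual AISVS entries:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    req_id: str       # illustrative identifier, not a real AISVS ID
    chapter: str
    description: str
    level: int        # verification level 1, 2, or 3

def applicable(requirements, target_level):
    """Return the requirements a system must satisfy at a target level.

    Levels are cumulative: verifying at Level N means meeting every
    requirement whose assigned level is <= N.
    """
    return [r for r in requirements if r.level <= target_level]

# Hypothetical catalogue entries for illustration only.
catalogue = [
    Requirement("UIV-01", "User Input Validation",
                "Validate and bound user-supplied prompts", 1),
    Requirement("MEM-02", "Memory, Embeddings & Vector Database Security",
                "Enforce per-tenant isolation of stored embeddings", 2),
    Requirement("ADV-03", "Adversarial Robustness & Attack Resistance",
                "Test the model against adaptive evasion attacks", 3),
]

# A production system aiming for Level 2 picks up Levels 1 and 2.
for req in applicable(catalogue, target_level=2):
    print(req.req_id, "-", req.chapter)
```

In a real pipeline, each requirement would additionally carry a pass/fail status from automated tests or manual review, and the gate would fail the build if any applicable requirement is unmet.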
## Requirement Chapters

- Training Data Integrity & Traceability
- User Input Validation
- Model Lifecycle Management & Change Control
- Infrastructure, Configuration & Deployment Security
- Access Control & Identity
- Supply Chain Security for Models, Frameworks & Data
- Model Behavior, Output Control & Safety Assurance
- Memory, Embeddings & Vector Database Security
- Autonomous Orchestration & Agentic Action Security
- Model Context Protocol (MCP) Security
- Adversarial Robustness & Attack Resistance
- Privacy Protection & Personal Data Management
- Monitoring, Logging & Anomaly Detection
- Human Oversight and Trust

## Appendices

- Appendix A: Glossary
- Appendix B: References
- Appendix C: AI-Assisted Secure Coding
- Appendix D: AI Security Controls Inventory

## Contributing

We welcome contributions from the community. Please open an issue to report bugs or suggest improvements. We may ask you to submit a pull request based on the discussion.

## Project Leaders

This project was founded by Jim Manico. Current project leadership includes Jim Manico, Otto Sulin, and Russ Memisyazici.

## License

The entire project content is licensed under the **Creative Commons Attribution-ShareAlike 4.0** license.