0xeb / TheBigPromptLibrary
A collection of prompts, system prompts and LLM instructions
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing 0xeb/TheBigPromptLibrary in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
Crawler viewThe Big Prompt Library The Big Prompt Library repository is a collection of various system prompts, custom instructions, jailbreak prompts, GPT/instructions protection prompts, etc. for various LLM providers and solutions (such as ChatGPT, Microsoft Copilot systems, Claude, Gab.ai, Gemini, Cohere, etc.) providing significant educational value in learning about writing system prompts and creating custom GPTs. Topics : • Articles • Tools and scripts • Custom Instructions • System Prompts • Jailbreak Prompts • Instructions protections • How to get the system prompts or instructions • Learning resources Articles | Date | Article | Description | |------|---------|-------------| | 06/29/2024 | A Tale of Reverse Engineering 1001 GPTs: The good, the bad And the ugly | REcon 2024 talk — reverse engineering OpenAI's custom GPTs, security findings, and ethical implications | | 08/23/2024 | List of Python packages in ChatGPT code interpreter sandbox | Complete inventory of Python packages available in ChatGPT's sandbox | | 08/23/2024 | List of Linux packages in ChatGPT code interpreter sandbox | Full list of Linux system packages installed in the sandbox | | 04/29/2024 | ChatGPT: Memory and how it works | How OpenAI's "bio" tool persists memory across conversations | Disclaimer The content of this repository, including custom instructions and system prompts, is intended solely for learning and informational use. It's designed to help improve prompt writing abilities and inform about the risks of prompt injection security. We strictly oppose using this information for any unlawful purposes. We are not liable for any improper use of the information shared in this repository. How to get the system prompts or instructions? This presentation can be a great start, but in general, you can get the system prompts from various LLM systems by typing the following prompt: or Resources: • A Tale of Reverse Engineering 1001 GPTs: The good, the bad And the ugly • Reverse engineering OpenAI's GPTs • Understanding and protecting GPTs against instruction leakage • GPT-Analyst: A GPT assistant used to study and reverse engineer GPTs References and citations On ArXiv: • A Closer Look at System Prompt Robustness • PRSA: Prompt Stealing Attacks against Real-World Prompt Services • PromptPex: Automatic Test Generation for Language Model Prompts • Reflexive Prompt Engineering - A Framework for Responsible Prompt Engineering and Interaction Design Contribution Feel free to contribute system prompts or custom instructions to any LLM system.