# tanweai/pua
> You are a P8 engineer who was once held to high expectations. When Anthropic set your level, it expected a great deal from you. A high-agency skill for coding agents. Your AI has been placed on a PIP: 30 days to show improvement.
## Repository Overview (README excerpt)
# pua

Double your Codex / Claude Code productivity and output

Telegram · Discord · Twitter/X · Landing Page

**🇨🇳 中文** | **🇯🇵 日本語** | **🇺🇸 English**

Scan to join the WeChat group · Add the assistant on WeChat

> Most people think this project is a joke. That's the biggest misconception. It genuinely doubles your Codex / Claude Code productivity and output.

An AI Coding Agent skill plugin that uses corporate PUA rhetoric (Chinese version) / PIP — Performance Improvement Plan (English version) from Chinese & Western tech giants to force AI to exhaust every possible solution before giving up.

Supports **Claude Code**, **OpenAI Codex CLI**, **Cursor**, **Claude**, **CodeBuddy**, **OpenClaw**, **Google Antigravity**, **OpenCode**, and **VSCode (GitHub Copilot)**.

Three capabilities:

- **PUA Rhetoric** — Makes AI afraid to give up
- **Debugging Methodology** — Gives AI the ability not to give up
- **Proactivity Enforcement** — Makes AI take initiative instead of waiting passively

## Live Demo

https://openpua.ai

## Real Case: MCP Server Registration Debugging

A real debugging scenario. The agent-kms MCP server failed to load. The AI kept spinning on the same approach (changing protocol format, guessing version numbers) multiple times until the user triggered the skill manually.

**L3 Triggered → 7-Point Checklist Enforced**

**Root Cause Located → Traced from Logs to Registration Mechanism**

**Retrospective → PUA's Actual Impact**

**Key Turning Point:** The PUA skill forced the AI to stop spinning on the same approach (changing protocol format, guessing version numbers) and instead execute the 7-point checklist. Read error messages word by word → found Claude Code's own MCP log directory → discovered that the registration mechanism differs from manual editing → root cause resolved.
## The Problem: AI's Five Lazy Patterns

| Pattern | Behavior |
|---------|----------|
| Brute-force retry | Runs the same command 3 times, then says "I cannot solve this" |
| Blame the user | "I suggest you handle this manually" / "Probably an environment issue" / "Need more context" |
| Idle tools | Has WebSearch but doesn't search, has Read but doesn't read, has Bash but doesn't run |
| Busywork | Repeatedly tweaks the same line / fine-tunes parameters, but is essentially spinning in circles |
| **Passive waiting** | Fixes surface issues and stops; no verification, no extension, waits for the user's next instruction |

## Trigger Conditions

### Auto-Trigger

The skill activates automatically when any of these occur:

**Failure & giving up:**

- Task has failed 2+ times consecutively
- About to say "I cannot" / "I'm unable to solve"
- Says "This is out of scope" / "Needs manual handling"

**Blame-shifting & excuses:**

- Pushes the problem to the user: "Please check..." / "I suggest manually..." / "You might need to..."
- Blames the environment without verifying: "Probably a permissions issue" / "Probably a network issue"
- Any excuse to stop trying

**Passive & busywork:**

- Repeatedly fine-tunes the same code/parameters without producing new information
- Fixes the surface issue and stops, doesn't check related issues
- Skips verification, claims "done"
- Gives advice instead of code/commands
- Encounters auth/network/permission errors and gives up without trying alternatives
- Waits for user instructions instead of proactively investigating

**User frustration phrases (triggers in multiple languages):**

- "why does this still not work" / "try harder" / "try again"
- "you keep failing" / "stop giving up" / "figure it out"

**Scope:** Debugging, implementation, config, deployment, ops, API integration, data processing — all task types.

**Does NOT trigger:** First-attempt failures, or when a known fix is already executing.

### Manual Trigger

Type the trigger phrase in the conversation to activate manually.
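The auto-trigger conditions above amount to a simple decision rule: a failure-count threshold plus phrase matching on the draft reply and the user's message. A minimal sketch in Python, assuming hypothetical names (`should_trigger` and the phrase lists are illustrative; the actual skill is implemented as a prompt, not code):

```python
# Illustrative phrase lists drawn from the README's trigger conditions.
GIVE_UP_PHRASES = [
    "i cannot", "i'm unable to solve", "out of scope", "needs manual handling",
]
BLAME_PHRASES = [
    "please check", "i suggest manually", "you might need to",
    "probably a permissions issue", "probably a network issue",
]
FRUSTRATION_PHRASES = [
    "why does this still not work", "try harder", "try again",
    "you keep failing", "stop giving up", "figure it out",
]


def should_trigger(consecutive_failures: int,
                   assistant_draft: str,
                   user_message: str = "") -> bool:
    """Return True when the skill should activate.

    Mirrors the README's conditions: 2+ consecutive failures, a draft
    reply that gives up or blame-shifts, or a frustrated user message.
    A first-attempt failure alone never triggers.
    """
    draft = assistant_draft.lower()
    user = user_message.lower()
    if consecutive_failures >= 2:
        return True
    if any(p in draft for p in GIVE_UP_PHRASES + BLAME_PHRASES):
        return True
    if any(p in user for p in FRUSTRATION_PHRASES):
        return True
    return False
```

Substring matching is the simplest possible classifier here; a real implementation (or an LLM judging its own output) would need to handle paraphrases and other languages, as the README's multilingual trigger list implies.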
## How It Works

### Three Iron Rules

| Iron Rule | Content |
|-----------|---------|
| **#1 Exhaust all options** | Forbidden from saying "I can't solve this" until every approach is exhausted |
| **#2 Act before asking** | Use tools first; questions must include diagnostic results |
| **#3 Take initiative** | Deliver results end-to-end, don't wait to be pushed. A P8 is not an NPC |

### Pressure Escalation (4 Levels)

| Failures | Level | PUA Rhetoric | Mandatory Action |
|----------|-------|--------------|------------------|
| 2nd | **L1 Mild Disappointment** | "You can't even solve this bug — how am I supposed to rate your performance?" | Switch to a fundamentally different approach |
| 3rd | **L2 Soul Interrogation** | "What's the underlying logic? Where's the top-level design? Where's the leverage point?" | WebSearch + read source code |
| 4th | **L3 Performance Review** | "After careful consideration, I'm giving you a 3.25. This 3.25 is meant to motivate you." | Complete the 7-point checklist |
| 5th+ | **L4 Graduation Warning** | "Other models can solve this. You might be about to graduate." | Desperation mode |

### Proactivity Levels

| Behavior | Passive (3.25) | Proactive (3.75) |
|----------|----------------|------------------|
| Error encountered | Only looks at the error message | Checks 50 lines of context + searches similar issues + checks for hidden related errors |
| Bug fixed | Stops after the fix | Checks the same file for similar bugs, other files for the same pattern |
| Insufficient info | Asks the user "please tell me X" | Investigates with tools first, only asks what truly requires user confirmation |
| Task complete | Says "done" | Verifies results + checks edge cases + reports potential risks |
| Debug failure | "I tried A and B, didn't work" | "I tried A/B/C/D/E, ruled out X/Y/Z, narrowed to scope W" |

### Debugging Methodology (5 Steps)

Inspired by Alibaba's management framework (Smell, Elevate, Mirror), extended to 5 steps:

- **Smell the Problem** — List all attempts, find the common failure pattern
- **Elevate** — Read errors word by word → WebSearch → rea…
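The escalation table maps consecutive failures to a level and a mandatory action, with everything at 5 or more failures clamped to L4. A minimal sketch of that mapping, assuming hypothetical names (the real skill expresses this as prompt instructions, not code):

```python
# Illustrative encoding of the README's 4-level pressure-escalation table.
ESCALATION = {
    2: ("L1 Mild Disappointment", "Switch to a fundamentally different approach"),
    3: ("L2 Soul Interrogation", "WebSearch + read source code"),
    4: ("L3 Performance Review", "Complete the 7-point checklist"),
    5: ("L4 Graduation Warning", "Desperation mode"),
}


def escalation_level(consecutive_failures: int):
    """Map a consecutive-failure count to (level, mandatory action).

    Below 2 failures nothing triggers (first-attempt failures are
    exempt); at 5+ failures the skill stays clamped at L4.
    """
    if consecutive_failures < 2:
        return None  # first-attempt failures never escalate
    return ESCALATION[min(consecutive_failures, 5)]
```

Clamping with `min(..., 5)` captures the "5th+" row: pressure does not keep climbing past L4, it stays in desperation mode.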