ack00gar / FunGen-AI-Powered-Funscript-Generator
FunGen can be augmented with a Device Controller and a Streamer to connect to XBVR, Stash, or local files. See the Discord and Ko-fi links for more :)
Repository Overview (README excerpt)
# FunGen

FunGen is a Python-based tool that uses AI to generate Funscript files from VR and 2D POV videos. It enables fully automated funscript creation for individual scenes or entire folders of videos.

Join the **Discord community** for discussions and support: Discord Community

---

## DISCLAIMER

This project is still at an early stage of development. It is not intended for commercial use. Please do not use this project for any commercial purposes without prior consent from the author. It is for individual use only.

---

## v0.7.5 Highlights

• **VR Hybrid Chapter-Aware Tracker** — New offline tracker combining sparse YOLO chapter detection with per-chapter ROI optical flow. Single-pass video decode, hardware-accelerated encoding via FFmpegEncoder, and automatic reuse of preprocessed video on re-run
• **Preprocessed Video Infrastructure** — The VR Hybrid tracker uses the same FFmpegEncoder (hardware-acceleration priority chain) as standard Stage 1. Standard preprocessed path ( ), automatic reuse, and cleanup via the setting
• **Batch Mode: Save Preprocessed Video Option** — New opt-in setting (default off) in both the GUI batch dialog and the CLI ( ). Prevents accidental disk bloat across batch runs while allowing users who want faster re-runs to keep the files
• **Finer VR Hybrid Progress Reporting** — Per-frame progress during chapter analysis (Pass 2), CLI stdout progress bar for headless/batch runs

## v0.6.0 Highlights

• **GUI Modernization** — Sidebar navigation, section cards, workflow breadcrumb, pinned action bar, status strip
• **Multi-Axis Funscript Support** — OFS-compatible axis system (stroke, roll, pitch, surge, sway, twist); per-timeline axis assignment, configurable default secondary axis (e.g. twist for SSR2), / / export
• **O(1) Funscript Performance** — Chronological-append fast path eliminates O(n) bisect during live tracking; undo/redo cache fix, vectorization
• **14+ Built-in Filter Plugins** — Ultimate Autotune, RDP Simplify, Savitzky-Golay, Speed Limiter, Anti-Jerk, Amplify, Dynamic Amplify, Clamp, Invert, Keyframe, Resample, Time Shift, and more
• **Patreon Supporter Features** — Batch processing, live capture, experimental tracker early access (distributed via Discord bot)
• **Device Control & VR Streaming Add-ons** — OSR/Buttplug hardware control, HereSphere/Quest 3 streaming (available on Ko-fi)
• **Automatic DPI Scaling** — System display scaling detection for Windows, macOS, and Linux

---

## Quick Installation (Recommended)

**Automatic installer that handles everything for you:**

### Windows

• Download: install.bat
• Double-click to run it (or run it from a command prompt)
• Wait for automatic installation of Python, Git, FFmpeg, and FunGen

### Linux/macOS

The installer automatically:
• Installs Python 3.11 (Miniconda)
• Installs Git and FFmpeg/FFprobe
• Downloads and sets up FunGen AI
• Installs all required dependencies
• Creates launcher scripts for easy startup
• Detects your GPU and optimizes the PyTorch installation

**That's it!** The installer creates launch scripts; just run them to start FunGen.

---

## Manual Installation

If you prefer manual installation or need custom configuration:

### Prerequisites

Before using this project, ensure you have the following installed:

• **Git** (https://git-scm.com/downloads/), or run `winget install --id Git.Git -e --source winget` from a command prompt for Windows users, as described below for the easy install of Miniconda
• **FFmpeg** — added to your PATH or specified under the settings menu (https://www.ffmpeg.org/download.html)
• **Miniconda** (https://www.anaconda.com/docs/getting-started/miniconda/install)

Easy install of Miniconda for Windows users: open Command Prompt and run:

### Start a Miniconda command prompt

After installing Miniconda, look for a program called "Anaconda Prompt (miniconda3)" in the Start menu (on Windows) and open it.

### Create the necessary Miniconda environment and activate it

• Please note that any pip or python commands related to this project must be run from within the VRFunAIGen virtual environment.

### Clone the repository

Open a command prompt and navigate to the folder where you'd like FunGen to be located. For example, if you want it in C:\FunGen, navigate to C:\ (`cd C:\`). Then run:

### Install the core Python requirements

### NVIDIA GPU Setup (CUDA Required)

**Quick Setup:**
• **Install NVIDIA Drivers**: Download here
• **Install CUDA 12.8**: Download here
• **Install cuDNN for CUDA 12.8**: Download here (requires a free NVIDIA account)

**Install Python Packages:**

**For 20xx, 30xx, and 40xx-series NVIDIA GPUs:**

**For 50xx-series NVIDIA GPUs (RTX 5070, 5080, 5090):**

**Note:** NVIDIA 10xx-series GPUs are not supported.

**Verify Installation:**

### AMD GPU Acceleration (ROCm, Linux Only)

If your GPU doesn't support CUDA: ROCm is supported for AMD GPUs on Linux. To install the required packages, run:

### Download the YOLO models

The necessary YOLO models will be automatically downloaded on the first startup. If you want to use a specific model, you can download it from our Discord and place it in the sub-directory. If you aren't sure, you can add all the models and let the app decide the best option for you.

### Start the app

We support multiple model formats across Windows, macOS, and Linux.
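The actual command for the "Verify Installation" step above was lost when this excerpt was extracted. A minimal sketch of such a check, assuming PyTorch is installed in the active VRFunAIGen environment (the real README's command may differ):

```shell
# Activate the project environment first (name taken from this README):
#   conda activate VRFunAIGen
# Then check whether PyTorch can see a CUDA device; fall back to a hint if
# PyTorch itself is not importable in the current environment.
python -c "import torch; print('CUDA available:', torch.cuda.is_available())" 2>/dev/null \
  || echo "PyTorch not importable - check your environment"
```

`torch.cuda.is_available()` returns `False` (rather than raising) when no CUDA runtime is present, so this prints a result on any machine where PyTorch is installed.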
### Recommendations

• NVIDIA cards: we recommend the .engine model
• AMD cards: we recommend .pt (requires ROCm; see the AMD section above)
• Mac: we recommend .mlmodel

### Models

• **.pt (PyTorch)**: Requires CUDA (for NVIDIA GPUs) or ROCm (for AMD GPUs) for acceleration.
• **.onnx (ONNX Runtime)**: Best for CPU users, as it offers broad compatibility and efficiency.
• **.engine (TensorRT)**: For NVIDIA GPUs; provides very significant efficiency improvements (this file needs to be built by running "Generate TensorRT.bat" after adding the base ".pt" model to the models directory).
• **.mlpackage (Core ML)**: Optimized for macOS users. Runs efficiently on Apple devices with Core ML.

In most cases, the app will automatically detect the best model from your models directory at lau…
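The automatic model detection described above can be illustrated with a small sketch. Note that the extension priority used here (.engine, then .pt, .onnx, .mlpackage) is an assumption inferred from the recommendations in this excerpt, and `pick_model` is a hypothetical helper, not FunGen's actual selection logic:

```shell
# Hypothetical sketch: print the first model file found in a directory,
# scanning extensions in an assumed best-first order.
pick_model() {
  dir="$1"
  for ext in engine pt onnx mlpackage; do
    for f in "$dir"/*."$ext"; do
      # An unmatched glob stays literal, so test that the file really exists.
      if [ -e "$f" ]; then
        echo "$f"
        return 0
      fi
    done
  done
  return 1  # no model found
}
```

For example, `pick_model ./models` would print a `.engine` file if one exists, otherwise fall back to `.pt`, and so on down the list.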