FireRedTeam / FireRed-OpenStoryline

FireRed-OpenStoryline is an AI video editing agent that transforms manual editing into intention-driven directing through natural language interaction, LLM-powered planning, and precise tool orchestration. It facilitates transparent, human-in-the-loop creation with reusable Style Skills for consistent, professional storytelling.

View on GitHub
1,204 stars
116 forks
8 issues
Python · JavaScript · HTML

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing FireRedTeam/FireRed-OpenStoryline in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/FireRedTeam/FireRed-OpenStoryline)

Repository Overview (README excerpt)

🇨🇳 简体中文 | 🌏 English

🤗 HuggingFace Demo • 🌐 Homepage

**FireRed-OpenStoryline** turns complex video creation into natural, intuitive conversations. Designed with both accessibility and enterprise-grade reliability in mind, FireRed-OpenStoryline makes video creation easy and friendly for beginners and creative enthusiasts alike.

> Deriving from the saying "A single spark can start a prairie fire", the name FireRed represents our vision: to spread our SOTA capabilities—honed in real-world scenarios—like sparks across the wilderness, igniting the imagination of developers worldwide to reshape the future of AI together.

✨ Key Features

• 🌐 **Smart Media Search & Organization**: Automatically searches online and downloads images and video clips that match your requirements. Performs clip segmentation and content understanding based on your thematic media.
• ✍️ **Intelligent Script Generation**: Combines user themes, visual understanding, and emotion recognition to automatically construct storylines and context-aware narration. Features built-in few-shot style transfer, allowing users to define specific copy styles (e.g., product reviews, casual vlogs) via reference text, achieving precise replication of tone, rhythm, and sentence structure.
• 🎵 **Intelligent Music, Voiceover & Font Recommendations**: Supports personal playlist imports and auto-recommends BGM based on content and mood, featuring smart beat-syncing. Simply describe the desired tone—e.g., "Restrained," "Emotional," or "Documentary-style"—and the system matches suitable voiceovers and fonts to ensure a cohesive aesthetic.
• 💬 **Conversational Refinement**: Rapidly cut, swap, or resequence clips. Edit scripts and fine-tune visual details, including color, font, stroke, and position. All edits are performed exclusively via natural language prompts with immediate results.
• ⚡ **Editing Skill Archiving**: Save your complete editing workflow as a custom Skill. Simply swap the media and apply the corresponding Skill to instantly replicate the style, enabling efficient batch creation.

NEWS

• 🚀 **2026-03-22**: Introduced an **ASR-based rough cut skill for speech videos**, enabling automatic removal of filler words, disfluencies, and repeated sentences, with timestamp-aligned segmentation for cleaner and more efficient speech editing workflows.
• 🔥 **2026-03-12**: Integrated with **OpenClaw**, adding two OpenClaw Skills — and — covering the initial installation/first-run workflow and the actual usage workflow, respectively. Also added Skill usage instructions for **Claude Code**, making it easier for **Claude Code** to install and invoke the project in accordance with the repository guidelines.
• **2026-02-10**: FireRed-OpenStoryline was officially open-sourced.

🏗️ Architecture

✨ Demo

Zhongcao Style • Humorous Style • Product Picks • Artistic Style • Unboxing • Talking Pet • Travel Vlog • Year-in-Review

> 🎨 Effects Note: Due to licensing restrictions on open-source assets, the elements (fonts/music) in the first row represent only basic effects. We highly recommend following the Custom Asset Library Tutorial to unlock commercial-grade fonts, music, and VFX for significantly better video quality.
> ⚠️ Quality Note: To save space in the README, the demo videos are heavily compressed. The actual output retains the original resolution by default and supports custom dimensions.
> In the Demo: The first row shows default open-source assets (Restricted Mode); the second row shows Xiaohongshu App "AI Clip" asset library effects. 👉 Click to view tutorial
> ⚖️ Disclaimer: User footage and brand logos shown in the demos are for technical demonstration purposes only. Ownership belongs to the original creators. Please contact us for copyright concerns.

🤖 Use via OpenClaw / Claude Code

FireRed-OpenStoryline supports usage through Agent Skills.

OpenClaw

We provide two OpenClaw Skills:

• : for installation, configuration, and first-run verification.
• : for starting the service and running the actual video editing workflow.

After installation, you only need to send your media source paths to OpenClaw, and it can help you complete the entire process, from installing FireRed-OpenStoryline to generating the final video.

Claude Code

This repository includes built-in Claude Code Skills. If you launch Claude Code from **the repository root**, you can directly use the project-level Skills included in this repo, and Claude Code can help you install FireRed-OpenStoryline. If you want to install the Skill into your own global Claude Code configuration, run:

📦 Install

• Clone the repository.
• Create a virtual environment. Install Conda according to the official guide (Miniforge is recommended; it is suggested to check the option to automatically configure environment variables during installation): https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html
• 📦 Resource Download & Installation

3.1 Automatic Installation (Linux and macOS only)

3.2 Manual Installation

A. macOS or Linux

• Step 1: Install wget (if not already installed)
• Step 2: Download Resources
• Step 3: Install Dependencies

B. Windows

• Step 1: Prepare Directory: Create a new directory named in the project root directory.
• Step 2: Download and Extract:
  • Download Models (models.zip) -> Extract to the directory.
  • Download Resources (resource.zip) -> Extract to the directory.
• Step 3: **Install Dependencies**

🚀 Quick Start

Note: Before starting, you need to configure the API-Key in config.toml. For details, please refer to the documentation: API-Key Configuration.

• Start the MCP Server (macOS or Linux / Windows)
• Start the conversation interface
  • Method 1: Command Line Interface
  • Method 2: Web Interface

🐳 Docker

Pull the image, then start the container. After starting, access the Web interface at http://0.0.0.0:7860

📁 Project Structure

📚 Documentation

📖 Tutorial Index

• API Key Configuration - How to configure and manage API keys…