Huanshere / VideoLingo
Netflix-level subtitle cutting, translation, alignment, and even dubbing - a one-click, fully automated AI video subtitling team
AI Architecture Analysis
This repository is indexed by RepoMind. By analyzing Huanshere/VideoLingo in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.
Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.
Repository Overview (README excerpt)
Connect the World, Frame by Frame

**English** | **简体中文** | **繁體中文** | **日本語** | **Español** | **Русский** | **Français**

🌟 Overview (Try VL Now!)

VideoLingo is an all-in-one video translation, localization, and dubbing tool aimed at generating Netflix-quality subtitles. It eliminates stiff machine translations and multi-line subtitles while adding high-quality dubbing, enabling global knowledge sharing across language barriers.

Key features:

• 🎥 YouTube video download via yt-dlp
• 🎙️ Word-level, low-hallucination subtitle recognition with WhisperX
• 📝 NLP- and AI-powered subtitle segmentation
• 📚 Custom + AI-generated terminology for coherent translation
• 🔄 Three-step Translate-Reflect-Adapt process for cinematic quality
• ✅ Netflix-standard, single-line subtitles only
• 🗣️ Dubbing with GPT-SoVITS, Azure, OpenAI, and more
• 🚀 One-click startup and processing in Streamlit
• 🌍 Multi-language support in the Streamlit UI
• 📝 Detailed logging with progress resumption

Difference from similar projects: **single-line subtitles only, superior translation quality, seamless dubbing experience**

🎥 Demo

**Dual Subtitles**
https://github.com/user-attachments/assets/a5c3d8d1-2b29-4ba9-b0d0-25896829d951

**Cosy2 Voice Clone**
https://github.com/user-attachments/assets/e065fe4c-3694-477f-b4d6-316917df7c0a

**GPT-SoVITS with my voice**
https://github.com/user-attachments/assets/47d965b2-b4ab-4a0b-9d08-b49a7bf3508c

Language Support

**Input language support (more to come):**

🇺🇸 English 🤩 | 🇷🇺 Russian 😊 | 🇫🇷 French 🤩 | 🇩🇪 German 🤩 | 🇮🇹 Italian 🤩 | 🇪🇸 Spanish 🤩 | 🇯🇵 Japanese 😐 | 🇨🇳 Chinese* 😊

> *Chinese uses a separate punctuation-enhanced Whisper model, for now...

**Translation supports all languages, while the dubbing language depends on the chosen TTS method.**

Installation

Having a problem? Chat with our free online AI agent **here** for help.

> **Note:** Windows users with an NVIDIA GPU should complete these steps before installation:
> 1. Install CUDA Toolkit 12.6
> 2. Install CUDNN 9.3.0
> 3. Add both to your system PATH
> 4. Restart your computer

> **Note:** FFmpeg is required. Please install it via a package manager:
> - Windows: via Chocolatey
> - macOS: via Homebrew
> - Linux: via apt (Debian/Ubuntu)

Then:

• Clone the repository
• Install dependencies
• Start the application

Docker

Alternatively, you can use Docker (requires CUDA 12.4 and an NVIDIA driver version > 550); see the Docker docs.

APIs

VideoLingo supports the OpenAI-compatible API format and various TTS interfaces:

• LLM: any OpenAI-compatible provider (be cautious with gemini-2.5-flash...)
• WhisperX: run whisperX (large-v3) locally or use the 302.ai API
• TTS: several engines, including GPT-SoVITS, Azure, and OpenAI (you can plug in your own TTS in custom_tts.py!)

> **Note:** VideoLingo works with **302.ai** - one API key for all services (LLM, WhisperX, TTS). Or run locally with Ollama and Edge-TTS for free, no API key needed!

For detailed installation, API configuration, and batch-mode instructions, please refer to the documentation: English | 中文

Current Limitations

• WhisperX transcription quality may be affected by video background noise, as it uses a wav2vec model for alignment. For videos with loud background music, please enable Voice Separation Enhancement. Additionally, subtitles ending with numbers or special characters may be truncated early, because wav2vec cannot map numeric characters (e.g., "1") to their spoken form ("one").
• Weaker models can cause errors during processing due to the strict JSON format required of their responses (I've tried my best to prompt the LLM 😊). If this error occurs, please delete the folder and retry with a different LLM; otherwise, repeated runs will read the previous erroneous response and hit the same error.
• The dubbing feature may not be 100% perfect due to differences in speech rates and intonation between languages, as well as the impact of the translation step.
However, this project implements extensive speech-rate engineering to ensure the best possible dubbing results.
• **Multilingual video transcription will only retain the main language.** When forcibly aligning word-level subtitles, whisperX uses a model specialized for a single language and discards unrecognized languages.
• **For now, multiple speakers cannot be dubbed separately**, as whisperX's speaker-distinction capability is not sufficiently reliable.

📄 License

This project is licensed under the Apache 2.0 License. Special thanks to the following open-source projects for their contributions: whisperX, yt-dlp, json_repair, BELLE

📬 Contact Me

• Submit Issues or Pull Requests on GitHub
• DM me on Twitter: @Huanshere
• Email me at: team@videolingo.io

⭐ Star History

If you find VideoLingo helpful, please give me a ⭐️!
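The APIs section above mentions that you can plug your own TTS engine into custom_tts.py. As a hedged sketch only (the function name and signature here are assumptions for illustration, not VideoLingo's actual interface), such a hook essentially just needs to turn a text string into an audio file at a given path. This stub writes silence with the stdlib `wave` module where a real engine call would go:

```python
import wave


def custom_tts(text: str, save_path: str, sample_rate: int = 16000) -> str:
    """Hypothetical custom TTS hook: synthesize `text` into a WAV at save_path.

    A real implementation would call your TTS engine here; this placeholder
    writes ~0.3 s of 16-bit mono silence per word so the shape of the
    pipeline step can be exercised without any external service.
    """
    n_frames = int(sample_rate * 0.3) * max(1, len(text.split()))
    with wave.open(save_path, "wb") as wf:
        wf.setnchannels(1)            # mono
        wf.setsampwidth(2)            # 16-bit PCM samples
        wf.setframerate(sample_rate)  # frames per second
        wf.writeframes(b"\x00\x00" * n_frames)
    return save_path
```

The key design point is that the dubbing pipeline only depends on the output file, so any engine (local model, cloud API) can sit behind the same text-in, audio-file-out contract.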
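The Current Limitations section above warns that a malformed cached LLM response keeps triggering the same error until the folder is deleted, because reruns re-read the bad file. A minimal sketch of a guard against that failure mode (the helper name and cache layout are assumptions, not VideoLingo's actual code): validate a cached response before trusting it, and delete an unusable cache so the next attempt re-queries the model instead of re-reading the broken file:

```python
import json
import os


def load_cached_response(cache_path: str, required_keys: set):
    """Return a cached LLM response as a dict, or None if it is missing or bad.

    A cache file that is not valid JSON, is not a dict, or lacks the required
    keys is deleted, so a retry falls through to a fresh model call rather
    than repeatedly reloading the same erroneous response.
    """
    if not os.path.exists(cache_path):
        return None
    try:
        with open(cache_path, encoding="utf-8") as f:
            data = json.load(f)
        if isinstance(data, dict) and required_keys <= data.keys():
            return data
    except json.JSONDecodeError:
        pass
    os.remove(cache_path)  # discard the unusable response to unblock retries
    return None
```

This is why deleting the folder by hand fixes the error in practice: it removes the cached bad response that every rerun would otherwise pick up again.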