ptnghia-j / ChordMiniApp
Music analysis application: chord recognition, beat tracking, guitar diagrams, piano visualizer, lyrics transcription, and context-aware LLM inference for analysis of uploaded audio and YouTube videos.
## Repository Overview (README excerpt)
# ChordMini

Open-source music analysis tool for chord recognition, beat tracking, piano visualization, guitar diagrams, and lyrics synchronization.

## Features Overview

### 🏠 Homepage Interface
Clean, intuitive interface for YouTube search, URL input, and recent video access.

### 🎵 Beat & Chord Analysis
Chord progression visualization with synchronized beat detection and a grid layout, with add-on features: Roman Numeral Analysis, Key Modulation Signals, Simplified Chord Notation, Enhanced Chord Correction, and **song segmentation overlays** for structural sections like intro, verse, chorus, bridge, and outro.

### 🎵 Guitar Diagrams
Interactive guitar chord diagrams with **accurate fingering patterns** from the official @tombatossals/chords-db database, featuring multiple chord positions, synchronized beat-grid integration, and exact slash-chord matching when the database includes a dedicated inversion shape.

### 🎹 Piano Visualizer
Real-time piano-roll visualization with falling MIDI notes synchronized to chord playback. Features a scrolling chord strip, interactive keyboard highlighting, smoother playback-synced rendering, segmentation-aware dynamics shaping, and **MIDI file export** for importing chord progressions into any DAW.

### 🎤 Lead Sheet with AI Assistant
Synchronized lyrics transcription with an AI chatbot for contextual music analysis and translation support.

---

## 🚀 Quick Setup

### Prerequisites

• **Node.js 18+** and **npm**
• **Python 3.9+** (for the backend)
• **Git LFS** (for SongFormer checkpoints)
• **Firebase account** (free tier)
• **Gemini API** (free tier)

### Setup Steps

• **Clone and install**

Clone with submodules in one command (for fresh clones).

#### If you already cloned the repo before SongFormer was added
Fetch the large SongFormer model files referenced by this repo, including the checkpoint binaries stored as Git LFS objects.
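The clone-and-install step above can be sketched roughly as follows. The repository URL matches this page's header, but `npm install` in the repo root, the exact submodule layout, and the LFS catch-up commands are assumptions not spelled out in this excerpt:

```shell
# Sketch of a fresh clone with submodules, plus the catch-up path for clones
# made before SongFormer was added (commands are illustrative, not verbatim
# from the repo):
#
#   git clone --recurse-submodules https://github.com/ptnghia-j/ChordMiniApp.git
#   cd ChordMiniApp && npm install
#
#   # If you cloned before SongFormer was added:
#   git submodule update --init --recursive
#   git lfs install && git lfs pull   # fetch checkpoint binaries stored in LFS

# Helper: pipe `git submodule status` into this. Git prefixes uninitialized
# submodules with '-', so any such line means the checkout is incomplete.
submodules_populated() {
  ! grep -q '^-'
}
```

The helper fails whenever `git submodule status` reports an uninitialized entry, so it can gate a setup script before the LFS pull.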
#### Verify that submodules are populated

#### If chord recognition encounters issues with FluidSynth
Install FluidSynth for MIDI synthesis.

• **Environment setup**

Edit the environment file.

• **Start the Python backend** (Terminal 1)

• **Start the frontend** (Terminal 2)

• **Open the application**

Visit http://localhost:3000

---

## 🐳 Docker Deployment (Recommended for Production)

### Prerequisites

• Docker and Docker Compose installed (Get Docker)
• Firebase account with API keys configured

### Quick Start

• **Download the configuration files**
• **Configure the environment**
• **Start the application**
• **Access the application**: visit http://localhost:3000
• **Stop the application**

> **Note:** If you have Docker Compose V1 installed, use `docker-compose` (with a hyphen) instead of `docker compose` (with a space).

### Docker Desktop GUI (Alternative)

If you prefer the Docker Desktop GUI:

• Open Docker Desktop
• Go to the "Images" tab and search for the frontend and backend images
• Pull both images
• Use the "Containers" tab to manage running containers

### Required Environment Variables

Edit the environment file with these required values:

• Firebase API key
• Firebase project ID
• Firebase storage bucket
• YouTube Data API v3 key
• Music.AI API key
• Google Gemini API key
• Genius API key

See the API Keys Setup section below for detailed instructions on obtaining these keys.

---

## 📋 Detailed Setup Instructions

### Firebase Setup

• **Create a Firebase project**

• Visit the Firebase Console
• Click "Create a project"
• Follow the setup wizard

• **Enable Firestore Database**

• Go to "Firestore Database" in the sidebar
• Click "Create database"
• Choose "Start in test mode" for development

• **Get the Firebase configuration**

• Go to Project Settings (gear icon)
• Scroll down to "Your apps"
• Click "Add app" → Web app
• Copy the configuration values into your environment file

• **Create Firestore collections**

The app uses the following Firestore collections.
They are created automatically on first write (no manual creation required):

• Beat and chord analysis results
• Lyrics translation cache (docId: cacheKey based on content hash)
• Music.ai transcription results
• Musical key analysis cache (docId: cacheKey)
• Audio file metadata and URLs
• Async SongFormer segmentation jobs and persisted results

• **Enable Anonymous Authentication**

In the Firebase Console: Authentication → Sign-in method → enable Anonymous.

• **Configure Firebase Storage**

• Set the storage-bucket environment variable
• Folder structure: one folder for audio files and one for optional video files
• Filename pattern requirement: filenames must include the 11-character YouTube video ID in brackets (enforced by Storage rules)
• File size limits (enforced by Storage rules):
• Audio: up to 50 MB
• Video: up to 100 MB

---

## API Keys Setup

**Music.ai API** (deprecated: Music.AI no longer provides individual API keys, only a business plan)

**Google Gemini API**

---

## 🏗️ Backend Architecture

ChordMiniApp uses a **hybrid backend architecture**:

### 🔧 Local Development Backend (Required)

For local development, you **must** run the Python backend locally:

• **URL**: http://localhost:5001
• **Port note**: uses port 5001 to avoid a conflict with the macOS AirPlay/AirTunes service on port 5000

### ☁️ Production Backend (your VPS)

Production deployments are configured for your own VPS; set the backend URL in the corresponding environment variable.

### Prerequisites

• **Python 3.9+** (Python 3.9-3.11 recommended)
• **Virtual environment** (venv or conda)
• **Git** for cloning dependencies
• **System dependencies** (varies by OS)

### Quick Setup

• **Navigate to the backend directory**

• **Create a virtual environment**

• **Install dependencies**

If spleeter and httpx dependencies conflict, use `--no-deps` to skip installing spleeter's pinned dependencies.
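The backend quick-setup steps above might look like the following sketch. The directory name `python_backend`, the `requirements.txt` file, and applying `--no-deps` to the `spleeter` package specifically are assumptions; only the venv approach, the Python 3.9+ prerequisite, and the `--no-deps` workaround are stated in this README:

```shell
# Hedged sketch of the backend setup steps (paths and filenames assumed):
#
#   cd python_backend                 # assumed backend directory name
#   python3 -m venv venv
#   . venv/bin/activate
#   pip install -r requirements.txt
#   # On spleeter/httpx resolver conflicts, skip spleeter's pinned deps:
#   pip install --no-deps spleeter

# Guard matching the "Python 3.9+" prerequisite before creating the venv:
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 9) else 1)' \
  && echo "python OK" || echo "need Python 3.9+"
```

Running the guard first avoids building a venv on an interpreter the backend's models do not support.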
• **Start the local backend on port 5001**

The backend will start on port 5001 and should display its startup log.

• **Verify the backend is running**

Open a new terminal and test the backend.

• **Start the frontend development server**

The frontend will automatically connect to the local backend based on your configuration.

### Backend Features Available Locally

• **Beat Detection**: Beat-Transformer and madmom models
• **Chord Recognition**: Chord-CNN-LSTM, BTC-SL, BTC-PL models
• **Aud…
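The backend-verification step above (and the port-5001 note about avoiding macOS AirPlay/AirTunes on port 5000) can be captured in a small helper. `BACKEND_PORT` is a hypothetical override variable, and the probed endpoint path is an assumption; neither is named in this README:

```shell
# Build the local backend base URL. Port 5001 is the documented default,
# chosen to avoid macOS AirPlay/AirTunes on port 5000. BACKEND_PORT is a
# hypothetical override, not an official variable of this repo.
backend_url() {
  printf 'http://localhost:%s' "${BACKEND_PORT:-5001}"
}

# A health probe might then look like (endpoint path is an assumption):
#   curl -sf "$(backend_url)/" >/dev/null && echo "backend up"
```

Centralizing the URL in one function keeps a verification script consistent with whatever port the backend was actually started on.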