back to home

xming521 / WeClone

🚀 One-stop solution for creating your AI twin from chat history 💡 Fine-tune LLMs with your chat logs to capture your unique style, then bind the model to a chatbot to bring your digital self to life.

16,426 stars
1,345 forks
37 issues
Python

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing xming521/WeClone in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

To optimize performance, source files are only loaded when you start an analysis.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/xming521/WeClone)

Repository Overview (README excerpt)


🚀 One-stop solution for creating your digital avatar from chat history 💡

Simplified Chinese | English | Project Homepage | Documentation

> [!IMPORTANT]
> Telegram is now supported as a data source!

✨ Core Features

• 💫 Complete end-to-end solution for creating digital avatars, covering chat-data export, preprocessing, model training, and deployment
• 💬 Fine-tune an LLM on your chat history, with support for image-modal data, so the model picks up your authentic "flavor"
• 🔗 Integrate with Telegram, and WhatsApp (coming soon), to bring your digital avatar online
• 🛡️ Privacy-information filtering, plus fully local fine-tuning and deployment, keep your data secure and under your control

📋 Features & Notes

Data Source Platform Support

| Platform | Text | Images | Voice | Video | Animated Emojis/Stickers | Links (Sharing) | Quote | Forward | Location | Files |
|----------|------|--------|-------|-------|--------------------------|-----------------|-------|---------|----------|-------|
| Telegram | ✅ | ✅ | ❌ | ❌ | ⚠️ Converted to emoji | ❌ | ❌ | ✅ | ✅ | ❌ |
| WhatsApp | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Discord  | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |
| Slack    | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 | 🚧 |

Deployment Platform Support

| Platform | Deployment Support |
|----------|--------------------|
| Telegram | ✅ |
| WhatsApp | 🚧 |
| Discord  | ✅ |
| Slack    | ✅ |

> [!IMPORTANT]
> - WeClone is still in a rapid-iteration phase; current performance does not represent final results.
> - LLM fine-tuning quality depends largely on model size and on the quantity and quality of your chat data. In theory, larger models with more data yield better results.
> - The 7B model's performance is average; models with 14B or more parameters tend to deliver better results.
> - The Windows environment has not been rigorously tested; WSL can be used as the runtime environment.

Recent Updates

• [25/07/10] Added Telegram as a data source
• [25/06/05] Added support for fine-tuning on image-modal data

Online Fine-Tuning

• Big Model Lab (Lab4AI) (with a 50 CNY voucher): https://www.lab4ai.cn/project/detail?utm_source=weclone1&id=ab83d14684fa45d197f67eddb3d8316c&type=project

Hardware Requirements

The project uses the Qwen2.5-VL-7B-Instruct model by default and fine-tunes it with LoRA in the SFT stage. Other models and methods supported by LLaMA Factory can also be used.

Estimated VRAM requirements:

| Method                          | Precision | 7B    | 14B   | 30B   | 70B    |
|---------------------------------|-----------|-------|-------|-------|--------|
| Full                            | 32        | 120GB | 240GB | 600GB | 1200GB |
| Full                            | 16        | 60GB  | 120GB | 300GB | 600GB  |
| Freeze/LoRA/GaLore/APOLLO/BAdam | 16        | 16GB  | 32GB  | 64GB  | 160GB  |
| QLoRA                           | 8         | 10GB  | 20GB  | 40GB  | 80GB   |
| QLoRA                           | 4         | 6GB   | 12GB  | 24GB  | 48GB   |
| QLoRA                           | 2         | 4GB   | 8GB   | 16GB  | 24GB   |

Environment Setup

• CUDA installation (skip if already installed; **version 12.6 or above is required**)
• It is recommended to use uv, a very fast Python environment manager, to install dependencies. After installing uv, use it to create a new Python environment and install the project's dependencies (a hedged sketch follows below).
• Copy the configuration file template, rename it to settings.jsonc, and make all subsequent configuration changes in this file.

> [!NOTE]
> Training- and inference-related configurations are unified in the settings.jsonc file.

• Test whether the CUDA environment is correctly configured and recognized by PyTorch (not needed on Mac); see the sketch below.
• (Optional) Install FlashAttention to accelerate training and inference (also sketched below).
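The environment-setup steps above can be wired together roughly as follows. This is a minimal sketch, assuming uv is already installed and that Python 3.10 and an editable install are acceptable to the project; check the repository's pyproject.toml and documentation for the exact invocation.

```bash
# Minimal sketch (assumptions: uv installed, Python 3.10, editable install).
git clone https://github.com/xming521/WeClone.git
cd WeClone

uv venv .venv --python=3.10   # create an isolated environment
source .venv/bin/activate     # activate it
uv pip install -e .           # install WeClone and its dependencies
```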
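Checking that PyTorch sees the GPU, and optionally installing FlashAttention, might look like this. The `--no-build-isolation` flag is a common requirement when building flash-attn against an already-installed torch, not something mandated by WeClone.

```bash
# Verify that PyTorch was built with CUDA support and can see a GPU
# (not needed on Mac).
python -c "import torch; print('CUDA available:', torch.cuda.is_available()); print('CUDA version:', torch.version.cuda)"

# Optional: FlashAttention for faster training and inference.
# Building can take a while.
uv pip install flash-attn --no-build-isolation
```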
Model Download

It is recommended to use Hugging Face to download models; a hedged download sketch appears after this excerpt.

Data Preparation

Please use Telegram Desktop to export chat records. In the chat interface, click the top-right corner, then click "Export chat history". Select Photos for the message types and JSON for the format. You can export multiple contacts (group chat records are not recommended), then place the exported chat-record folders for different people side by side in the project's data directory.

Data Preprocessing

• First, adjust the relevant options in the configuration file according to your needs.
• If you use Telegram, set the Telegram user ID option in the configuration file to your own Telegram user ID.
• By default, the project uses Microsoft Presidio to remove personally identifiable information from the data, but 100% identification cannot be guaranteed.
• Therefore, a blocklist is provided so that users can manually add words or phrases they want to filter (by default, the entire sentence containing a blocked word is removed).

> [!IMPORTANT]
> 🚨 Please be sure to protect personal privacy and do not leak personal information!

• Execute the data-processing command to process the data (see the workflow sketch after this excerpt). You can adjust the prompt settings in settings.jsonc to match your own chat style.

More Parameter Details: Data Preprocessing

Configure Parameters and Fine-tune the Model

• (Optional) Change the model settings in the configuration file to select another locally downloaded model.
• Adjust the batch-related training settings to control VRAM usage.
• You can tune training hyperparameters in the configuration file based on your dataset's quantity and quality.

Single-GPU Training

Run the SFT training command (covered in the workflow sketch after this excerpt).

Multi-GPU Training

Uncomment the relevant line in the configuration file and use the multi-GPU training command.

Simple Inference with the Browser Demo

Test suitable temperature and top_p values, then update them in settings.jsonc for subsequent inference.

Inference Using the API

Start the API service, then query it (a curl example appears after this excerpt).

Test with Common Chat Questions

The test set does not include questions that ask for personal information, only daily conversation. Test results are in test_result-my.txt.

🖼️ Results Showcase

> [!TIP]
> **We're looking for interesting examples of native English speakers chatting with WeClone! Feel free to share them with us on Twitter.**

🤖 Deploy to Chat Bots

AstrBot

AstrBot is an easy-to-use, multi-platform LLM chatbot and development framework ✨ It supports Discord, Telegram, Slack, Feishu, and other platforms.

Usage steps:

• Deploy AstrBot
• Set up messaging platforms such as Discord, Telegram, or Slack in AstrBot
• Execute to start t…
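Model download, sketched with the Hugging Face CLI as the README recommends. The local target directory is an arbitrary example, not a path the project requires.

```bash
# Download the default base model from Hugging Face.
# "models/Qwen2.5-VL-7B-Instruct" is just an example target directory.
huggingface-cli download Qwen/Qwen2.5-VL-7B-Instruct \
  --local-dir models/Qwen2.5-VL-7B-Instruct
```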
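The preprocessing, training, demo, and testing steps above are driven by the project's command-line interface. The subcommand names below follow the workflow order described in the README but are assumptions; confirm them against the project's documentation or `--help` output before relying on them.

```bash
# Hedged sketch of the end-to-end workflow; subcommand names are assumptions.
weclone-cli make-dataset    # preprocess exported chats into a training dataset
weclone-cli train-sft       # single-GPU LoRA SFT, configured via settings.jsonc
weclone-cli webchat-demo    # browser demo to tune temperature / top_p
weclone-cli server          # start the inference API
weclone-cli test-model      # run the common chat-question test set
```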
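Once the API service is running, a quick smoke test could look like the following. This assumes an OpenAI-compatible endpoint on port 8005 and uses a placeholder model name; both are assumptions to adjust to your actual configuration.

```bash
# Query the local inference API (port and model name are assumptions).
curl http://127.0.0.1:8005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "What are you up to this weekend?"}],
        "temperature": 0.7
      }'
```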