Update README.md

2025-12-22 02:38:27 +01:00
parent b0767c490e
commit 3c93d35bb4


@@ -34,44 +34,66 @@ Unlike simple chatbots, MyStar features a "living" personality with long-term me
---
### 🛠️ Tech Stack
* **Python 3.10+**
* **Ollama:** Local LLM backend.
* **SD.Next:** Advanced Stable Diffusion WebUI.
* **FFmpeg:** Audio processing for Whisper.
---
### 📦 Installation Guide

#### Step 1: Install & Configure Ollama
1. **Download Ollama:**
    * **Standard:** Download from the [Official Website](https://ollama.com/download).
    * **For unsupported AMD GPUs:** If you have an older or unsupported Radeon card, use this fork: [Ollama for AMD](https://github.com/likelovewant/ollama-for-amd).
2. **Pull the Model:**
    Open your terminal and pull the model used in the config (default is `ministral-3:14b`):
    ```bash
    ollama pull ministral-3:14b
    ```
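    To confirm the model is actually available before starting the bot, you can query the Ollama HTTP API with the `requests` package (installed in Step 3). This is a minimal sketch that assumes Ollama's default local port `11434`; adjust the URL if you run it elsewhere.
    ```python
    import requests

    # Assumes Ollama's default local endpoint; change if yours differs.
    OLLAMA_URL = "http://localhost:11434"
    MODEL = "ministral-3:14b"  # the model pulled above

    # /api/tags lists the models available locally.
    tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5).json()
    names = [m["name"] for m in tags.get("models", [])]

    if any(name.startswith(MODEL) for name in names):
        print(f"{MODEL} is available locally.")
    else:
        print(f"{MODEL} not found - run `ollama pull {MODEL}` first.")
    ```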

#### Step 2: Install & Configure SD.Next
1. **Install SD.Next:**
    Clone and install the repository from [vladmandic/sdnext](https://github.com/vladmandic/sdnext).
2. **Download the Checkpoint:**
    Download the **[✨ JANKU Trained + NoobAI + RouWei Illustrious XL ✨](https://civitai.com/models/1277670?modelVersionId=2358314)** model.
3. **Setup:**
    * Place the downloaded model into the `models/Stable-diffusion` folder inside your SD.Next directory.
    * Run SD.Next with the API flag enabled:
      ```bash
      ./webui.sh --api --debug
      # Or on Windows:
      # webui.bat --api --debug
      ```
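    In API mode, SD.Next serves an A1111-compatible REST interface. The snippet below is only an illustration of a text-to-image call, assuming the default local port `7860` and the standard `/sdapi/v1/txt2img` route; the prompt and resolution are placeholders, not the bot's actual settings.
    ```python
    import base64
    import requests

    # Assumes SD.Next is running locally with --api on its default port.
    SDNEXT_URL = "http://127.0.0.1:7860"

    payload = {
        "prompt": "1girl, smiling, starry night sky",
        "negative_prompt": "lowres, bad anatomy",
        "steps": 25,
        "width": 832,
        "height": 1216,
    }

    # Send the request and save the first returned image (base64-encoded).
    resp = requests.post(f"{SDNEXT_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
    resp.raise_for_status()
    image_b64 = resp.json()["images"][0]
    with open("test.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
    print("Saved test.png")
    ```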

#### Step 3: Install the Bot
1. **Clone this repository:**
    ```bash
    git clone https://git.maxo.one/Maxo/MyStar.git
    cd MyStar
    ```
2. **Install Python Dependencies:**
    ```bash
    pip install python-telegram-bot python-telegram-bot[job-queue] openai-whisper requests
    ```
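    `openai-whisper` relies on `ffmpeg` being available on your system `PATH` to decode audio. A quick way to verify the voice pipeline works end to end (the model size and file name below are arbitrary examples, not the bot's configuration):
    ```python
    import whisper

    # Loads a small Whisper model and transcribes a sample voice file.
    # Requires ffmpeg on the system PATH to decode the audio.
    model = whisper.load_model("base")
    result = model.transcribe("sample_voice.ogg")
    print(result["text"])
    ```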
3. **Environment Setup:**
    Set your Telegram token as an environment variable (or edit the `TELEGRAM_BOT_TOKEN` line in the script directly, though this is not recommended for security reasons):
    ```bash
    export TELEGRAM_BOT_TOKEN="your_telegram_bot_token"
    # Optional:
    # export OLLAMA_NUM_GPU=63
    ```
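    For reference, the usual way a Python script picks up such a token is via `os.environ`; the snippet below is a generic illustration of that pattern, not an excerpt from `MyStarENG.py`.
    ```python
    import os

    # Prefer the environment variable; fail early if it is missing.
    TELEGRAM_BOT_TOKEN = os.environ.get("TELEGRAM_BOT_TOKEN")
    if not TELEGRAM_BOT_TOKEN:
        raise RuntimeError("Set TELEGRAM_BOT_TOKEN before starting the bot.")
    ```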
4. **Run the Bot:**
    ```bash
    python MyStarENG.py
    ```

### 📄 License