Update README.md
Unlike simple chatbots, MyStar features a "living" personality with long-term memory.
### 🛠️ Tech Stack

* **Python 3.10+**
* **Ollama:** Local LLM backend.
* **[SD.Next](https://github.com/vladmandic/sdnext):** Advanced Stable Diffusion WebUI.
* **FFmpeg:** Audio processing for Whisper.
---

### 📦 Installation Guide
#### Step 1: Install & Configure Ollama

1. **Download Ollama:**
   * **Standard:** Download from the [Official Website](https://ollama.com/download).
   * **For unsupported AMD GPUs:** If you have an older or unsupported Radeon card, use this fork: [Ollama for AMD](https://github.com/likelovewant/ollama-for-amd).
2. **Pull the Model:**
   Open a terminal and pull the model used in the config (default is `ministral-3:14b`):
   ```bash
   ollama pull ministral-3:14b
   ```
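Before starting the bot, it can help to confirm that Ollama is actually reachable and that the model was pulled. The sketch below is illustrative, not part of the bot's source; it assumes the default Ollama REST endpoint at `http://localhost:11434` and uses only the standard library.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint; adjust if needed


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}


def model_is_pulled(model: str) -> bool:
    """Return True if `model` appears in Ollama's local model list (/api/tags)."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        tags = json.load(resp)
    return any(m["name"].startswith(model) for m in tags.get("models", []))


if __name__ == "__main__":
    # Requires a running Ollama instance.
    print("model pulled:", model_is_pulled("ministral-3:14b"))
```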
#### Step 2: Install & Configure SD.Next

1. **Install SD.Next:**
   Clone and install the repository from [vladmandic/sdnext](https://github.com/vladmandic/sdnext).
2. **Download the Checkpoint:**
   Download the **[✨ JANKU Trained + NoobAI + RouWei Illustrious XL ✨](https://civitai.com/models/1277670?modelVersionId=2358314)** model. This SDXL checkpoint is required for the intended visual style.
3. **Setup:**
   * Place the downloaded model into the `models/Stable-diffusion` folder inside your SD.Next directory.
   * Run SD.Next with the API flag enabled:
     ```bash
     ./webui.sh --api --debug
     # Or on Windows:
     # webui.bat --api --debug
     ```
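Running with `--api` exposes SD.Next's `sdapi` HTTP endpoints, so you can smoke-test image generation without the bot. This is a minimal stdlib-only sketch, assuming the default address `http://127.0.0.1:7860`; the payload fields are generic txt2img parameters, not values taken from the bot's config.

```python
import base64
import json
import urllib.request

SDNEXT_URL = "http://127.0.0.1:7860"  # default SD.Next address; adjust if needed


def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 1024, height: int = 1024) -> dict:
    """Assemble a basic request body for /sdapi/v1/txt2img."""
    return {
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }


def generate_image(prompt: str, out_path: str = "test.png") -> None:
    """Send one txt2img request and save the first returned image."""
    req = urllib.request.Request(
        f"{SDNEXT_URL}/sdapi/v1/txt2img",
        data=json.dumps(build_txt2img_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        image_b64 = json.load(resp)["images"][0]  # images are base64-encoded
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(image_b64))


if __name__ == "__main__":
    # Requires SD.Next running with --api and the checkpoint loaded.
    generate_image("1girl, anime style, smiling")
```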
#### Step 3: Install The Bot

1. **Clone this repository:**
   ```bash
   git clone https://git.maxo.one/Maxo/MyStar.git
   cd MyStar
   ```
2. **Install Python Dependencies:**
   ```bash
   pip install "python-telegram-bot[job-queue]" openai-whisper requests
   ```
   *(Note: the `job-queue` extra already includes the base `python-telegram-bot` package. You also need `ffmpeg` available on your system path for Whisper to work.)*
3. **Environment Setup:**
   Set your Telegram token as an environment variable (you can also edit the `TELEGRAM_BOT_TOKEN` line in the script directly, but that is not recommended for security reasons):
   ```bash
   export TELEGRAM_BOT_TOKEN="your_telegram_bot_token"
   # Optional:
   # export OLLAMA_NUM_GPU=63
   ```
4. **Run the Bot:**
   ```bash
   python MyStarENG.py
   ```
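The token lookup from the environment setup step can be sketched as a small helper that fails fast with a clear message when the variable is missing. The function name here is illustrative, not taken from the bot's source.

```python
import os


def load_bot_token() -> str:
    """Read TELEGRAM_BOT_TOKEN from the environment, failing fast if unset."""
    token = os.environ.get("TELEGRAM_BOT_TOKEN")
    if not token:
        raise RuntimeError(
            "TELEGRAM_BOT_TOKEN is not set. "
            'Run: export TELEGRAM_BOT_TOKEN="your_telegram_bot_token"'
        )
    return token
```

Failing at startup is preferable to letting the Telegram client raise an opaque authentication error later.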
### 📄 License