---
name: runpod-anime
description: Manage RunPod ComfyUI instance for anime image generation using Animagine XL 3.1 (SDXL). Handles pod start/stop/status, ComfyUI API integration for queuing prompts, workflow management, and anime prompt engineering with Danbooru-style tags.
---

# RunPod Anime Generation Skill

## Overview

This skill manages a RunPod instance running ComfyUI with **Animagine XL 3.1** (SDXL-based anime checkpoint) on an **NVIDIA A100 80GB PCIe** GPU. It handles prompt queuing, workflow management, and anime-specific prompt engineering.

## Instance Details

- **ComfyUI URL:** `https://n7ecweltbele73-8188.proxy.runpod.net/`
- **Pod Name:** anime-op-v3-netvolume
- **GPU:** NVIDIA A100 80GB PCIe
- **Network Volume:** 200GB (persists across migrations)

## Network Volume Paths

Models persist at:
- Checkpoints: `/workspace/runpod-slim/ComfyUI/models/checkpoints/`
- Outputs: `/workspace/runpod-slim/ComfyUI/output/`

Current checkpoint: `sd_xl_anime_final.safetensors` (6.5GB, already on volume)

## Pod Management Commands

### Check Pod Status
```bash
python3 /mnt/c/Users/fbmor/runpod_anime_automation.py --status
```

### Start Pod
```bash
python3 /mnt/c/Users/fbmor/runpod_anime_automation.py --start
```

### Stop Pod
```bash
python3 /mnt/c/Users/fbmor/runpod_anime_automation.py --stop
```

## ComfyUI API Integration

### Check System Status
```bash
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/system_stats"
```

### Queue a Prompt (Text-to-Image)

POST to `/api/prompt` with the following JSON structure:

```json
{
  "prompt": {
    "1": {
      "class_type": "CheckpointLoaderSimple",
      "inputs": {"ckpt_name": "animagine-xl-3.1.safetensors"}
    },
    "2": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "text": "<POSITIVE_PROMPT>",
        "clip": ["1", 1]
      }
    },
    "3": {
      "class_type": "CLIPTextEncode",
      "inputs": {
        "text": "<NEGATIVE_PROMPT>",
        "clip": ["1", 1]
      }
    },
    "4": {
      "class_type": "EmptyLatentImage",
      "inputs": {"width": 832, "height": 1216, "batch_size": 1}
    },
    "5": {
      "class_type": "KSampler",
      "inputs": {
        "seed": 42,
        "steps": 28,
        "cfg": 7,
        "sampler_name": "euler_ancestral",
        "scheduler": "normal",
        "denoise": 1.0,
        "model": ["1", 0],
        "positive": ["2", 0],
        "negative": ["3", 0],
        "latent_image": ["4", 0]
      }
    },
    "6": {
      "class_type": "VAEDecode",
      "inputs": {"samples": ["5", 0], "vae": ["1", 2]}
    },
    "7": {
      "class_type": "SaveImage",
      "inputs": {"filename_prefix": "AnimagineXL_", "images": ["6", 0]}
    }
  }
}
```

Example curl command:
```bash
curl -s -X POST "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/prompt" \
  -H "Content-Type: application/json" \
  -d '{"prompt": { ... }}'
```

**Response:** `{"prompt_id": "<uuid>", "number": 1, "node_errors": {}}` on success.
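The same payload can be assembled and POSTed from Python using only the standard library. This is an illustrative sketch: `build_workflow` and `queue_prompt` are helper names of my own, not part of the ComfyUI API, and the URL should be replaced with your pod's actual proxy URL.

```python
import json
import urllib.request

COMFY_URL = "https://bvlnsjffdkgdkt-8188.proxy.runpod.net"  # substitute your pod's proxy URL

def build_workflow(positive, negative, seed=42, width=832, height=1216, steps=28, cfg=7):
    """Build the 7-node text-to-image graph shown above as an /api/prompt payload."""
    return {"prompt": {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "animagine-xl-3.1.safetensors"}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": width, "height": height, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"seed": seed, "steps": steps, "cfg": cfg,
                         "sampler_name": "euler_ancestral", "scheduler": "normal",
                         "denoise": 1.0, "model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0]}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"filename_prefix": "AnimagineXL_", "images": ["6", 0]}},
    }}

def queue_prompt(workflow):
    """POST the workflow to /api/prompt; returns the parsed response (prompt_id etc.)."""
    req = urllib.request.Request(
        f"{COMFY_URL}/api/prompt",
        data=json.dumps(workflow).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

wf = build_workflow("masterpiece, best quality, 1girl, solo", "lowres, bad anatomy")
print(wf["prompt"]["5"]["inputs"]["steps"])  # → 28
```

Call `queue_prompt(wf)` while the pod is running to enqueue the generation.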

### Check Queue Status
```bash
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/queue"
```

### Get Generation History
```bash
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/history"
```

### View Generated Images
```bash
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/view?filename=AnimagineXL_00001_.png&subfolder=&type=output"
```
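The `/api/history` response nests image records under each prompt's node outputs. A small helper (the function name is mine) pulls out every saved filename for use with `/api/view`:

```python
def extract_filenames(history):
    """Collect all saved image filenames from a parsed /api/history response."""
    names = []
    for entry in history.values():
        for node_output in entry.get("outputs", {}).values():
            for img in node_output.get("images", []):
                names.append(img["filename"])
    return names

# Example with the shape /api/history returns (trimmed):
sample = {"abc-uuid": {"outputs": {"7": {"images": [
    {"filename": "AnimagineXL_00001_.png", "subfolder": "", "type": "output"}]}}}}
print(extract_filenames(sample))  # → ['AnimagineXL_00001_.png']
```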

## KSampler Settings (Optimized for Animagine XL 3.1)

| Parameter | Value | Notes |
|---|---|---|
| steps | 28 | High quality; can reduce to 20 for speed |
| cfg | 7 | Sweet spot for Animagine XL |
| sampler_name | euler_ancestral | Best for anime style, adds natural variation |
| scheduler | normal | Also works: karras |
| denoise | 1.0 | Full generation (reduce for img2img) |
| seed | randomize or fixed | Use fixed for reproducibility |

## Supported Resolutions (SDXL Native)

| Aspect | Width | Height | Use Case |
|---|---|---|---|
| Portrait | 832 | 1216 | Characters, full body |
| Landscape | 1216 | 832 | Scenes, environments |
| Square | 1024 | 1024 | Headshots, icons |
| Wide | 1344 | 768 | Cinematic, panoramic |
| Tall | 768 | 1344 | Vertical scenes |

**Important:** Always use SDXL-native resolutions; non-standard sizes can cause artifacts.
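To stay on SDXL-native sizes programmatically, a helper like the following (illustrative, not part of any API) can snap an arbitrary request to the table entry with the closest aspect ratio:

```python
# SDXL-native resolutions from the table above, as (width, height)
SDXL_RESOLUTIONS = [(832, 1216), (1216, 832), (1024, 1024), (1344, 768), (768, 1344)]

def nearest_sdxl_resolution(width, height):
    """Return the SDXL-native resolution whose aspect ratio is closest to width/height."""
    target = width / height
    return min(SDXL_RESOLUTIONS, key=lambda wh: abs(wh[0] / wh[1] - target))

print(nearest_sdxl_resolution(1080, 1920))  # → (768, 1344)
```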

## Anime Prompt Engineering (Animagine XL 3.1)

### Prompt Format

Animagine XL uses **Danbooru-style tags** (comma-separated), NOT natural language sentences.

### Quality Tags (Always Include)

Start every positive prompt with quality boosters:
```
masterpiece, best quality, very aesthetic, absurdres
```

### Prompt Structure

```
<quality tags>, <character description>, <scene/action>, <background>, <style tags>
```

### Example Positive Prompts

**Character portrait:**
```
masterpiece, best quality, very aesthetic, absurdres, 1girl, solo, long hair, silver hair, red eyes, detailed face, school uniform, sailor collar, pleated skirt, standing, cherry blossoms, spring, blue sky, detailed background
```

**Action scene:**
```
masterpiece, best quality, very aesthetic, absurdres, 1boy, solo, spiky hair, black hair, glowing eyes, battle stance, sword, energy aura, dynamic pose, ruins, dramatic lighting, dark atmosphere
```

**Group scene:**
```
masterpiece, best quality, very aesthetic, absurdres, 2girls, holding hands, smiling, flower field, sunset, wind, flowing hair, white dress, summer, warm colors
```

### Standard Negative Prompt

Always use this negative prompt unless specifically adjusted:
```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name
```
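The structure above can be wrapped in a small helper so the quality tags always lead and the standard negative is supplied by default. The helper and constant names are mine, not part of any tooling:

```python
QUALITY_TAGS = "masterpiece, best quality, very aesthetic, absurdres"
STANDARD_NEGATIVE = ("lowres, bad anatomy, bad hands, text, error, missing fingers, "
                     "extra digit, fewer digits, cropped, worst quality, low quality, "
                     "normal quality, jpeg artifacts, signature, watermark, username, "
                     "blurry, artist name")

def build_prompts(character, scene="", background="", style=""):
    """Assemble a Danbooru-style positive prompt (quality tags first) plus the standard negative."""
    parts = [QUALITY_TAGS, character, scene, background, style]
    positive = ", ".join(p.strip() for p in parts if p.strip())
    return positive, STANDARD_NEGATIVE

pos, neg = build_prompts("1girl, solo, long hair, silver hair",
                         scene="standing", background="cherry blossoms, spring")
print(pos)
```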

### Useful Tags Reference

- **Hair:** long hair, short hair, twintails, ponytail, braids, silver hair, blue hair, black hair, blonde hair
- **Eyes:** blue eyes, red eyes, green eyes, heterochromia, glowing eyes, detailed eyes
- **Clothing:** school uniform, armor, dress, kimono, hoodie, military uniform, maid outfit
- **Expression:** smile, blush, crying, angry, surprised, closed eyes, open mouth
- **Pose:** standing, sitting, running, dynamic pose, looking at viewer, from above, from below
- **Background:** detailed background, simple background, gradient background, outdoors, indoors, night sky, city, forest
- **Lighting:** dramatic lighting, backlighting, rim lighting, soft lighting, golden hour
- **Style:** anime coloring, flat color, cel shading, painterly, sketch, lineart

### Tags to Avoid

- Natural language sentences (the model doesn't understand them well)
- Conflicting tags (e.g., both "smile" and "crying" unless intentional)
- Too many characters (3+ characters degrades quality significantly)
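A quick lint pass can catch conflicting tags before queuing. The conflict pairs below are illustrative examples of my own, not an official list:

```python
# Illustrative conflicting-tag groups; extend as needed
CONFLICTS = [({"smile", "smiling"}, {"crying", "angry"}),
             ({"closed eyes"}, {"glowing eyes", "detailed eyes"}),
             ({"indoors"}, {"outdoors"})]

def find_conflicts(prompt):
    """Return (tag_a, tag_b) pairs from the prompt that usually fight each other."""
    tags = {t.strip() for t in prompt.split(",")}
    hits = []
    for group_a, group_b in CONFLICTS:
        for a in group_a & tags:
            for b in group_b & tags:
                hits.append((a, b))
    return hits

print(find_conflicts("1girl, smile, crying, outdoors"))  # → [('smile', 'crying')]
```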

## Workflow JSON (for Visual Canvas Loading)

This workflow JSON can be pasted (Ctrl+V) onto the ComfyUI canvas to load the visual node graph:

```json
{"last_node_id":7,"last_link_id":9,"nodes":[{"id":1,"type":"CheckpointLoaderSimple","pos":[50,200],"size":[315,98],"flags":{},"order":0,"mode":0,"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[2,3],"slot_index":1},{"name":"VAE","type":"VAE","links":[4],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["animagine-xl-3.1.safetensors"],"title":"Load Checkpoint"},{"id":2,"type":"CLIPTextEncode","pos":[450,100],"size":[420,164],"flags":{},"order":1,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":2}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[5],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["1girl, solo, long hair, blue eyes, school uniform, cherry blossoms, spring, detailed background, masterpiece, best quality, very aesthetic, absurdres"],"title":"Positive Prompt","color":"#232","bgcolor":"#353"},{"id":3,"type":"CLIPTextEncode","pos":[450,340],"size":[420,164],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, nsfw"],"title":"Negative Prompt","color":"#322","bgcolor":"#533"},{"id":4,"type":"EmptyLatentImage","pos":[450,570],"size":[315,106],"flags":{},"order":3,"mode":0,"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[832,1216,1],"title":"Empty Latent Image"},
{"id":5,"type":"KSampler","pos":[950,200],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1},{"name":"positive","type":"CONDITIONING","link":5},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":7}],"outputs":[{"name":"LATENT","type":"LATENT","links":[8],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[42,"fixed",28,7,"euler_ancestral","normal",1.0],"title":"KSampler"},{"id":6,"type":"VAEDecode","pos":[1350,250],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":8},{"name":"vae","type":"VAE","link":4}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"title":"VAE Decode"},{"id":7,"type":"SaveImage","pos":[1650,200],"size":[315,270],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"properties":{"Node name for S&R":"SaveImage"},"widgets_values":["AnimagineXL_"],"title":"Save Image"}],"links":[[1,1,0,5,0,"MODEL"],[2,1,1,2,0,"CLIP"],[3,1,1,3,0,"CLIP"],[4,1,2,6,1,"VAE"],[5,2,0,5,1,"CONDITIONING"],[6,3,0,5,2,"CONDITIONING"],[7,4,0,5,3,"LATENT"],[8,5,0,6,0,"LATENT"],[9,6,0,7,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.9,"offset":[-20,50]}},"version":0.4}
```

## Troubleshooting

- **"Missing Models" error in ComfyUI:** The old Wan 2.1 workflow nodes still reference deleted models. Clear the canvas and load the Animagine XL workflow above.
- **API returns 500:** Check that the checkpoint name matches exactly: `animagine-xl-3.1.safetensors`
- **Low quality output:** Ensure quality tags are at the START of the positive prompt. Use SDXL-native resolutions only.
- **File uploads are blocked (405 Method Not Allowed):** workflows cannot be uploaded via the API. Paste the workflow JSON above onto the canvas with Ctrl+V instead.

## IP-Adapter Workflow (For Exact Character Consistency)

For exact character matches to your reference images, use IP-Adapter in the web interface.

### Character Reference Images

Reference images located at:
```
P:\Ai\Openclaw\shared-exchange\Broken Spire\book-vault\characters\Characters images concepts\
```

Files:
- `Ash.png` / `Ash 2.png` - Main character reference
- `Far-Future Ash.png` - Villain version
- `Everly.png` - Military character
- `Éva Moreau.png` - Doctor/scientist
- `Nova Human.png` / `Nova devil.png` - Warrior (human and devil forms)
- `Violet humaine.png` / `Violet Devil.png` - Duality character
- `Lin Weishan.png` - Asian fighter
- `TC-23.png` - Esper/mechanical
- `Jonas.png` - Ghost/memory
- `Seraphine Vale.png` - Royal character
- `TK.png` - Additional character
- `Esper.png` - Additional esper

### IP-Adapter Setup in Web UI

1. **Open ComfyUI:** `https://bvlnsjffdkgdkt-8188.proxy.runpod.net`

2. **Load Checkpoint:** Select `animagine-xl-3.1.safetensors`

3. **Add IP-Adapter Unified Loader:**
   - Double-click canvas → search "IPAdapter Unified Loader"
   - Connect MODEL from Checkpoint to its model input
   - This auto-loads IP-Adapter Plus + CLIP Vision models

4. **Add IPAdapter Advanced:**
   - Double-click → search "IPAdapter Advanced"
   - Connect IPAdapter Unified Loader MODEL → IPAdapter Advanced model input

5. **Add Load Image (for each character reference):**
   - Double-click → search "Load Image"
   - Load your character reference (e.g., Ash.png)
   - Connect to IPAdapter Advanced image input

6. **Connect Flow:**
   - IPAdapter Advanced MODEL → KSampler model input
   - CLIP Text Encode (positive/negative) → KSampler conditioning
   - Empty Latent Image → KSampler latent_image
   - KSampler → VAE Decode → Save Image

7. **Generate:** The output will closely match your reference character

### Character Consistency Tips

- Use the SAME reference image for all appearances of a character
- For Far-Future Ash (evil version): use `Far-Future Ash.png`
- For Nova's two forms: use `Nova Human.png` and `Nova devil.png` separately
- For Violet duality: generate twice with each reference

### Generating Multiple Frames with IP-Adapter

For the anime opening, repeat for each frame:
1. Load appropriate character reference(s)
2. Enter scene prompt
3. Generate
4. Save to output folder

### Download Generated Images

```bash
# List all outputs
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/history" | python3 -c "
import sys,json; d=json.load(sys.stdin)
for k,v in d.items():
    outputs = v.get('outputs',{})
    for node,data in outputs.items():
        imgs = data.get('images',[])
        for img in imgs: print(img['filename'])
"

# Download specific image
curl -s "https://bvlnsjffdkgdkt-8188.proxy.runpod.net/api/view?filename=IMAGE_NAME.png&subfolder=&type=output" -o output.png
```

## Iterative Consistency Correction

An automated quality-control loop: the script generates images, analyzes them against character references, and regenerates until the consistency threshold is met.

### Quick Start

```bash
cd /mnt/c/Users/fbmor/broken-spire-comparison
python3 automated_consistency_workflow.py --iterative
```

### Commands

| Command | Description |
|---------|-------------|
| `--analyze` | Only analyze, don't regenerate |
| `--iterative` | Run iterative correction loop |
| `--threshold` | Consistency threshold (default: 0.85 = 85%) |
| `--max-iterations` | Max iterations (default: 10) |

### How It Works

1. **Generate** - Queue prompts to RunPod ComfyUI
2. **Analyze** - Compare each generated frame against character reference using perceptual hash + SSIM
3. **Score** - Calculate consistency score (0-100%)
4. **Loop** - If score < 85%, adjust prompt and regenerate
5. **Repeat** - Until all frames pass threshold or max iterations reached
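The loop above can be sketched in plain Python. All names here are illustrative, and the `score` callback stands in for the real pHash + SSIM comparison:

```python
def iterative_correction(frames, reference, generate, score,
                         threshold=0.85, max_iterations=10):
    """For each frame: score it against the reference; if below the threshold,
    regenerate with an adjusted prompt and try again, up to max_iterations.
    Returns {original_frame: (final_frame, final_score)}."""
    results = {}
    for frame in frames:
        current = frame
        for i in range(max_iterations):
            if score(current, reference) >= threshold:
                break
            current = generate(current, i)  # adjust prompt and regenerate
        results[frame] = (current, score(current, reference))
    return results

# Toy demo: each "regeneration" appends a '+' and raises the score by 0.2
demo = iterative_correction(
    frames=["frame_01"], reference="Ash.png",
    generate=lambda f, i: f + "+",
    score=lambda f, r: 0.5 + 0.2 * f.count("+"),
)
print(demo)  # frame passes after two regenerations
```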

### Analysis Metrics

- **Perceptual Hash (pHash)** - Detects structural similarity
- **SSIM** - Measures perceived quality differences
- **Combined Score** - Weighted average targeting 85% threshold
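As a worked example of the weighted average (the 50/50 weights below are an assumption; the tool's actual weights aren't documented here):

```python
def combined_score(phash_similarity, ssim_value, w_phash=0.5, w_ssim=0.5):
    """Weighted average of the two metrics, each on a 0.0-1.0 scale.
    The 50/50 default weights are an assumption, not the tool's documented values."""
    return w_phash * phash_similarity + w_ssim * ssim_value

print(combined_score(0.9, 0.8))  # lands right at the 0.85 default threshold
```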

### Reference Images

Character references located at:
```
P:\Ai\Openclaw\shared-exchange\Broken Spire\book-vault\characters\Characters images concepts\
```

Files:
- `Ash.png` - Main character
- `Far-Future Ash.png` - Villain version
- `Everly.png` - Military character
- `Éva Moreau.png` - Doctor/scientist
- `Nova Human.png` - Warrior (human form)
- `Violet Devil.png` / `Violet humaine.png` - Duality character
- `Lin Weishan.png` - Fighter
- `TC-23.png` - Esper
- `Jonas.png` - Ghost/memory

### Troubleshooting

- **"No image in history"** - Start the RunPod pod first
- **"Low consistency score"** - Add more specific character tags to prompt
- **"Consistent failures"** - Check reference image matches desired output

Note: RunPod pod must be running. Start with:
```bash
python3 /mnt/c/Users/fbmor/runpod_anime_automation.py --start
```

## Remote Access Setup

### Option 1: OpenSSH Server (Recommended for full terminal access)

Run in PowerShell **as Administrator**:

```powershell
# Install OpenSSH Server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

# Start the service
Start-Service sshd

# Check status
Get-Service sshd
```

Get your local IP:
```cmd
ipconfig
```
Look for the IPv4 Address (e.g., `192.168.1.x`).

### Option 2: Cloudflare Tunnel (Quick public URL)

```powershell
# Download cloudflared
irm https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.exe -OutFile cloudflared.exe

# Named tunnel (requires a Cloudflare account and DNS routing)
.\cloudflared.exe tunnel login
.\cloudflared.exe tunnel create my-comfyui
.\cloudflared.exe tunnel run my-comfyui

# Or expose the port directly with a quick tunnel (prints a public trycloudflare.com URL):
.\cloudflared.exe tunnel --url http://localhost:8188
```

Once you have a public URL, the ComfyUI API can be accessed directly through it.