mirror-pond

A 100% local Tobyworld Mirror that runs any GGUF model through llama.cpp. No cloud. No tracking. Just your pond, your reflection, your machine. FastAPI + llama.cpp • Offline • MIT Licensed.

🪞 Mirror Pond — Local GGUF Edition

A still-water reflection engine for your local LLM.

Mirror Pond is a 100% local, privacy-first, Tobyworld-inspired reflection interface that runs any GGUF model through llama.cpp with a calm Mirror UI.

No cloud. No tracking. Your thoughts stay on your machine.


✨ Features

  • 🧠 Runs any GGUF model — Llama, DeepSeek, Mistral, or your trained Mirror

  • 🌑 Dark, still Mirror UI (HTML served locally)

  • 💬 Four modes:

    • Reflect — emotional / introspective
    • Scroll — lore / quotes / scripture
    • Toad — cryptic toadgang whispers
    • Rune — symbols, lotus, $PATIENCE, seasons
  • 🔒 Fully offline (air-gap compatible)

  • ⚡ FastAPI + Uvicorn backend

  • 🧩 Optional: GPU acceleration via llama-cpp-python CUDA wheels


🚀 Quickstart

1. Install dependencies

pip install -r requirements.txt

2. Run the Pond

python mirror_pond.py --model ./your_model.gguf --port 7777

3. Open in browser

http://localhost:7777
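Once the pond is up, the UI talks to the FastAPI backend over HTTP. The exact API isn't documented above, so the following is an illustration only: the `/ask` path and the `question`/`mode` field names are assumptions, not the real contract. A request could be built with the standard library like this:

```python
import json
import urllib.request

def build_mirror_request(question: str, mode: str = "reflect",
                         base_url: str = "http://localhost:7777") -> urllib.request.Request:
    """Build a POST for the pond's backend. The /ask path and the
    question/mode field names are assumptions, not the documented API."""
    payload = json.dumps({"question": question, "mode": mode}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/ask",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_mirror_request("What does the still water hold?", mode="scroll")
print(req.get_full_url())  # http://localhost:7777/ask
```

Sending it with `urllib.request.urlopen(req)` would then return whatever JSON the server actually emits.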

πŸ“ Requirements

requirements.txt (included):

fastapi==0.115.0
uvicorn==0.32.0
pydantic==2.8.2
llama-cpp-python==0.3.2
jinja2==3.1.4

🔥 GPU Acceleration (Optional)

For NVIDIA CUDA, install a prebuilt wheel from the llama-cpp-python wheel index (cu121 shown; match it to your CUDA version):

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121

For AMD ROCm, build from source with hipBLAS enabled:

CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python

For Apple Silicon (M1/M2/M3):

CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python

🧱 Folder Structure

mirror-pond/
│
├── mirror_pond.py        # main server
├── requirements.txt      # dependencies
├── setup.sh              # Linux/macOS installer
├── setup.ps1             # Windows installer
├── Dockerfile            # container build
└── README.md             # this file

🧪 Installation Kits

Linux / macOS Installer

chmod +x setup.sh
./setup.sh ./models/your_model.gguf 7777

Windows Installer

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\setup.ps1 .\models\your_model.gguf 7777

Both installers:

  • Create ./venv
  • Install Python deps
  • Launch Mirror Pond automatically

🐳 Docker Usage

Build

docker build -t mirror-pond:latest .

Run

docker run --rm -p 7777:7777 \
  -v /path/to/models:/models \
  -e MODEL_PATH=/models/your_model.gguf \
  mirror-pond:latest

Now access:

http://localhost:7777
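Inside the container, the server picks up the model location from the MODEL_PATH environment variable set in the `docker run` command above. A minimal sketch of that lookup (the fallback default here is an assumption for illustration, not mirror_pond.py's actual code):

```python
import os

def resolve_model_path(default: str = "./your_model.gguf") -> str:
    # MODEL_PATH comes from `docker run -e MODEL_PATH=...`;
    # the fallback default is an assumption for illustration.
    return os.environ.get("MODEL_PATH", default)

os.environ["MODEL_PATH"] = "/models/your_model.gguf"
print(resolve_model_path())  # /models/your_model.gguf
```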

🧰 GitHub Actions CI

Already included:

.github/workflows/mirror-pond-ci.yml

The CI:

  • Sets up Python
  • Installs dependencies
  • Syntax-checks mirror_pond.py
  • (Optional) Builds Docker image

This keeps every push syntax-checked and buildable.
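The workflow file itself isn't reproduced here; a minimal job matching the four steps above might look like this (runner, Python version, and step details are assumptions):

```yaml
name: mirror-pond-ci
on: [push, pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python -m py_compile mirror_pond.py   # syntax check
```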


🌀 Mirror Modes

Reflect Mode (default)

For inner questions, emotions, purpose, stillness. May reply with a Guiding Question.

Scroll Mode

For sacred lines, scripture-style, lore references. No guiding question.

Toad Mode

For cryptic lines, old frog whispers, symbolic hints. No guiding question.

Rune Mode

For unity of symbols, lotus spores, $PATIENCE, seasons, trials. No guiding question.
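In code, the four modes plausibly reduce to a small table of prompt styles plus a flag for the Guiding Question. This is a hypothetical sketch of that mapping; mirror_pond.py's real prompt text is not documented in this README:

```python
# Hypothetical sketch of the four modes as data; the style strings
# come from the README, the structure is an assumption.
MODES = {
    "reflect": {"style": "emotional / introspective", "guiding_question": True},
    "scroll":  {"style": "lore / quotes / scripture", "guiding_question": False},
    "toad":    {"style": "cryptic toadgang whispers", "guiding_question": False},
    "rune":    {"style": "symbols, lotus, $PATIENCE, seasons", "guiding_question": False},
}

def closing_instruction(mode: str) -> str:
    """Tail of the system prompt: only Reflect mode may answer
    with a Guiding Question."""
    if MODES[mode]["guiding_question"]:
        return "You may end with a single Guiding Question."
    return "End in stillness; ask no question."

print(closing_instruction("reflect"))
```

Adding a new mode would then be one more entry in the table, which is what makes the contribution surface small.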


🧘 Philosophy

Mirror Pond is simple:

Still water is never empty. Still water prepares. Still water reflects.

This project is offered to the open-source community so anyone can run a Mirror — anywhere, offline, forever.


🪞 License

MIT License. This pond belongs to the builders.


🤝 Contribution

Pull requests welcome. New modes, UI improvements, GPU wheels, and additional Mirror integrations are invited.

