Local LLM Hub

Run LLMs on your own hardware. Find the right launcher, engine, and configuration for your setup.

NVIDIA-first. Mac-strong. Pick your GPU, get your stack.

Quick Start: AMD RX 7900 XT (16GB VRAM)

Beginner · GGUF
ollama + llama.cpp
Quant: Q4_K_M to Q5_K_M
ROCm is supported; setup is more complex than on NVIDIA.
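A minimal sketch of the beginner stack, assuming the ollama server is already running with ROCm and the official `ollama` Python client is installed (`pip install ollama`). The model tag is a placeholder; substitute any Q4_K_M or Q5_K_M quant from your own `ollama list`.

```python
# Minimal sketch: query a local Q4_K_M quant through the ollama Python client.
# Assumes `ollama serve` is running and the model tag below has been pulled.
import ollama

MODEL = "llama3.1:8b-instruct-q4_K_M"  # placeholder tag; use one from `ollama list`

response = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "In one sentence, what is the GGUF format?"}],
)
print(response["message"]["content"])
```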
Power · GGUF
text-generation-webui + llama.cpp
Quant: Q5_K_M
ROCm environment setup is required.
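For the power stack, text-generation-webui can expose an OpenAI-compatible endpoint when launched with its `--api` flag. The sketch below assumes that flag and the default local port 5000; adjust the URL to your setup.

```python
# Minimal sketch, assuming text-generation-webui was started with --api
# and its OpenAI-compatible endpoint is listening on localhost:5000.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello from an RX 7900 XT."}],
        "max_tokens": 128,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```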
2 local LLM tools

| Name | Role | Backends | Formats | Score | Install |
|------|------|----------|---------|-------|---------|
| GGUF (GPT-Generated Unified Format for efficient LLM storage) | Format | cuda, metal, rocm... | | - | |
| safetensors (safe and fast tensor serialization format by Hugging Face) | Format | cuda, metal, rocm... | | - | |
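As a quick way to see what each format stores, the sketch below inspects one file of each kind with the `gguf` and `safetensors` PyPI packages. The file paths are placeholders for models already on disk, and the safetensors half assumes PyTorch is installed.

```python
# Minimal sketch: peek at GGUF metadata and safetensors tensor headers.
# Paths are placeholders; both `gguf` and `safetensors` are on PyPI.
from gguf import GGUFReader
from safetensors import safe_open

# GGUF is a single self-describing file: architecture and quantization
# details live in its key/value metadata fields.
reader = GGUFReader("model-Q4_K_M.gguf")  # placeholder path
for field_name in list(reader.fields)[:5]:
    print(field_name)

# safetensors is a flat tensor store: its header only carries tensor
# names, dtypes, and shapes (framework="pt" assumes PyTorch is present).
with safe_open("model.safetensors", framework="pt") as f:  # placeholder path
    for name in list(f.keys())[:5]:
        print(name, f.get_slice(name).get_shape())
```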