API
Overview
RuntimeDog provides a simple JSON API for programmatic access to runtime data. The API returns static JSON and does not require authentication.
Endpoint
GET /api/runtimes.json
Returns a JSON object containing all runtimes with their metadata and scores.
View live endpoint →

Response Format

```json
{
  "version": "1.0",
  "generated": "2026-01-18T00:00:00.000Z",
  "count": 10,
  "runtimes": [
    {
      "id": "wasmtime",
      "name": "Wasmtime",
      "tagline": "Fast, secure WebAssembly runtime",
      "type": "wasm",
      "execution": "aot",
      "interface": "cli",
      "languages": ["Rust", "C", "C++", ...],
      "isolation": "process",
      "maturity": "production",
      "performance": {
        "cold_start_ms": 1,
        "memory_mb": 5,
        "startup_overhead_ms": 0.5
      },
      "score": 88,
      "license": "Apache-2.0",
      "website": "https://wasmtime.dev",
      "github": "https://github.com/...",
      "docs": "https://docs.wasmtime.dev"
    },
    ...
  ]
}
```

Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier (URL-safe) |
| name | string | Display name |
| type | string | One of: language, wasm, container, microvm, edge, serverless |
| execution | string | One of: interpreted, jit, aot, hybrid |
| score | number | RuntimeScore (0-100) |
| performance | object | cold_start_ms, memory_mb, startup_overhead_ms |
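For TypeScript consumers, the response shape implied by the table and example above can be modeled roughly as follows. This is a sketch based only on the fields shown here; additional or optional fields may exist.

```ts
// Approximate shape of /api/runtimes.json, inferred from the documented fields.
interface RuntimePerformance {
  cold_start_ms: number;
  memory_mb: number;
  startup_overhead_ms: number;
}

interface Runtime {
  id: string;          // unique, URL-safe identifier
  name: string;        // display name
  tagline: string;
  type: 'language' | 'wasm' | 'container' | 'microvm' | 'edge' | 'serverless';
  execution: 'interpreted' | 'jit' | 'aot' | 'hybrid';
  interface: string;
  languages: string[];
  isolation: string;
  maturity: string;
  performance: RuntimePerformance;
  score: number;       // RuntimeScore, 0-100
  license: string;
  website: string;
  github: string;
  docs: string;
}

interface RuntimesResponse {
  version: string;
  generated: string;   // ISO 8601 timestamp
  count: number;
  runtimes: Runtime[];
}
```

These types can also be used to annotate the fetch result in the usage example below.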
Usage Example
```js
// Fetch all runtimes
const res = await fetch('https://runtimedog.com/api/runtimes.json');
const data = await res.json();
// Filter by type
const wasmRuntimes = data.runtimes.filter(r => r.type === 'wasm');
// Sort by score
const topRated = [...data.runtimes].sort((a, b) => b.score - a.score);
```

Local LLM API
Dedicated endpoints for local LLM tools and stack recommendations.
GET /api/local-llm.json
Returns all local LLM tools (launchers, engines, formats, backends).
View live endpoint →

```json
{
  "count": 25,
  "runtimes": [
    {
      "id": "ollama",
      "name": "Ollama",
      "role": "launcher",
      "localFitScore": 95,
      "backends": ["cuda", "metal", "rocm", "cpu"],
      "formats": ["gguf"],
      "install": { "mac": "brew install ollama", ... },
      ...
    },
    ...
  ]
}
```
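As with the main endpoint, the data can be fetched and filtered client-side. A minimal sketch follows; the type only covers the fields shown in the example above, and real responses may include more.

```ts
// Partial type covering only the fields used below (sketch, not exhaustive).
type LocalLlmTool = {
  id: string;
  role: string;
  localFitScore: number;
  backends: string[];
};

const res = await fetch('https://runtimedog.com/api/local-llm.json');
const data: { count: number; runtimes: LocalLlmTool[] } = await res.json();

// Launchers with CUDA support, ranked by localFitScore (highest first)
const cudaLaunchers = data.runtimes
  .filter((r) => r.role === 'launcher' && r.backends.includes('cuda'))
  .sort((a, b) => b.localFitScore - a.localFitScore);

console.log(cudaLaunchers.map((r) => r.id));
```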
GET /api/local-stacks.json
Returns pre-configured stacks by hardware target (NVIDIA/Mac/CPU/AMD).
View live endpoint →

```json
{
  "count": 4,
  "targets": ["nvidia", "mac", "cpu", "amd"],
  "stacks": [
    {
      "target": "nvidia",
      "description": "NVIDIA GPU users...",
      "bands": [
        {
          "vram_gb": 8,
          "label": "8GB VRAM (RTX 3060/3070)",
          "recipes": [
            {
              "name": "Beginner",
              "launcher": "ollama",
              "engine": "llama.cpp",
              "formats": ["gguf"],
              "quant_hint": "Q4_K_M",
              "install_steps": ["curl ...", "ollama pull ..."],
              "notes": "7B models run comfortably"
            },
            ...
          ]
        },
        ...
      ]
    },
    ...
  ]
}
```
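A sketch of selecting a recipe for a specific hardware target and VRAM budget; the types cover only the fields shown above, and the 8 GB figure is just an example value.

```ts
// Partial types for /api/local-stacks.json, based on the example response above.
type Recipe = { name: string; launcher: string; engine: string; quant_hint: string };
type Band = { vram_gb: number; label: string; recipes: Recipe[] };
type Stack = { target: string; description: string; bands: Band[] };

const res = await fetch('https://runtimedog.com/api/local-stacks.json');
const data: { stacks: Stack[] } = await res.json();

// Pick the NVIDIA stack, then the largest band that fits an 8 GB card
const nvidia = data.stacks.find((s) => s.target === 'nvidia');
const band = nvidia?.bands
  .filter((b) => b.vram_gb <= 8)
  .sort((a, b) => b.vram_gb - a.vram_gb)[0];

console.log(band?.recipes.map((r) => `${r.name}: ${r.launcher} + ${r.engine} (${r.quant_hint})`));
```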
Notes
- No authentication required
- No rate limits (please be reasonable)
- Data is updated periodically
- CORS enabled for browser access