Ollama
Get up and running with large language models locally
Grade: D (Score: 49/100)
Type
- Execution: hybrid
- Interface: CLI
About
Ollama runs large language models locally with minimal setup. It bundles model weights, configuration, and data into a single package defined by a Modelfile. It supports macOS, Linux, and Windows, with automatic GPU detection for NVIDIA and Apple Silicon.
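A minimal Modelfile sketch, for illustration only (the base model, parameter value, and system prompt are assumptions, not taken from this card):

```
# Build a custom model on top of a published base model
FROM llama3.2

# Sampling temperature (illustrative value)
PARAMETER temperature 0.7

# System prompt packaged into the resulting model
SYSTEM "You are a concise technical assistant."
```

A model defined this way is packaged with ollama create <name> -f Modelfile and then started with ollama run <name>.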
Performance
- Cold start: 1000 ms
- Base memory: 500 MB
- Startup overhead: 100 ms
✓ Last Verified
- Date: Jan 18, 2026
- Version: 0.5.4
- Method: install success (brew install + ollama run llama3.2 verified on macOS)
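The recorded verification method corresponds to this shell session (output elided; llama3.2 is the model named above):

```sh
# Install the Ollama CLI and server via Homebrew (macOS)
brew install ollama

# Pull llama3.2 on first use, then open an interactive prompt
ollama run llama3.2
```

ollama run downloads the model automatically if it is not already present locally.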
Languages: Python, JavaScript, Go
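A minimal sketch of programmatic use, assuming the official Python client (pip install ollama) and a running local server with llama3.2 already pulled; the prompt text is illustrative:

```python
import ollama  # official Python client for a local Ollama server

# Send a single chat turn to a locally available model.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "What does a Modelfile do?"}],
)

# The reply text lives under message.content
print(response["message"]["content"])
```

The JavaScript and Go clients expose equivalent chat and generate calls against the same local HTTP API (default port 11434).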
Details
- Isolation: process
- Maturity: production
- License: MIT