A side-by-side comparison of four runtimes: Python (CPython), llama.cpp, Docker, and Text Generation Inference (TGI).
| Metric | Python (CPython) | llama.cpp | Docker | Text Generation Inference |
|---|---|---|---|---|
| Score | B (71) | C+ (63) | C− (54) | F (39) |
| Type | Language | | Container | |
| Execution | interpreted | AOT | hybrid | hybrid |
| Interface | CLI | CLI | CLI | API |
| Cold Start | 50ms | 100ms | 500ms | 10000ms |
| Memory | 15MB | 50MB | 50MB | 2000MB |
| Startup | 10ms | 10ms | 200ms | 5000ms |
| Isolation | process | process | container | container |
| Maturity | production | production | production | production |
| Languages | Python | C, C++ | Any | Rust, Python |
| License | Other | MIT | Apache-2.0 | Apache-2.0 |
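The Startup and Cold Start figures above depend heavily on hardware and configuration. As a rough sanity check, a sketch like the following (the `measure_startup` helper is hypothetical, not part of any of these projects) times how long a process takes to launch and exit, which is one simple way to reproduce the Startup row for a local runtime:

```python
import statistics
import subprocess
import sys
import time


def measure_startup(cmd, runs=5):
    """Time how long a command takes to start and exit, in milliseconds.

    Returns the median over `runs` samples to damp outliers.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)


if __name__ == "__main__":
    # A bare CPython process that does nothing; numbers vary by machine.
    ms = measure_startup([sys.executable, "-c", "pass"])
    print(f"python -c pass: {ms:.1f} ms")
```

The same helper could be pointed at `docker run --rm <image> true` or a server health-check wrapper to approximate container and API cold starts, though server-based runtimes like TGI are better measured from launch to first successful request.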