| Metric | AWS Lambda | llama.cpp | LLM (Python CLI) | Text Generation Inference |
|---|---|---|---|---|
| Score | B- (65) | C+ (63) | D (49) | F (39) |
| Type | Serverless platform | Inference engine | CLI tool | Inference server |
| Execution | Hybrid | AOT | Hybrid | Hybrid |
| Interface | Platform | CLI | CLI | API |
| Cold Start | 200ms | 100ms | 500ms | 10,000ms |
| Memory | 128MB | 50MB | 100MB | 2,000MB |
| Startup | 100ms | 10ms | 100ms | 5,000ms |
| Isolation | MicroVM | Process | Process | Container |
| Maturity | Production | Production | Stable | Production |
| Languages | JavaScript, TypeScript, Python, Java, Go, Ruby, .NET, Rust | C, C++ | Python | Rust, Python |
| License | Proprietary | MIT | Apache-2.0 | Apache-2.0 |
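Startup figures like those in the table can be checked locally with a simple wall-clock measurement: time how long it takes a process to launch and exit. A minimal sketch, assuming only the Python standard library (the command measured here, the Python interpreter itself, is illustrative, not one of the runtimes above):

```python
import subprocess
import time

def measure_startup_ms(cmd, runs=3):
    """Return the best-of-N wall-clock time (ms) to start a process and exit.

    Best-of-N reduces noise from OS caching and scheduler jitter.
    """
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        best = min(best, (time.perf_counter() - t0) * 1000)
    return best

if __name__ == "__main__":
    # Example: startup latency of a bare Python interpreter
    print(f"python startup: {measure_startup_ms(['python3', '-c', 'pass']):.0f} ms")
```

Note that this captures process startup only; "cold start" for the serverless and server-based entries additionally includes environment provisioning and model loading, which is why their numbers are much larger.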