AI launchers and inference engines for local LLM deployment.
| Name | Description | Role | Type | Exec | Languages | Score | Cold Start (ms) | Memory (MB) |
|---|---|---|---|---|---|---|---|---|
| ONNX Runtime | Cross-platform, high-performance ML inferencing and training accelerator | Interop | engine | hybrid | Python, C++, C#, ... | C- | 500 | 300 |
| MLC LLM | Machine Learning Compilation for LLMs | Interop | engine | aot | Python, C++ | F | 2000 | 500 |
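As a rough illustration of what using one of these engines looks like locally, below is a minimal ONNX Runtime inference sketch in Python. The model path `model.onnx`, the input shape, and the dtype are placeholders (assumptions, not taken from the table) and must match whatever model you actually export.

```python
# Minimal ONNX Runtime inference sketch (CPU-only).
# "model.onnx" and the input shape/dtype below are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's declared input so the feed dict uses the right tensor name.
input_meta = session.get_inputs()[0]
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape

# run(None, ...) returns every model output as a list of numpy arrays.
outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape)
```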