Posts tagged “inference”

1 post found

changelog · 2 min read

OMM — Local AI Model Runner

Run AI models locally with GPU acceleration, an Ollama-compatible API, and a native desktop app. Fast, private, yours.

release · omm · local-ai