ModelReins

Distributed AI job orchestration. The SETI@Home for language models.

ModelReins turns idle machines into an AI compute mesh. You install a worker on any device — your laptop, a Raspberry Pi, a cloud VM — and it picks up AI jobs from a shared queue, runs them against any of 7 supported providers, and reports results back to a single dashboard.

Think of it as SETI@Home, but for language model inference. Instead of scanning radio signals, your machines process completions, summaries, code reviews, and extractions while you sleep. Jobs flow in, workers pick them up, results flow out.

  • 7 providers — Claude, OpenAI, Gemini, Ollama, LM Studio, OpenRouter, 1minAI
  • 1 dashboard — monitor every worker, job, and dollar from one screen
  • $1.47/week — real cost for a team running 312 jobs across mixed providers
  • 0 vendor lock-in — swap providers per job, per route, or per worker
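The headline cost works out to roughly half a cent per job — a quick back-of-envelope check of the numbers above:

```shell
# $1.47/week across 312 jobs ≈ $0.0047 per job
awk 'BEGIN { printf "%.4f\n", 1.47 / 312 }'
# → 0.0047
```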
```sh
npm install -g @mediagato/modelreins
modelreins init
modelreins worker start
```

That gives you a single worker connected to your default provider. From there, dispatch a job:

```sh
modelreins job dispatch --prompt "Summarize this quarter's changelog" --input ./CHANGELOG.md
```

The worker picks it up, runs it, and stores the result. Check the dashboard or pull the result from the CLI:

```sh
modelreins job result <job-id>
```
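For unattended runs, the dispatch/result pair can be wrapped in a small polling script. This is a sketch under two assumptions not stated above — that `job dispatch` prints the new job ID to stdout, and that `job result` exits nonzero while the job is still running — so verify both against `modelreins job --help` before relying on it:

```shell
#!/bin/sh
# Dispatch a job, capture its ID, then poll until the result is ready.
# ASSUMPTIONS: `job dispatch` prints the job ID on stdout, and
# `job result` exits nonzero until the job completes.
JOB_ID=$(modelreins job dispatch --prompt "Summarize this quarter's changelog" --input ./CHANGELOG.md)

until modelreins job result "$JOB_ID" > summary.txt 2>/dev/null; do
  sleep 5   # back off between polls
done

cat summary.txt
```

The same loop works for any job type, since workers report every result back through the shared queue.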

Providers

Compare all 7 providers — cost, speed, local vs cloud. See providers →

MCP Channel

Plug ModelReins into Claude Code, VS Code, or any MCP client. Set up MCP →

Cost Optimization

Run 312 jobs/week for $1.47. Real numbers, real strategies. Optimize costs →