Providers Overview
ModelReins supports 7 providers out of the box. Each worker can be configured to use one or more providers, and the routing system can direct jobs to the best provider based on cost, speed, or capability.
Provider comparison
| Provider | API Key Required | Cost Tier | Best For | Local/Cloud |
|---|---|---|---|---|
| Claude | Yes | $$ | Complex reasoning, code generation | Cloud |
| OpenAI | Yes | $$ | General purpose, embeddings, vision | Cloud |
| Gemini | Yes | $ | Long context, multimodal, free tier available | Cloud |
| Ollama | No | Free | Privacy, offline work, zero cost | Local |
| LM Studio | No | Free | High-volume local, GUI model management | Local |
| OpenRouter | Yes | $–$$$ | Model variety, fallback routing | Cloud |
| 1minAI | Yes | $ | Budget batch processing, simple tasks | Cloud |
Cost tiers
- Free — No API costs. Hardware electricity only. Ollama and LM Studio run entirely on your machine.
- $ — Under $0.01 per typical job. Gemini free tier, 1minAI, and OpenRouter with budget models.
- $$ — $0.01–$0.10 per job. Claude Haiku, GPT-4o-mini, Gemini Pro.
- $$$ — $0.10+ per job. Claude Opus, GPT-4o, Gemini Ultra. Reserved for complex tasks.
Choosing a provider
Start local. Install Ollama, pull llama3.2, and run your first jobs at zero cost. This validates your setup without touching a credit card.
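Assuming Ollama is already installed, the local-first path is a few commands (the `modelreins` invocations mirror the Configuration section below; `llama3.2` is the model tag in Ollama's registry):

```shell
# Pull a small local model with Ollama (no API key needed)
ollama pull llama3.2

# Point ModelReins at the local provider and start a worker
export MODELREINS_PROVIDER=ollama
modelreins worker start --provider ollama --name local-worker
```

Once the worker is running, every job it picks up executes on your machine at zero API cost.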
Add a cloud provider for quality. When you need better output — complex code, nuanced summaries — add Claude or OpenAI. Use routing rules to send only the jobs that need it.
Use OpenRouter as a fallback. It aggregates dozens of models and handles rate limits across providers. Good insurance for production workloads.
Mix and match. A single ModelReins deployment can use all 7 providers simultaneously. Route cheap jobs to local models, complex jobs to Claude, and everything else to the best price/performance ratio on OpenRouter.
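As a sketch of what mixed routing could look like in `modelreins.config.json` — the key names here are illustrative assumptions, not a documented schema; check the provider pages for the actual format:

```json
{
  "providers": ["ollama", "claude", "openrouter"],
  "routing": {
    "default": "openrouter",
    "rules": [
      { "match": { "complexity": "low" },  "provider": "ollama" },
      { "match": { "complexity": "high" }, "provider": "claude" }
    ]
  }
}
```

The idea: cheap jobs stay local, complex jobs go to Claude, and everything else falls through to OpenRouter's price/performance routing.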
Configuration
Every provider is configured through environment variables or the modelreins.config.json file:
```sh
# Set the default provider
export MODELREINS_PROVIDER=ollama

# Or configure per-worker
modelreins worker start --provider claude --name cloud-worker
modelreins worker start --provider ollama --name local-worker
```

See each provider's page for specific setup instructions.
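The same defaults could equivalently live in `modelreins.config.json`; the field names below are assumptions for illustration, chosen to mirror the environment variable and CLI flags above:

```json
{
  "provider": "ollama",
  "workers": [
    { "name": "cloud-worker", "provider": "claude" },
    { "name": "local-worker", "provider": "ollama" }
  ]
}
```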