# Providers: OpenRouter, OpenAI, anything compatible
Codehamr speaks OpenAI Chat Completions and nothing else. Every provider that exposes that endpoint plugs in identically.
The whole guide is one rule and one table.
## The rule
When a profile sets `url:`, codehamr POSTs to `${url}/v1/chat/completions`. Always exactly that suffix.

So the `url:` value is everything before `/v1/chat/completions`:
| Provider | `url:` value | Key |
|---|---|---|
| Ollama (local) | `http://localhost:11434` | none |
| OpenRouter | `https://openrouter.ai/api` | openrouter.ai/keys |
| OpenAI | `https://api.openai.com` | platform.openai.com/api-keys |
| HamrPass | `https://codehamr.com` | sent by email after Polar checkout |
Get the prefix wrong and the first request 404s. When something breaks, this is almost always the bug.
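To make the rule concrete, here is a minimal sketch in stdlib Python. This is not codehamr's own code, and the function names are illustrative; it just derives the endpoint from a profile's `url:` value and fires a one-message smoke test.

```python
import json
import urllib.request

def chat_completions_url(base_url: str) -> str:
    # codehamr always appends exactly this suffix to url:
    return base_url.rstrip("/") + "/v1/chat/completions"

def ping(base_url: str, model: str, key: str = "") -> dict:
    # One-message smoke test; a 404 here usually means the url: value
    # includes too much (or too little) of the path.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    req = urllib.request.Request(
        chat_completions_url(base_url),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    if key:  # Ollama needs no key; everyone else wants a Bearer token
        req.add_header("Authorization", f"Bearer {key}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. ping("http://localhost:11434", "qwen3.6:27b")
```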
## One config, every profile
`.codehamr/config.yaml`:

```yaml
active: local

models:
  local:
    llm: qwen3.6:27b
    url: http://localhost:11434
    key: ""
    context_size: 65536

  openrouter:
    llm: anthropic/claude-sonnet-4
    url: https://openrouter.ai/api
    key: sk-or-v1-...
    context_size: 200000

  openai:
    llm: gpt-5.5
    url: https://api.openai.com
    key: sk-proj-...
    context_size: 200000

  hamrpass:
    llm: hamrpass
    url: https://codehamr.com
    key: hp_...
    context_size: 200000
```
Switch live inside a session with `/models <name>`.
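For orientation, a sketch of how `active:` picks one profile out of `models:`. This assumes PyYAML and is not codehamr's loader; `load_profile` is a hypothetical name.

```python
import yaml  # pip install pyyaml

def load_profile(path: str = ".codehamr/config.yaml") -> dict:
    with open(path) as f:
        cfg = yaml.safe_load(f)
    # active: names the key under models: that is currently in use.
    return cfg["models"][cfg["active"]]

profile = load_profile()
# profile["url"] + "/v1/chat/completions" is where every request goes.
```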
## Per-profile notes
- `local`: no key. `llm:` matches a tag from `ollama list`. Full setup in the Ollama guide.
- `openrouter`: any slug from openrouter.ai/models goes in `llm:`. Mirror the model's context length into `context_size`.
- `openai`: the API host is `api.openai.com`, not `chatgpt.com`. Different products.
- `hamrpass`: the `llm:` value is decorative; the server picks the model. The budget bar in the TUI comes from the `X-Budget-Remaining` response header (see the sketch after this list).
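A hedged sketch of reading that header with stdlib urllib. The header name comes from the note above; the payload and key are placeholders, and this is not the TUI's actual code.

```python
import urllib.request

req = urllib.request.Request(
    "https://codehamr.com/v1/chat/completions",
    data=b'{"model": "hamrpass", "messages": [{"role": "user", "content": "hi"}]}',
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer hp_...",  # your HamrPass key
    },
)
with urllib.request.urlopen(req) as resp:
    # HamrPass reports remaining budget on every response.
    print("budget remaining:", resp.headers.get("X-Budget-Remaining"))
```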
One config, every provider, one switch.