
Providers: OpenRouter, OpenAI, anything compatible

Codehamr speaks OpenAI Chat Completions and nothing else. Every provider that exposes that endpoint plugs in identically.

The whole guide is one rule and one table.

The rule

When a profile sets url:, codehamr POSTs to ${url}/v1/chat/completions. Always exactly that suffix.

So the url: value is the part before /v1/chat/completions:

Provider          url: value                   Key
Ollama (local)    http://localhost:11434       none
OpenRouter        https://openrouter.ai/api    openrouter.ai/keys
OpenAI            https://api.openai.com       platform.openai.com/api-keys
HamrPass          https://codehamr.com         by email after Polar checkout

Get this wrong and the first request 404s. When a new profile fails, this is almost always the bug.
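
If you want to sanity-check a url: value before pointing codehamr at it, hit the endpoint by hand. A minimal sketch in Python, assuming the requests package; the model name is the local profile's, and keyed providers need the Authorization header shown in the comment:

import requests

# The url: value under test, plus the suffix codehamr always appends.
url = "http://localhost:11434"
endpoint = f"{url}/v1/chat/completions"

resp = requests.post(
    endpoint,
    # Keyed providers: headers={"Authorization": "Bearer <your key>"}
    json={
        "model": "qwen3.6:27b",  # should match the profile's llm: value
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
print(resp.status_code)  # 404 here means the url: value is wrong

A 200 with a JSON body means the profile will work as written.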

One config, every profile

.codehamr/config.yaml

active: local

models:
    local:
        llm: qwen3.6:27b
        url: http://localhost:11434
        key: ""
        context_size: 65536

    openrouter:
        llm: anthropic/claude-sonnet-4
        url: https://openrouter.ai/api
        key: sk-or-v1-...
        context_size: 200000

    openai:
        llm: gpt-5.5
        url: https://api.openai.com
        key: sk-proj-...
        context_size: 200000

    hamrpass:
        llm: hamrpass
        url: https://codehamr.com
        key: hp_...
        context_size: 200000

Switch live inside a session with /models <name>.
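
To make active:, url:, and the fixed suffix concrete, here is a small sketch of the lookup (not codehamr's actual code, just the same resolution expressed in Python, assuming PyYAML):

import yaml

with open(".codehamr/config.yaml") as f:
    cfg = yaml.safe_load(f)

# active: names one key under models:; the rule above does the rest.
profile = cfg["models"][cfg["active"]]
endpoint = f"{profile['url']}/v1/chat/completions"
print(profile["llm"], "->", endpoint)
# With the file above: qwen3.6:27b -> http://localhost:11434/v1/chat/completions

/models <name> does nothing more exotic than change which key that lookup reads.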

Per-profile notes

  • local, no key. llm: matches a tag from ollama list. Full setup in the Ollama guide.
  • openrouter. Any slug from openrouter.ai/models goes in llm:. Mirror the model's context length into context_size.
  • openai. The API host is api.openai.com, not chatgpt.com. Different products.
  • hamrpass. The llm: value is decorative; the server picks the model. The budget bar in the TUI comes from the X-Budget-Remaining response header (see the sketch after this list).
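
The budget header is readable outside the TUI too. A hedged sketch, again assuming the requests package; the header name comes from this guide, and the key placeholder mirrors the config above:

import requests

resp = requests.post(
    "https://codehamr.com/v1/chat/completions",
    headers={"Authorization": "Bearer hp_..."},  # your real HamrPass key
    json={
        "model": "hamrpass",  # decorative; the server picks the model
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
print(resp.headers.get("X-Budget-Remaining"))  # what the TUI budget bar shows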

One config, every provider, one switch.