Local
Built from scratch for local LLM coding. A ~30B-class model like qwen3.6:27b codes smoothly on a beefy MacBook or 32 GB+ of VRAM. No GPU? That's what HamrPass is for.
simplicity over complexity
Built by people who like quiet tools. Bring your own key, run it locally, or grab a prepaid HamrPass. No subscription, no login.
we love local
We speak the standard OpenAI-compatible API that model servers all share. Nothing more, nothing less.
pick the right tool
free, DIY, or curated.
Built from scratch for local LLM coding. A ~30B-class model like qwen3.6:27b codes smoothly on a beefy MacBook or 32 GB+ of VRAM. No GPU? That's what HamrPass is for.
Any OpenAI-compatible endpoint works. OpenRouter, your own proxy, your models, your bill. Flexible where it counts, but we avoid proprietary endpoints that lock you in.
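Every OpenAI-compatible endpoint accepts the same chat-completions request shape, which is why swapping providers is just a URL and key change. A minimal sketch — the URL, key, and model name are placeholders, not codehamr internals:

```shell
# The request body every OpenAI-compatible endpoint understands.
# Model name here is illustrative; use whatever your endpoint serves.
BODY='{"model":"qwen3.6:27b","messages":[{"role":"user","content":"hello"}]}'
echo "$BODY"

# Sending it is the same call everywhere — only URL and KEY change:
# curl -s "$URL/v1/chat/completions" \
#   -H "Authorization: Bearer $KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Point `$URL` at Ollama, OpenRouter, or your own proxy; the payload does not change.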
Prepaid. No login, no auto-renew, no subscription. Burn tokens when you need them. We did the model hunt for you. Optimized for codehamr, ready out of the box.
Watch for launch ↗

setup
install
curl -fsSL https://raw.githubusercontent.com/codehamr/codehamr/main/install.sh | bash
The config is generated at .codehamr/config.yaml in your working directory on first start. Yours to edit.
.codehamr/config.yaml
active: local
models:
  local:
    llm: qwen3.6:27b
    url: http://host.docker.internal:11434
    key: ""
    context_size: 64000
  openai:
    llm: gpt-5.5
    url: https://api.openai.com
    key: sk-...
    context_size: 128000
  hamrpass:
    llm: hamrpass
    url: https://codehamr.plaxtoris.de
    key: hp_...
    context_size: 512000
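The `active` key selects which profile under `models:` is used. Editing it by hand works; a one-liner does too. A sketch — the stand-in file below mirrors only the first line of the real config:

```shell
# Stand-in for .codehamr/config.yaml (first line only, for illustration).
printf 'active: local\n' > config.yaml

# Flip the active profile; 'hamrpass' must match a key under models:.
sed 's/^active: .*/active: hamrpass/' config.yaml > config.yaml.new && mv config.yaml.new config.yaml
cat config.yaml
```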
Project rules or custom system prompts? They go into .codehamr/CODEHAMR.md. Optional, additive, one file. The single point of adjustment.
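What goes in that file is up to you. A hypothetical example — the rules below are illustrative, not a required format:

```markdown
# CODEHAMR.md

- Prefer small, focused functions; no file over 300 lines.
- All new code ships with tests.
- Use the project's existing logger, never print statements.
```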
install
Get the binary. Run it locally. Pick up a HamrPass when local can't keep up.