Quickstart: Windows 11

The simplest way to run codehamr on Windows 11. Ollama Desktop serves the model, codehamr runs in Command Prompt, and that's it.

No WSL, no Docker, no PowerShell tricks. For a sandboxed setup, see the WSL2 or devcontainer guides.

1. Install Ollama Desktop

Download it from ollama.com/download and run the installer. The app starts a background service on http://127.0.0.1:11434.

In the Ollama app, open Settings and set Context length to 64k or more, depending on your machine. The default 4k context silently truncates the long prompts coding agents send.
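Before moving on, you can confirm the service is answering by hitting its root endpoint (the || echo fallback only fires if nothing is listening):

```shell
curl -s http://127.0.0.1:11434/ || echo Ollama is not reachable
```

A healthy install replies with a short plain-text status message.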

2. Pull a model

Open Command Prompt (Win + R, type cmd, Enter):

ollama pull qwen3.6:27b

The download is about 17 GB, and you need 32 GB+ of unified memory or a 24 GB+ GPU to run it. With less than that, pick a smaller tag or grab a HamrPass instead.
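Once the pull finishes, you can confirm the model landed locally. ollama list prints each downloaded model with its tag and size (the || echo fallback only fires if the ollama CLI isn't on your PATH):

```shell
ollama list || echo ollama is not on PATH
```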

3. Install codehamr

Same CMD window:

curl -fsSL https://codehamr.com/install.cmd -o install.cmd && install.cmd

Close and reopen CMD so the new PATH takes effect.
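To confirm the fresh window picked up the updated PATH, where (a CMD built-in) should print the install location of the codehamr executable:

```shell
where codehamr || echo codehamr is not on PATH
```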

4. Point codehamr at Ollama

Create .codehamr\config.yaml in your project folder:

active: local

models:
    local:
        llm: qwen3.6:27b
        url: http://localhost:11434
        key: ""
        context_size: 65536

context_size must match the Context length you set in Ollama; anything larger is silently truncated server-side.
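With the config in place, you can exercise the same endpoint codehamr will use. Ollama exposes an OpenAI-compatible API under /v1, so a quick chat-completion request checks both the url and the model name at once (the escaped quotes are CMD-style JSON escaping; the || echo fallback fires if the server is down):

```shell
curl -s http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d "{\"model\": \"qwen3.6:27b\", \"messages\": [{\"role\": \"user\", \"content\": \"hello\"}]}" || echo request failed
```

A JSON response with a choices array means the whole chain works end to end.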

5. Run

codehamr

The first prompt is slower while Ollama loads the model into memory; every prompt after that is fast.
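If you're curious what's loaded, ollama ps shows the resident model, how much memory it's using, and how long it stays loaded before being evicted (the || echo fallback only fires if the CLI isn't on your PATH):

```shell
ollama ps || echo ollama is not on PATH
```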

If something doesn't work

Check the API first:

curl http://127.0.0.1:11434/v1/models
  • JSON with qwen3.6:27b → Ollama is fine; recheck url: in your config.
  • Connection refused → Ollama Desktop isn't running. Start it from the system tray.
  • codehamr: not recognized → open a fresh CMD window so PATH reloads.

That's the whole setup. One app, one command, one config.