Sandbox: VS Code devcontainer
Codehamr runs shell commands written by an LLM. A devcontainer keeps every command inside a Linux container you can throw away. Your project files mount in; the model stays on the host's GPU.
Works the same on macOS, Linux, and Windows (where you can also use WSL2 as the sandbox instead).
What you need
- VS Code with the Dev Containers extension.
- Docker Desktop (macOS, Windows) or Docker Engine (Linux).
- Ollama on the host machine with a model pulled (quick check below). See the Ollama guide or the Windows quickstart.
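Before touching the container, pull the model you plan to reference in the config and make sure the Ollama API answers on the host. A quick check, with qwen3.6:27b standing in for whatever tag you actually use:

ollama pull qwen3.6:27b
ollama list                               # pulled models show up here
curl -s http://localhost:11434/v1/models  # should print JSON on the host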
The layout
Two files inside your project folder:
codehamr-sandbox/
├── .devcontainer/
│   └── devcontainer.json
└── .codehamr/
    └── config.yaml
No Dockerfile, no launch.json, no boilerplate.
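Starting from an empty folder, the scaffold is one command (codehamr-sandbox is just the example name from the tree above; substitute your own project folder):

mkdir -p codehamr-sandbox/.devcontainer codehamr-sandbox/.codehamr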
1. Devcontainer config
.devcontainer/devcontainer.json
{
  "name": "codehamr-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:debian",
  "postCreateCommand": "curl -fsSL https://codehamr.com/install.sh | bash",
  "runArgs": ["--add-host=host.docker.internal:host-gateway"]
}
The runArgs line is only needed on Linux: Docker Desktop on macOS and Windows provides host.docker.internal out of the box, but the extra line is harmless there.
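If you want to confirm the mapping took effect, check the container's hosts file once it's running; --add-host writes the entry there. A sanity check, not a required step:

grep host.docker.internal /etc/hosts   # prints the host-gateway IP, e.g. 172.17.0.1 on Linux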
2. Codehamr config
The one thing different from bare metal: host.docker.internal, not localhost.
.codehamr/config.yaml
active: local
models:
  local:
    llm: qwen3.6:27b
    url: http://host.docker.internal:11434
    key: ""
    context_size: 65536
From inside the container, localhost is the container. host.docker.internal is your host machine.
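You can see the difference from the container's terminal, assuming Ollama is already running on the host:

curl -s http://localhost:11434/v1/models               # fails: nothing listens on 11434 inside the container
curl -s http://host.docker.internal:11434/v1/models    # JSON from Ollama on the host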
3. Open it
Ctrl/Cmd + Shift + P → Dev Containers: Reopen in Container. First boot pulls the base image and installs codehamr (~1 min).
Then in the integrated terminal:
codehamr
If something doesn't work
Run the health check from the container's integrated terminal:
curl -s http://host.docker.internal:11434/v1/models
- JSON back → the host is reachable; recheck url: in your config.
- Connection refused → Ollama isn't running on the host.
- codehamr: command not found → open a fresh terminal so PATH reloads.
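If the connection is refused, it can help to rule out container networking before blaming Ollama: run the same check on the host itself. If the host-side call returns JSON but the container-side one fails (seen mostly on Linux), Ollama may be bound to 127.0.0.1 only; setting OLLAMA_HOST=0.0.0.0 before starting it makes it listen on interfaces the container can reach.

curl -s http://localhost:11434/v1/models   # on the host: JSON here means Ollama itself is fine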
One container, one config, one local model. Done.