Local Ollama inference blocked by OpenShell egress proxy — RFC1918 addresses return 403 regardless of policy
Labels: bug, local-inference, ollama
Environment
- NemoClaw version: 0.1.0
- OpenShell version: 0.0.13
- OpenClaw version: 2026.3.11
- OS (host): Ubuntu 24.04
- Ollama host: Fedora (separate machine, same LAN)
Description
When attempting to use a local Ollama instance as the inference provider, all requests from inside the sandbox are blocked with HTTP 403 Forbidden by the OpenShell egress proxy at 10.200.0.1:3128. This happens regardless of what is added to the network policy YAML.
Steps to reproduce
- Install NemoClaw and complete onboarding (wizard does not offer a local Ollama path at step 4/7 — only cloud models are shown).
- Manually update openclaw.json to point baseUrl at a local Ollama instance.
- Add a custom policy preset allowing the Ollama host and port via openshell policy set.
- From inside the sandbox, run: curl http://:11434/api/tags
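For reference, step 2's manual edit looked roughly like this. The `baseUrl` field name comes from this report, but the rest of the openclaw.json schema is omitted here, and the address shown is the Ollama host LAN IP listed under "What we tried":

```json
{
  "baseUrl": "http://192.168.1.16:11434"
}
```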
Expected behaviour
Request reaches the Ollama instance and returns a model list, as permitted by the policy.
Actual behaviour
HTTP/1.1 403 Forbidden
What we tried
- Direct LAN IP 192.168.1.16:11434 → 403
- Hostname fedora.local:11434 → 403
- nginx reverse proxy on Docker host LAN IP 192.168.1.164:11435 → 403
- nginx on Docker bridge 172.18.0.1:11435 → timeout (network namespace isolation)
- host.openshell.internal:11435 → 403
- ngrok public HTTPS tunnel https://.ngrok-free.dev → 403
- Adding all above hosts to openclaw-sandbox.yaml and pushing via openshell policy set --wait → 403 in all cases
- Setting the no_proxy env var inside the sandbox → bypasses the proxy, but the sandbox network namespace cannot reach the LAN directly
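The no_proxy result in the last bullet matches standard proxy-environment semantics: hosts listed in no_proxy are dialed directly, skipping the proxy entirely, which is why the request then fails on namespace isolation instead of returning 403. A minimal sketch using Python's urllib (the proxy address and Ollama host IP are the ones from this report; the public hostname is an arbitrary example):

```python
import os
import urllib.request

# Standard proxy environment variables; values taken from this report.
os.environ["http_proxy"] = "http://10.200.0.1:3128"  # OpenShell egress proxy
os.environ["no_proxy"] = "192.168.1.16"              # local Ollama host

# getproxies() picks up the proxy; proxy_bypass_environment() reports whether
# a given host would skip it under the no_proxy rules.
print(urllib.request.getproxies()["http"])
print(bool(urllib.request.proxy_bypass_environment("192.168.1.16")))       # bypassed
print(bool(urllib.request.proxy_bypass_environment("registry.npmjs.org"))) # proxied
```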
Root cause hypothesis
The OpenShell proxy appears to have a hardcoded block on RFC1918 private address ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) that operates below the declarative policy YAML layer. Even with explicit allow rules and a confirmed policy version load, the proxy returns 403. Public hostnames (the ngrok tunnel) are also blocked, suggesting the proxy enforces a strict whitelist rather than a blocklist.
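The hypothesized filter can be sketched with Python's ipaddress module. This is a guess at the kind of check the proxy may be applying below the policy layer, not its actual code:

```python
import ipaddress

# The three RFC1918 private IPv4 ranges.
RFC1918_NETS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(host: str) -> bool:
    """True if `host` parses as an IPv4 address in an RFC1918 private range."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostnames (fedora.local, ngrok) would need resolving first
    return addr.version == 4 and any(addr in net for net in RFC1918_NETS)

# Every address tried above falls in a private range, consistent with a blanket 403:
for host in ("10.200.0.1", "192.168.1.16", "192.168.1.164", "172.18.0.1"):
    print(host, is_rfc1918(host))  # all True
```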
Request
- A supported path for routing inference to a local Ollama instance on the same LAN
- A --allow-private or --no-proxy flag for the OpenShell gateway
- Or documentation on the correct policy format for local/private inference endpoints
Reproduction Steps
1. Install NemoClaw and complete onboarding with any cloud model (wizard does not offer a local Ollama path during step 4/7).
2. Manually update openclaw.json to point baseUrl at a local Ollama instance.
3. Add a custom policy preset allowing the Ollama host and port.
4. From inside the sandbox, run: curl http://:11434/api/tags
Environment
- NemoClaw version: 0.1.0
- OpenShell version: 0.0.13
- OpenClaw version: 2026.3.11
- OS (host): Ubuntu 24.04
- Docker version: latest
- Ollama host: Fedora (separate machine, same LAN)
- Ollama version: latest
Debug Output
Checklist
- I confirmed this bug is reproducible
- I searched existing issues and this is not a duplicate