diff --git a/docs/inference/configure.md b/docs/inference/configure.md
index 4b24f958..07b99199 100644
--- a/docs/inference/configure.md
+++ b/docs/inference/configure.md
@@ -82,7 +82,7 @@ $ openshell provider create \
 Use `--config OPENAI_BASE_URL` to point to any OpenAI-compatible server running where the gateway runs. For host-backed local inference, use `host.openshell.internal` or the host's LAN IP. Avoid `127.0.0.1` and `localhost`. Set `OPENAI_API_KEY` to a dummy value if the server does not require authentication.
 
 :::{tip}
-For a self-contained setup, the Ollama community sandbox bundles Ollama inside the sandbox itself — no host-level provider needed. See {doc}`/tutorials/local-inference-ollama` for details.
+For a self-contained setup, the Ollama community sandbox bundles Ollama inside the sandbox itself — no host-level provider needed. See {doc}`/tutorials/inference-ollama` for details.
 :::
 
 Ollama also supports cloud-hosted models using the `:cloud` tag suffix (e.g., `qwen3.5:cloud`).
@@ -189,7 +189,7 @@ A successful response confirms the privacy router can reach the configured backe
 Explore related topics:
 
 - To understand the inference routing flow and supported API patterns, refer to {doc}`index`.
-- To follow a complete Ollama-based local setup, refer to {doc}`/tutorials/local-inference-ollama`.
+- To follow a complete Ollama-based local setup, refer to {doc}`/tutorials/inference-ollama`.
 - To follow a complete LM Studio-based local setup, refer to {doc}`/tutorials/local-inference-lmstudio`.
 - To control external endpoints, refer to [Policies](/sandboxes/policies.md).
 - To manage provider records, refer to {doc}`../sandboxes/manage-providers`.
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index c06126e3..e3f029c2 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -45,7 +45,7 @@ Launch Claude Code in a sandbox, diagnose a policy denial, and iterate on a cust
 :::
 
 :::{grid-item-card} Inference with Ollama
-:link: local-inference-ollama
+:link: inference-ollama
 :link-type: doc
 
 Route inference through Ollama using cloud-hosted or local models, and verify it from a sandbox.
@@ -68,6 +68,6 @@
 
 First Network Policy
 GitHub Push Access
-Inference with Ollama <local-inference-ollama>
+Inference with Ollama <inference-ollama>
 Local Inference with LM Studio <local-inference-lmstudio>
 ```