From 73c123adb126fe39ab7ffbeb0d23491f96c9be43 Mon Sep 17 00:00:00 2001
From: Rugved Somwanshi
Date: Mon, 13 Apr 2026 11:47:17 -0400
Subject: [PATCH] Add OpenClaw docs

---
 4_integrations/index.md    |  1 +
 4_integrations/openclaw.md | 61 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 62 insertions(+)
 create mode 100644 4_integrations/openclaw.md

diff --git a/4_integrations/index.md b/4_integrations/index.md
index 26f011f..e9d92af 100644
--- a/4_integrations/index.md
+++ b/4_integrations/index.md
@@ -15,3 +15,4 @@ We provide guides below for popular tools and are constantly expanding this list
 
 - [Claude Code](/docs/integrations/claude-code)
 - [Codex](/docs/integrations/codex)
+- [OpenClaw](/docs/integrations/openclaw)

diff --git a/4_integrations/openclaw.md b/4_integrations/openclaw.md
new file mode 100644
index 0000000..0e86183
--- /dev/null
+++ b/4_integrations/openclaw.md
@@ -0,0 +1,61 @@
+---
+title: OpenClaw
+description: Use OpenClaw with LM Studio
+index: 3
+---
+
+OpenClaw now supports LM Studio as a native model provider.
+See: [OpenClaw Docs](https://docs.openclaw.ai/providers/lmstudio).
+
+```lms_protip
+Have a powerful LLM rig? Use [LM Link](/docs/integrations/lmlink) to run OpenClaw from your laptop while the model runs on your rig.
+```
+
+### 1) Start LM Studio's local server
+
+Make sure LM Studio is running as a server (default port `1234`).
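+
+If the server is already running, one way to confirm it is reachable is to query LM Studio's OpenAI-compatible `/v1/models` endpoint (this sketch assumes the default port; adjust if you changed it):
+
+```bash
+curl http://localhost:1234/v1/models
+```
+
+A successful response lists the models the server currently exposes.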
+
+You can start it from the app, or from the terminal with `lms`:
+
+```bash
+lms server start --port 1234
+```
+
+### 2) Run OpenClaw with LM Studio as the model provider
+
+Install OpenClaw as normal, or run the OpenClaw onboarding command *(recommended)*:
+
+```bash
+openclaw onboard
+```
+
+and complete the interactive setup with LM Studio as your model provider.
+
+You can also do the onboarding non-interactively with the following command:
+
+```bash
+openclaw onboard \
+  --non-interactive \
+  --accept-risk \
+  --auth-choice lmstudio \
+  --custom-base-url http://localhost:1234/v1 \
+  --lmstudio-api-key "$LM_API_TOKEN" \
+  --custom-model-id qwen/qwen3.5-9b
+```
+
+```lms_protip
+Use a model (and server/model settings) with more than ~50k context length. Tools like OpenClaw can consume a lot of context.
+```
+
+### 3) Set LM Studio as the default memory search provider
+
+To use LM Studio as the embedding model provider for memory search, run the following commands to update the config and restart the OpenClaw gateway:
+
+```bash
+openclaw config set agents.defaults.memorySearch.provider lmstudio
+openclaw gateway restart
+```
+
+If you run into trouble, hop onto our [Discord](https://discord.gg/lmstudio).