Deploy OpenClaw AI agents to Nebius Cloud.
→ Open the Deploy UI at claw.moi
| Path | Method | Inference | Best For |
|---|---|---|---|
| 1. Local Install | npm install -g openclaw | Token Factory | Try it now, zero overhead |
| 2. Docker | docker run pre-built image | Token Factory | Portable, reproducible |
| 3. GPU Serverless | NemoClaw on Nebius GPU | Local model | Custom models, data privacy |
| 4. CPU Serverless | OpenClaw on Nebius CPU | Token Factory | Production, always-on |
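Whichever path you choose, the Token Factory-backed options all assume the API key is exported first. A minimal pre-flight sketch (the variable name is taken from the snippets below; the helper function is illustrative, not part of OpenClaw):

```shell
# Pre-flight sketch: ${VAR:?msg} makes the shell exit with the message when
# VAR is unset or empty, so a misconfigured key fails fast here instead of
# surfacing later as an opaque auth error.
check_key() {
  : "${TOKEN_FACTORY_API_KEY:?export TOKEN_FACTORY_API_KEY before deploying}"
  echo "TOKEN_FACTORY_API_KEY is set"
}

# Example invocation with a placeholder key.
TOKEN_FACTORY_API_KEY={your-key} check_key
```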
Path 1 — Install locally (30 seconds):

```shell
npm install -g openclaw
export TOKEN_FACTORY_API_KEY={your-key}
openclaw init && openclaw gateway --bind loopback --auth token
```

Path 2 — Docker (2 minutes):

```shell
docker run -e TOKEN_FACTORY_API_KEY={your-key} \
  -e TOKEN_FACTORY_URL=https://api.tokenfactory.nebius.com/v1 \
  -e INFERENCE_MODEL=zai-org/GLM-5 \
  -e OPENCLAW_WEB_PASSWORD={your-password} \
  -p 8080:8080 -p 18789:18789 \
  ghcr.io/colygon/openclaw-serverless:latest
```

Path 4 — Nebius CPU Serverless (3 minutes):
```shell
export TOKEN_FACTORY_API_KEY={your-key}
./install-openclaw-serverless.sh
```

| Create Agent | Endpoints |
|---|---|
| ![]() | ![]() |
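Once an agent is deployed, its endpoint can be exercised with a plain HTTP call. A hedged sketch, assuming an OpenAI-compatible chat route; the endpoint URL is hypothetical — substitute the one shown in the Deploy UI:

```shell
# Hypothetical endpoint URL; replace with the one from your deployment.
ENDPOINT="https://your-agent.example.nebius.cloud/v1/chat/completions"
BODY='{"model": "zai-org/GLM-5", "messages": [{"role": "user", "content": "hello"}]}'

# Print the request instead of sending it, so the sketch has no side effects;
# remove the leading `echo` to issue the real call.
echo curl -sS "$ENDPOINT" \
  -H "Authorization: Bearer ${TOKEN_FACTORY_API_KEY:-your-key}" \
  -H "Content-Type: application/json" \
  -d "$BODY"
```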
| Resource | Description |
|---|---|
| Deploy UI | Browser-based deployment wizard with endpoint management |
| Install Scripts | One-command deploy to Nebius serverless |
| Docker Images | Pre-built public images on GHCR |
| Setup Guide | Comprehensive Nebius configuration guide |
```shell
docker pull ghcr.io/colygon/openclaw-serverless:latest   # ~400 MB
docker pull ghcr.io/colygon/nemoclaw-serverless:latest   # ~1.1 GB
```

- nebius-skill — Claude Code skill for managing Nebius infrastructure
- OpenClaw — The open-source AI agent platform
- Token Factory — Nebius managed GPU inference API
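After pulling the images above, their presence can be confirmed locally. A sketch (the helper function is illustrative; it degrades to a hint when Docker is unavailable or the daemon is not running):

```shell
# Sketch: list the pre-built OpenClaw images present locally after the pulls
# above, matching them by repository name.
check_images() {
  if command -v docker >/dev/null 2>&1; then
    docker images --format '{{.Repository}}:{{.Tag}}  {{.Size}}' 2>/dev/null \
      | grep -E 'colygon/(openclaw|nemoclaw)-serverless' \
      || echo "images not pulled yet"
  else
    echo "docker not installed"
  fi
}

check_images
```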
MIT

