Run PicoClaw securely without a dedicated Mac Mini, VPS, or GPU.
Brew your shrimp securely without breaking the bank.
PicoClaw-WebTop gives you a fully functional PicoClaw personal AI assistant in your browser in under 3 minutes — no powerful PC, no Docker on your machine, no GPU required.
Just open this repo in a GitHub Codespace, and you get:
- A complete Ubuntu MATE desktop (WebTop)
- PicoClaw globally installed and ready to run
- ModelRelay built-in and preconfigured with PicoClaw
- Easy model configuration via the WebUI (e.g. OpenAI or Gemini through built-in browser OAuth)
- Persistent volume for your configurations with backup and restore
When you’re ready to go to production, simply move the same Docker setup to your own machine or VPS.
PicoClaw (the core project) is the ultra-lightweight Go version of OpenClaw, connecting LLMs directly to your WhatsApp, Telegram, Slack, Discord, etc. It runs fast, uses <10MB RAM, and can spawn sub-agents and give you a beautiful dashboard.
The only catch? You normally need a dedicated machine.
PicoClaw-WebTop removes that catch completely.
Perfect for:
- Trying PicoClaw risk-free
- Students / hackers / evaluators
- Anyone who wants to “brew their shrimp” securely on free LLM cloud credits / Gemini via Google Antigravity
1. Open this repository in a GitHub Codespace (big green “Code” button → Codespaces → New).
2. In the Codespace terminal (or your local environment if you are not using Codespaces), run `make start` (or `make start-locally-baked` if you prefer a pre-built image).
3. Wait ~60 seconds. When the web desktop URL appears in the Codespace Ports tab, click it. In a localhost environment, just go to http://localhost:3000.
4. Inside the WebTop desktop:
   - PicoClaw's WebUI process is auto-started in a terminal. (If not, start it using the PicoClaw desktop icon.)
   - You should see the `dashboard token` in the terminal. Log in with the token via http://localhost:18800 in WebTop.
   - Go to Credentials > Google Antigravity and click Browser OAuth to sign in with your Google account.
   - Go to Models and set `gemini-flash` as your default model (star it).
   - The `Start Gateway` button in the top-right corner will be enabled. Click it to start.
   - Go to Chat and start chatting with Google Antigravity's Gemini!
   - Alternative: you can use the `modelrelay` model to chat with any LLM provider that ModelRelay supports.
You now have a fully working PicoClaw instance running 100% in the cloud.
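If you want to confirm the individual services are listening before clicking around, a small loop in the Codespace terminal does the trick. This is just a convenience sketch, not part of the Makefile; the ports are the defaults used by this setup (3000 WebTop, 18800 PicoClaw dashboard, 7352 ModelRelay):

```shell
# Probe the WebTop (3000), PicoClaw dashboard (18800) and ModelRelay (7352) ports.
# Adjust the port list if you changed the defaults.
for port in 3000 18800 7352; do
  if curl -sf -m 2 "http://localhost:$port/" -o /dev/null; then
    echo "port $port: responding"
  else
    echo "port $port: not responding yet"
  fi
done
```

Note that a port can report "not responding yet" if the service is up but answers with an error status (e.g. an auth redirect), so treat this as a rough readiness check only.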
- Zero local install — everything runs in the browser via GitHub Codespaces
- Free-tier friendly — works with Google Antigravity's Gemini
- Persistent config — Docker volume backup and restore survive Codespace recreation
- Easy backup/restore — `make backup` / `make restore`
- One-command everything — powerful Makefile + clean `docker-compose.yml`
- Auto-start Ollama — custom init script on WebTop boot (if you want to use cloud credits)
- Colima / local Docker support ready
- Built-in ModelRelay — intelligent API proxy for easy model switching
PicoClaw-WebTop comes pre-integrated with ModelRelay, a lightweight bridge that transforms various LLM providers into a standard, OpenAI-compatible API.
- Unified Interface: Interact with Google Gemini, Anthropic Claude, or any other provider using the standard OpenAI client format.
- Smart Routing: Pre-configured to use `openai/auto-fastest`, automatically selecting the most responsive model for your task.
- Zero-Touch Config: On first boot, WebTop automatically injects ModelRelay into PicoClaw's model list. You just select the `modelrelay` model in the WebUI and start chatting.
- Developer Friendly: Test different LLM backends without changing PicoClaw's model config.

ModelRelay's API runs on http://localhost:7352/v1 inside the WebTop environment and is automatically available to PicoClaw right out of the box. You can add more providers and models via its WebUI at http://localhost:7352.
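Since ModelRelay speaks the OpenAI-compatible API, any OpenAI-style client can hit it directly. As a quick smoke test from a terminal inside WebTop (the `modelrelay` model name comes from this setup; the endpoint paths follow the OpenAI convention, and the guard keeps the snippet harmless when ModelRelay isn't running):

```shell
BASE_URL="http://localhost:7352/v1"
PAYLOAD='{"model":"modelrelay","messages":[{"role":"user","content":"Say hello"}]}'

# Only send the chat request if ModelRelay is actually reachable
if curl -sf -m 2 "$BASE_URL/models" -o /dev/null; then
  curl -s "$BASE_URL/chat/completions" \
       -H "Content-Type: application/json" \
       -d "$PAYLOAD"
else
  echo "ModelRelay not reachable at $BASE_URL"
fi
```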
The WebTop URI is automatically protected — no one else can reach it.
GitHub Codespaces forwards ports privately by default (this is the setting the `make start` command relies on). According to official GitHub documentation:
“All forwarded ports are private by default, which means that you will need to authenticate before you can access the port.”
“Privately forwarded ports: Are accessible on the internet, but only the codespace creator can access them, after authenticating to GitHub.”
- The URL you click in the Ports tab (`https://<your-codespace>-3000.app.github.dev`) is guarded by GitHub authentication cookies.
- These cookies expire every 3 hours — you’ll simply be asked to log in again (super quick).
- If someone tries to open the link in an incognito window, via curl, or from another computer without being logged into your GitHub account, they are redirected to the GitHub login page or blocked.
- You (and only you) can access the full Ubuntu desktop, the browser inside it, Ollama, PicoClaw launcher, and everything else.
- The entire environment runs in an isolated GitHub-managed VM — not on your laptop.
- Codespaces are ephemeral: delete the codespace and everything disappears (except the backed-up volume you control).
- TLS encryption is handled automatically by GitHub.
- The `GITHUB_TOKEN` inside the codespace is scoped only to this repo and expires when you stop/restart.
- We never set the port to “Public” or even “Private to Organization” — it stays strictly private to you.
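If you want to verify (or re-assert) the port visibility yourself, the GitHub CLI that ships inside Codespaces can do it. `gh codespace ports` and its `visibility` subcommand are standard `gh` commands, and `CODESPACE_NAME` is set automatically inside every codespace:

```shell
PORT=3000
if command -v gh >/dev/null 2>&1 && [ -n "${CODESPACE_NAME:-}" ]; then
  # List forwarded ports, then force the desktop port back to private
  gh codespace ports --codespace "$CODESPACE_NAME"
  gh codespace ports visibility "$PORT:private" --codespace "$CODESPACE_NAME"
else
  echo "Not inside a codespace; nothing to check for port $PORT"
fi
```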
Bottom line: This is actually more secure for experimentation than running Docker locally on your personal machine (no accidental exposure, no firewall holes, no persistent processes on your hardware).
For production use we still recommend moving the same Docker image to your own VPS or server with additional hardening (firewall, HTTPS reverse proxy, strong secrets, etc.). This Codespace version is perfect for safe testing and development.
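A minimal sketch of that move, assuming the repo's `docker-compose.yml` and `docker/` directory are all the target needs (the host name is a placeholder, and the commands are echoed as a dry run; remove the `echo` prefixes to actually execute them):

```shell
VPS_HOST="user@your-vps.example.com"   # placeholder: your own server
APP_DIR="picoclaw-webtop"

# Dry run: prints the migration commands instead of executing them
echo rsync -av docker-compose.yml docker/ "$VPS_HOST:~/$APP_DIR/"
echo ssh "$VPS_HOST" "cd ~/$APP_DIR && docker compose up -d"
```

Remember to apply the hardening mentioned above (firewall, HTTPS reverse proxy, strong secrets) before exposing anything publicly.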
Your PicoClaw ID, device pairings, and configuration are persisted in a Docker volume.
The project includes convenient make targets to back up and restore this data in codespace:
```
make backup    # creates backup/picoclaw_config_backup.tar.gz
make restore   # restores from backup/picoclaw_config_backup.tar.gz
```

This is useful for:

- Migrating from GitHub Codespaces to a local machine or VPS
- Testing experimental changes without risking your current setup
- Quickly cloning your working environment into a fresh Codespace or container
1. In your current environment, run `make backup`.
2. Download the generated file: `backup/picoclaw_config_backup.tar.gz`.
3. Place the file in the `backup/` folder of the new environment.
4. Run `make restore`.
💡 Tip: Always back up before making significant changes. The restore process will overwrite the existing volume data, so test in a separate environment first if you're unsure.
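For the curious, the backup/restore targets boil down to archiving and unpacking the config data. Here is the same idea with plain `tar` on a scratch directory (the directory names are illustrative; the real Makefile targets operate on the PicoClaw Docker volume):

```shell
WORK=$(mktemp -d)
mkdir -p "$WORK/config" "$WORK/backup" "$WORK/restored"
echo "device-pairing-state" > "$WORK/config/state.json"

# Back up: archive the config directory (make backup does this for the volume)
tar czf "$WORK/backup/picoclaw_config_backup.tar.gz" -C "$WORK/config" .

# Restore: unpack into a fresh location (make restore overwrites the volume)
tar xzf "$WORK/backup/picoclaw_config_backup.tar.gz" -C "$WORK/restored"

cat "$WORK/restored/state.json"   # → device-pairing-state
```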
Run locally (no Codespaces):

```
make docker-build          # especially if you modified the ./docker/Dockerfile
make start-locally-baked   # start from your locally baked image
```

- GitHub Codespaces free tier has monthly limits (great for testing, less ideal for 24/7, as Codespaces auto-shut down during inactivity)
- Browser desktop has slight latency vs native (expected). You can shut down your codespace and switch to a 4-core machine type if you need better responsiveness or want to run heavier applications.
- More screenshots + video demo
- Full pairing automation scripts
- Pre-built Docker image tags for stable releases
- Community templates (Telegram-only, WhatsApp-only, etc.)
- One-click “deploy to VPS” guide (Railway / Fly.io / cheap VPS)
This is a one-person weekend project right now — every star, issue, or PR helps enormously! Feel free to open issues for bugs or feature requests.
MIT — see LICENSE