A native control room for the OpenClaw AI gateway.
Monitor, trace, chat, and manage your agent — from your phone or Mac.
Hi, I'm Parham — a Manchester-based software developer with 12+ years of experience. Technical Lead at Kitman Labs by day, OpenClaw and AI enthusiast by night.
I've been deep in AI for the last three years, and OpenClaw genuinely impressed me — it was the missing piece for automating my workflows and being dramatically more productive. Here's one of my earlier cron schedules in Google Calendar (it's much crazier now):
But as a technical person myself, I found that onboarding, setup, and the control UI weren't OpenClaw's strongest points. The engine, the brain, how it works — that's extraordinary. The UX? Not so much.
Swift and iOS are my specialty, so I built this. The main reasons:
- Tracing — cron runs produce huge logs. Drilling down to any trace step and asking the agent to investigate a warning or error should be much easier than it is in the web control UI
- Comments on everything — see a memory file that needs updating? Comment on a paragraph. See a trace step that looks wrong? Comment on it. The agent reads your comments and acts. This is the missing piece
- Mobile-first control — pull down to refresh, tap to investigate, chat with your agent while on the go
Watch the 3-minute demo on Google Drive
Core cards: System Health (ring gauges, 15s polling), Commands (12 quick actions with parsed output + AI investigation), Cron Summary, Token Usage (charts, pipeline attribution). Optional cards (Outreach Stats, Blog Pipeline) appear automatically if the gateway provides those endpoints — hidden gracefully otherwise.
Full job list with status badges. Segmented Cron Jobs / History. 24-hour schedule timeline. Detail view with: purpose, model, schedule, stats (avg duration, tokens, success rate), paginated run history. One-tap "Investigate with AI" on errors.
Step-by-step agent traces: system prompts, thinking, tool calls, tool results, responses. Metadata pills (model with provider icon, tokens). Comment on any step — queue comments, batch submit, agent investigates with full session context.
Browse all workspace files. Paragraph-level markdown viewer with Figma-style comments — annotate paragraphs, submit to agent for edits. Skills: browse folder trees, read SKILL.md with comments, view scripts/config read-only. Skill-level comments instruct agent to read create-skill best practices first. Maintenance actions: Full Cleanup, Today Cleanup.
SSE streaming chat with the orchestrator agent. Session-bound (server manages history). Chat bubbles with markdown, timestamps, copy. Auto-scroll, stop button, interactive keyboard dismiss.
Main session hero card with context window ring gauge. Subagent list. Both link to execution traces.
Custom parsed views for: Tail Logs (level-filtered structured entries), Security Audit (severity badges, collapsible findings with fixes), Doctor (collapsible sections, status lines), Status (table sections), Channel Status (probe cards). Raw monospace fallback for others.
Models & Config (provider icons, fallbacks, aliases). Channels (status dots, provider usage bars). Tools & MCP (native tool groups, MCP server detail with lazy-loaded tool lists). All 15 exec commands.
The app communicates with your gateway through a stats server skill that needs to be installed first. This skill exposes the /stats/* endpoints and /stats/exec commands that the app depends on.
Just ask your agent:
"Set me up for the iOS app"
The skill-ios-setup skill will detect your environment, deploy the stats server, configure auto-restart, and walk you through exposing your gateway (nginx, Tailscale, or local network).
Install it first if you don't have it:

```shell
openclaw skills install skill-ios-setup
```

Available on ClawHub — search `skill-ios-setup`.
The skill provides:
- `GET /stats/system` — system health (CPU, RAM, disk)
- `GET /stats/tokens` — token usage analytics
- `POST /stats/exec` — allowlisted command execution (doctor, logs, status, etc.)
- All the admin commands (models-status, channels-list, tools-list, etc.)
Without this skill, only the `/tools/invoke` and `/v1/chat/completions` endpoints will work. The dashboard cards and commands will show errors.
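Once the skill is installed, a quick way to verify it from a terminal is to call the health endpoint directly. This is a minimal sketch: `GATEWAY_URL` and `TOKEN` are placeholders for your own values, and the `RUN_LIVE` guard keeps it a dry run until you opt in.

```shell
#!/bin/sh
# Smoke-test the stats server skill. GATEWAY_URL and TOKEN are placeholders;
# export your real values first. Set RUN_LIVE=1 to actually send the request.
GATEWAY_URL="${GATEWAY_URL:-https://your-server.com:18789}"
STATS_PATH="/stats/system"

# Show the request that would be made, then send it only when opted in.
echo "GET ${GATEWAY_URL}${STATS_PATH}"
if [ -n "${RUN_LIVE:-}" ]; then
  curl -sS "${GATEWAY_URL}${STATS_PATH}" \
    -H "Authorization: Bearer ${TOKEN}"
fi
```

A healthy install returns the same CPU/RAM/disk JSON that backs the System Health card.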
Add these settings to your `openclaw.json`:

```json
{
  "tools": {
    "sessions": { "visibility": "all" },
    "profile": "full",
    "allow": ["exec", "cron", "gateway", "sessions_list", "sessions_history", "memory_get"]
  },
  "gateway": {
    "http": {
      "endpoints": {
        "chatCompletions": { "enabled": true }
      }
    }
  }
}
```

- Clone this repo
- Open `OpenClaw.xcodeproj` in Xcode 16+
- Build and run on a simulator or device (iOS 17+)
- On first launch, enter your gateway URL (e.g. `https://your-server.com:18789`) and Bearer token
- The dashboard loads automatically — pull down to refresh
All communication is direct between your phone and your gateway — no third-party servers, no telemetry, no data collection. Your Bearer token is stored in the iOS Keychain (never in UserDefaults or iCloud). The app makes authenticated HTTPS requests only to the gateway URL you configure. No one else sees your data.
Clean Architecture with MVVM per feature. 135 files, ~11,000 lines.
```
View → LoadableViewModel<T> → Repository protocol → GatewayClientProtocol → URLSession
                                        ↓
                               MemoryCache (actor, TTL)
```
- Swift 6 concurrency: `@Observable`, `@MainActor`, strict `Sendable`
- Design system: `Spacing`, `AppColors`, `AppTypography`, `AppRadius`, `Formatters`
- Shared components: `ModelPill`, `ProviderIcon`, `DetailTitleView`, `CommentSheet`, `CommentInputBar`, `CopyButton`, `ElapsedTimer`, `TokenBreakdownBar`
- One external dependency: MarkdownUI
See CLAUDE.md for the full architecture guide, conventions, and API gotchas.
All requests go to your configured gateway URL with an `Authorization: Bearer <token>` header.
| Method | Path | Purpose |
|---|---|---|
| GET | `/stats/system` | CPU, RAM, disk, uptime |
| GET | `/stats/tokens?period=` | Token usage with model breakdown |
| POST | `/stats/exec` | Run allowlisted commands |
| POST | `/tools/invoke` | Gateway tool calls (cron, sessions, memory) |
| POST | `/v1/chat/completions` | Chat streaming (SSE) + agent prompts |
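As an illustration of the streaming endpoint, here is a hedged curl sketch. The OpenAI-style request body is an assumption based on the `/v1/chat/completions` path; check your gateway's docs for the exact schema. `GATEWAY_URL` and `TOKEN` are placeholders, and the request is only sent when `RUN_LIVE=1`.

```shell
#!/bin/sh
# Sketch of a streaming chat request. The body shape is assumed to be
# OpenAI-compatible; adjust it to your gateway's actual schema.
GATEWAY_URL="${GATEWAY_URL:-https://your-server.com:18789}"
BODY='{"messages":[{"role":"user","content":"Why did last night'\''s cron run fail?"}],"stream":true}'

echo "POST ${GATEWAY_URL}/v1/chat/completions"
if [ -n "${RUN_LIVE:-}" ]; then
  # -N disables output buffering so SSE chunks print as they arrive.
  curl -sS -N "${GATEWAY_URL}/v1/chat/completions" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d "${BODY}"
fi
```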
Full command list
Action commands: `doctor`, `status`, `logs`, `security-audit`, `backup`, `channels-status`, `config-validate`, `memory-reindex`, `session-cleanup`, `plugin-update`
Workspace commands: `memory-list`, `skills-list`, `skill-files`, `skill-read`
Admin commands: `models-status`, `agents-list`, `channels-list`, `tools-list`, `mcp-list`, `mcp-tools`
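These commands go through `/stats/exec`. As a hypothetical example, a `doctor` run might be invoked like this; the `"command"` payload key is a guess, not confirmed by the skill docs, and the guarded curl only fires when `RUN_LIVE=1`.

```shell
#!/bin/sh
# Hypothetical allowlisted-command call. The payload key name is an
# assumption; check the stats-server skill's documentation.
GATEWAY_URL="${GATEWAY_URL:-https://your-server.com:18789}"
PAYLOAD='{"command":"doctor"}'

echo "POST ${GATEWAY_URL}/stats/exec ${PAYLOAD}"
if [ -n "${RUN_LIVE:-}" ]; then
  curl -sS -X POST "${GATEWAY_URL}/stats/exec" \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d "${PAYLOAD}"
fi
```

Commands outside the server's allowlist are rejected, which is what keeps `/stats/exec` safe to expose.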
100% of the code in this repository was generated by AI (Claude Code). Every file, every view, every parser — written through conversation, not by hand. The architecture, patterns, and conventions were designed collaboratively but the implementation is entirely AI-authored.
- iOS 17+ — tab-based navigation, haptic feedback, interactive keyboard dismiss
- macOS 14+ — sidebar navigation, native clipboard, resizable window (min 800x500)
Same codebase, same features. Platform differences handled with `#if os(iOS)` / `#if os(macOS)` guards (~30 lines total).
If this project gets enough traction, the long-term plan is to migrate to Kotlin Multiplatform (KMP) for shared data and business logic layers, expanding to more platforms:
- iOS + macOS — SwiftUI (current, shipping)
- Android — Jetpack Compose
- Shared — Kotlin Multiplatform for networking, repositories, DTOs, and business logic
The `memory_search` tool is available via `/tools/invoke` but requires an embedding provider (OpenAI, Google, Voyage, or Mistral API key) to be configured on the server. Once enabled, semantic search can be added to the Memory tab.
Contributions are welcome. Please open an issue first to discuss what you'd like to change.
MIT
Built with Claude Code by Parham