Open-source enterprise AI workforce platform — Roles · Skills · Tools · Security · Scheduling, all in one place
LinkWork is an open-source enterprise AI agent platform that orchestrates containerized AI workforces on Kubernetes. Define roles, assign declarative skills, connect MCP (Model Context Protocol) tools, enforce security policies — and let multi-agent teams run autonomously in isolated containers.
You can run it like a company: create roles, equip each role with skills, authorize available tools, set security policies, arrange task schedules — then let your AI workers run 24/7 in their own isolated containers, track progress in real time, and automatically intercept high-risk operations for human approval.
Not a chatbot. Not a personal assistant. An enterprise-grade AI team management system.
Before paying AI a salary, give it a role, a skill set, and a security policy.
An AI worker isn't a process running on the host machine. Each AI worker runs in an independent Docker / K8s container with:
- Isolated execution environment — Filesystem, network, and processes fully isolated between workers
- Dedicated resource quotas — CPU and memory allocated on demand, preventing any single worker from crashing the cluster
- Persistent workspace — Task outputs, intermediate state, and long-term memory preserved across sessions
- Fixed skill configuration — Install capabilities like apps — they persist across restarts
- Policy-controlled command boundaries — Policy engine governs what each AI worker can and cannot execute
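The isolation model above can be sketched as a per-worker pod manifest. This is a minimal illustration, not the platform's actual schema; the registry URL, image tag, and field defaults are all assumptions:

```python
# Sketch: what a per-worker container spec might look like (names hypothetical).
# Each AI worker gets its own isolated container with a dedicated resource quota
# and a persistent workspace volume that survives restarts.

def worker_pod_spec(role: str, worker_id: str,
                    cpu: str = "500m", memory: str = "1Gi") -> dict:
    """Build a minimal Kubernetes-style Pod manifest for one AI worker."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": f"linkwork-{role}-{worker_id}",
                     "labels": {"app": "linkwork", "role": role}},
        "spec": {
            "containers": [{
                "name": "agent",
                "image": f"registry.example.com/linkwork/{role}:pinned",  # hypothetical registry
                "resources": {  # dedicated quota: no single worker can starve the cluster
                    "requests": {"cpu": cpu, "memory": memory},
                    "limits":   {"cpu": cpu, "memory": memory},
                },
                # persistent workspace preserved across sessions
                "volumeMounts": [{"name": "workspace", "mountPath": "/workspace"}],
            }],
            "volumes": [{"name": "workspace",
                         "persistentVolumeClaim": {"claimName": f"ws-{worker_id}"}}],
        },
    }

spec = worker_pod_spec("frontend-engineer", "w01")
print(spec["metadata"]["name"])  # linkwork-frontend-engineer-w01
```

Setting requests equal to limits gives each worker a guaranteed, capped slice of the cluster, which is what "preventing any single worker from crashing the cluster" amounts to in K8s terms.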
Manage your AI team like a microservice cluster — fully leveraging the K8s cloud-native ecosystem:
- Smart Scheduling — Atomic scheduling of multiple AI workers, priority-based resource allocation, queuing when busy, releasing when idle
- Execution Isolation — AI reasoning and command execution run separately with clear security boundaries
- Elastic Scaling — Auto scale up/down based on task volume, auto-release resources when idle
- Resource Governance — Per-role resource quotas, preventing any single worker from consuming excessive cluster resources
- Self-healing — Auto-restart on container crash, stale working directories auto-cleaned
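"Queuing when busy, releasing when idle" is classic priority scheduling. The toy scheduler below is purely illustrative (the real platform delegates this to the K8s scheduler and Volcano); lower numbers mean higher priority:

```python
# Illustrative sketch of priority-based scheduling with queueing.
import heapq
import itertools

class WorkerScheduler:
    def __init__(self, capacity: int):
        self.capacity = capacity          # how many workers can run at once
        self.running: set[str] = set()
        self._queue: list = []            # (priority, seq, task_id) min-heap
        self._seq = itertools.count()     # tie-breaker keeps FIFO order within a priority

    def submit(self, task_id: str, priority: int) -> None:
        heapq.heappush(self._queue, (priority, next(self._seq), task_id))
        self._drain()

    def release(self, task_id: str) -> None:
        """A worker finished: free its slot and pull the next queued task."""
        self.running.discard(task_id)
        self._drain()

    def _drain(self) -> None:
        # admit queued tasks, highest priority first, while capacity remains
        while self._queue and len(self.running) < self.capacity:
            _, _, task_id = heapq.heappop(self._queue)
            self.running.add(task_id)

sched = WorkerScheduler(capacity=1)
sched.submit("routine-report", priority=5)
sched.submit("hotfix-review", priority=0)   # queued: cluster is busy
sched.release("routine-report")             # slot freed, highest priority runs next
print(sched.running)  # {'hotfix-review'}
```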
LinkWork breaks down AI capabilities into three governable layers, managed like an App Store:
Role — A complete AI worker definition
Includes persona, job description, available Skills list, and tool permissions. Create a "Frontend Engineer" role, and any AI model instance can start working immediately.
Skills — Installable capability modules
Declaratively defined. Each Skill is independently versioned and injected into the container at build time with a pinned version. "Code Review", "Data Analysis", "Document Writing" are independent Skills that can be mixed and matched across roles.
MCP Tool — Standardized external capability access
Compatible with the Model Context Protocol standard. Database queries, API calls, file operations, browser control — all accessed through a unified tool bus with automatic proxy, auth, and metering.
Role → Skills → Tool — three decoupled layers, freely composable, access-controlled. Enterprise admins decide which roles can use which Skills and tools, rather than the AI installing whatever it wants.
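The access-control rule can be sketched as a role definition whose Skill and tool grants are fixed by the admin. Field names here are hypothetical, not the platform's actual schema:

```python
# Sketch of the three-layer access model: admins declare which Skills and MCP
# tools a Role may use; anything outside that set is denied by construction.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Role:
    name: str
    skills: frozenset = field(default_factory=frozenset)
    tools: frozenset = field(default_factory=frozenset)

    def can_use_tool(self, tool: str) -> bool:
        return tool in self.tools

frontend = Role(
    name="frontend-engineer",
    skills=frozenset({"code-review", "document-writing"}),
    tools=frozenset({"git", "browser-control"}),
)

print(frontend.can_use_tool("git"))        # True
print(frontend.can_use_tool("db-query"))   # False: never granted by the admin
```

Making the role frozen mirrors the governance point: the AI worker cannot extend its own grant set at runtime.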
Behind the marketplace is a complete supply chain governance system — unified management from build, versioning, discovery, to audit:
- Image Factory — Auto-build, security scanning; every role image is traceable and reproducible
- Skills Factory — Online editing, version management, team sharing, and usage analytics
- MCP Factory — Tool registration and discovery, health checks, auth, and usage metering
- Containerized Service Orchestration — Each AI worker runs in its own container, K8s-native scheduling with elastic scaling and self-healing
- AI Role Management — Define job responsibilities and capability boundaries; swap workers without changing roles
- Skills Marketplace — Declarative Skills, version-pinned and embedded at build time
- MCP Tool Bus — Compatible with MCP protocol standard, unified proxy, auth, and usage metering
- Task Orchestration & Real-time Tracking — Dispatch tasks, watch execution via WebSocket streaming, fully observable
- Security Approval Workflow — Risk-tiered policy engine, high-risk operations auto-intercepted, proceed only after human confirmation
- Scheduled Shifts — Cron-driven, AI workers execute on schedule without manual triggering
- Vector Memory — Long-term memory storage, cross-task knowledge accumulation and semantic retrieval
- Multi-model Support — Compatible with OpenAI API standard, freely switch underlying models
AI Agent success isn't just about model capability — execution environment determinism is equally decisive. LinkWork adopts a "One Role, One Image" paradigm: Skills, MCP tools, and security policies are all baked into the container image at build time. Runtime is read-only — no drift, no surprises.
Each role build triggers a complete assembly pipeline:
- Skills injection — Pull the corresponding version of Skills per role config, pin exact version into the image
- MCP config baking — Generate MCP tool descriptor files, written read-only into the image
- Security policy embedding — Security policy files packaged into the image, auto-loaded at startup
- Version snapshot recording — Record exact versions of every Skill and MCP tool, making builds fully reproducible
Config change → must rebuild the image. This is a deliberate design choice: every running AI worker's environment is fully predictable and reproducible.
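A minimal sketch of the version-snapshot step, under assumed names: the build walks the role's Skill list, pins each exact version into a manifest, and aborts on any missing artifact instead of skipping it:

```python
# Sketch of build-time assembly (registry contents are assumed for illustration):
# every Skill version is pinned into a snapshot, and a missing artifact fails
# the build immediately rather than being silently skipped.
import json

SKILL_REGISTRY = {"code-review": "1.4.2", "data-analysis": "2.0.1"}  # assumed catalog

def build_role_image(role: str, skills: list[str]) -> str:
    snapshot = {}
    for skill in skills:
        if skill not in SKILL_REGISTRY:
            # fail fast: surface the problem at build time, not when a worker runs
            raise RuntimeError(f"build aborted: skill {skill!r} not found")
        snapshot[skill] = SKILL_REGISTRY[skill]  # pin the exact version into the image
    return json.dumps({"role": role, "skills": snapshot}, sort_keys=True)

print(build_role_image("analyst", ["code-review", "data-analysis"]))
```

The snapshot is what makes a build reproducible: rebuilding from the same manifest yields the same environment.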
At task startup, the runtime automatically assembles the execution context:
- Skills sync — Pre-installed Skills from the image are synced to the working directory and loaded via the standard path
- Git repo preparation — Auto-pull code to the task branch per config; AI workers operate directly in real code repositories
- Three-tier Prompt strategy — Platform Prompt + Role Prompt + User Soul combine into the complete task background
AI workers don't start from zero — they arrive on the job with full environment and context.
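The three-tier prompt assembly might look like the following. The tier order and section headers are assumptions; only the Platform / Role / User Soul structure comes from the docs:

```python
# Sketch of the three-tier prompt assembly: platform-wide rules first, then the
# role definition, then the user's task-specific context.
def assemble_context(platform_prompt: str, role_prompt: str, user_soul: str) -> str:
    tiers = [
        ("Platform", platform_prompt),  # global rules shared by every worker
        ("Role", role_prompt),          # job description and capability bounds
        ("User Soul", user_soul),       # task-specific background from the user
    ]
    return "\n\n".join(f"## {name}\n{text}" for name, text in tiers)

ctx = assemble_context("Follow security policy.",
                       "You are a frontend engineer.",
                       "Refactor the login page.")
print(ctx.startswith("## Platform"))  # True
```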
Skills configured but Git clone fails → build aborted. MCP configured but generation fails → build aborted. Never silently skipped — problems surface at build time, not when an AI worker runs with broken capabilities.
One Role, One Image: environment as code, versions pinned, builds reproducible, failures exposed early.
For enterprises, AI that can "do stuff" isn't enough — outputs must be deliverable, traceable, and constrained. LinkWork treats output guarantees as a first-class citizen:
Every task has a clear delivery mode:
- Git mode — Auto clone/checkout a working branch before task start, auto commit/push and create Merge Request after completion. Output is code, going through standard Code Review workflows
- OSS mode — Output files auto-archived to object storage, structured by `user_id/task_id`, persistently accessible
Not a chat transcript — engineering deliverables ready to merge and deploy.
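The OSS-mode layout reduces to a deterministic object key. A minimal sketch (the helper name is hypothetical):

```python
# Sketch of the OSS-mode archive layout: outputs are keyed by user_id/task_id,
# so every deliverable stays addressable long after the task finishes.
from pathlib import PurePosixPath

def archive_key(user_id: str, task_id: str, filename: str) -> str:
    """Object-storage key for one task output file."""
    return str(PurePosixPath(user_id) / task_id / filename)

print(archive_key("u42", "t1001", "report.md"))  # u42/t1001/report.md
```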
Every LLM call, every command execution, every tool request — all recorded with timestamps. What the AI did, which Skill it used, which tool it called — fully traceable, meeting compliance and audit requirements.
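An audit trail of this kind is usually an append-only log of timestamped records. The field names below are assumptions for illustration, not the platform's actual schema:

```python
# Sketch of one append-only audit record: every LLM call, command execution,
# and tool request gets a timestamped JSON line that can be replayed later.
import json
from datetime import datetime, timezone

def audit_record(worker: str, kind: str, detail: dict) -> str:
    """One line of the append-only audit log, serialized as JSON."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "worker": worker,
        "kind": kind,        # e.g. "llm_call" | "command" | "tool_request"
        "detail": detail,
    }, sort_keys=True)

line = audit_record("frontend-engineer/w01", "tool_request",
                    {"tool": "git", "action": "push"})
print('"kind": "tool_request"' in line)  # True
```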
All AI behavioral intents must pass through multiple security layers, each inescapable:
- Deep Command Analysis — No simple string matching; the system understands command structure, independently evaluating each sub-command within complex commands. Nesting and disguises are identified and blocked
- AI Workers Are Completely Unaware of the Security Layer — The security proxy is invisible to AI; workers believe they're executing commands directly, fundamentally preventing AI from bypassing the security layer
- Privilege Separation — Security control processes and AI task processes run independently, invisible and inaccessible to each other
- Network Default-deny — AI workers have no external network access by default; only necessary service addresses are whitelisted on demand
- High-risk Operation Approval — Dangerous commands are auto-intercepted; proceed only after human confirmation, timeout defaults to deny
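The "deep command analysis" point can be illustrated with a toy evaluator. The split rules and deny-list here are assumptions, far simpler than a real policy engine, but they show why per-sub-command evaluation catches what whole-line string matching misses:

```python
# Sketch of per-sub-command risk evaluation: split a compound command on shell
# connectors and judge each sub-command on its own, so "safe && dangerous" is
# still caught even though the line starts with a harmless command.
import re
import shlex

HIGH_RISK = {"rm", "dd", "mkfs", "shutdown"}  # assumed deny-list for this sketch

def evaluate(command: str) -> str:
    # split on &&, ||, ; and | into individual sub-commands
    for sub in re.split(r"&&|\|\||[;|]", command):
        tokens = shlex.split(sub)
        if tokens and tokens[0] in HIGH_RISK:
            return f"intercepted: {tokens[0]!r} requires human approval"
    return "allowed"

print(evaluate("ls -la && rm -rf /tmp/build"))  # intercepted: 'rm' requires human approval
print(evaluate("git status"))                   # allowed
```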
Enterprises don't need AI that "probably works" — they need deliverable, auditable, constrained engineering productivity.
```mermaid
graph TB
    User["User / API"]
    Web["linkwork-web<br/>Frontend"]
    Server["linkwork-server<br/>Core Engine"]
    Skills["Skills Engine<br/>Declarative Skills · Version Pinning · Build-time Embed"]
    Gateway["linkwork-mcp-gateway<br/>MCP Tool Proxy"]
    SDK["linkwork-agent-sdk<br/>Agent Runtime"]
    Executor["linkwork-executor<br/>Secure Executor"]
    LLM["LLM Services<br/>OpenAI / Private Models"]
    Tools["MCP Tool Ecosystem"]
    K8s["K8s Cluster<br/>Orchestration · Isolation"]
    User --> Web
    Web -->|"REST / WebSocket"| Server
    Server -->|"Task Dispatch"| SDK
    Server -->|"Skill Orchestration"| Skills
    Server -->|"Tool Routing"| Gateway
    Server -->|"Container Mgmt"| K8s
    Skills -->|"Capability Injection"| SDK
    SDK -->|"LLM Calls"| LLM
    SDK -->|"Command Exec"| Executor
    Gateway --> Tools
    K8s -.->|"Runtime Env"| SDK
    K8s -.->|"Runtime Env"| Executor
```
How it works: User creates a task → Core engine allocates a container in the K8s cluster → Agent runtime starts in an isolated environment → Calls LLM for reasoning, securely executes commands through the executor → MCP gateway proxies external tool calls → Execution status streams back in real time.
Projects like OpenClaw are excellent personal AI assistants — running on your laptop, one Agent handling your daily tasks. LinkWork addresses a different level of the problem:
| | Personal AI Assistants (e.g. OpenClaw) | LinkWork |
|---|---|---|
| Positioning | Personal productivity tool | Enterprise workforce platform |
| Scale | Single user, single Agent | Multi-team, multiple AI workers in parallel |
| Runtime Env | Local single machine | K8s cluster, container isolation |
| Capability Mgmt | Community plugins, self-install | Role → Skill → Tool, three-tier governance |
| Security | Relies on user discretion | Approval workflow + policy engine + audit |
| Deployment | `npm install -g` | K8s |
| Skills Reuse | Personal accumulation, hard to share | Skills proven on personal tools migrate in directly, shared across teams, reliably executed |
Personal assistants solve "my productivity". LinkWork solves "organizational effectiveness". Skills you've refined on personal tools can go straight into LinkWork, becoming standardized capabilities your entire team can use.
| Component | Description | Status |
|---|---|---|
| linkwork-server | Core backend — task scheduling, role management, approvals, Skills & tool registry | Coming soon |
| linkwork-executor | Secure executor — in-container command execution, policy engine | Coming soon |
| linkwork-agent-sdk | Agent runtime — LLM engine, Skills orchestration, MCP integration | Coming soon |
| linkwork-mcp-gateway | MCP tool gateway — tool discovery, auth, usage metering | Coming soon |
| linkwork-web | Frontend reference — task dashboard, role config, Skills marketplace | Coming soon |
LinkWork follows a phased open-source strategy, ensuring each component is independently usable and well-documented:
| Phase | Components | Description | ETA |
|---|---|---|---|
| Phase 1 | linkwork-server | Backend core with full scheduling engine and demo launcher | Late March 2026 |
| Phase 2 | linkwork-executor + linkwork-agent-sdk | Execution layer — secure executor + Agent runtime | Late March 2026 |
| Phase 3 | linkwork-mcp-gateway + linkwork-web | Access layer — MCP tool gateway + frontend reference implementation | End of March 2026 |
All components are planned to be fully open-sourced before April 1, 2026. Watch this repo for updates.
| Document | Description |
|---|---|
| Quick Start | Prerequisites, cloning submodules, launching platform services |
| Deployment Guide | K8s production deployment, Harbor, MySQL, Volcano |
| Extension Guide | Custom roles, Skills, MCP tools, file management, Git projects |
| Workstation Model | Role → Instance → Task model |
| Skills System | Declarative skills, version pinning, build-time injection |
| MCP Tools | Standardized external tool access |
| Harness Engineering | One role, one image |
| Architecture Overview | System context, components, tech stack |
| Example: Literature Tracker | Complete role configuration walkthrough |
Full documentation index: docs/README.md
All components are planned to be fully open-sourced before April 1, 2026. If you're interested in enterprise AI workforce management:
- Star this repo to track progress
- Watch for release notifications
- Feel free to share ideas and suggestions in Issues
LinkWork — Not just an AI assistant. An AI team.