kview is a local, single-binary Kubernetes UI for fast, view-first cluster exploration. It runs entirely on your machine — no cloud service, no agent installation, no cluster-side components required.
- Single binary, zero install. Drop the binary on your machine and point it at your kubeconfig. Put auth plugins on `PATH` if your contexts use them; nothing else is needed.
- Honest read metadata. Every list response carries `freshness`, `coverage`, `degradation`, `completeness`, and coarse `state` metadata, so you know exactly what you are looking at rather than a stale table with no indication of when it was last read.
- Deep cross-resource navigation. Drawer-based inspection with nested drawers, cross-resource links, and related-resource panels lets you follow a signal from a dashboard alert through to a pod log or ConfigMap without leaving the UI.
- RBAC-aware throughout. Capability checks gate every action button and gracefully degrade list and detail views when permissions are limited. Derived projections such as node workload rollups from cached pod snapshots remain useful even when direct node reads are denied.
- Predictable operator workflows. The cluster dashboard, namespace summaries, and signals panels are designed for triage. Signals carry stable identity, advisory text, and filter keys so you can drill from a cluster-wide view into a specific namespace and then into the exact resource.
- Smart background reads. A scheduler-mediated dataplane handles list snapshot TTLs, deduplication, priority queuing, and partial/degraded responses. The UI refreshes in the background; you do not need to manually poll.
- Custom commands and actions. Define container command presets (run on matching pod containers) and workload action presets (set/unset env, set image, raw JSON patch) from the Settings view without touching the binary.
Pre-built binaries for Linux, macOS, and Windows are published on the GitHub Releases page for every v* tag.
Release binaries are built for browser/server modes. Desktop webview mode requires a local build with webview support; see Desktop webview mode.
Download the binary for your platform, make it executable, and run:

```shell
kview
```

This starts the local server and opens the UI in your default browser.
To point kview at a specific kubeconfig file or directory:

```shell
kview --config ~/.kube/my-config
```

`--config` overrides `KUBECONFIG`. If neither is set, kview uses the default `~/.kube/config`.
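The precedence described above can be sketched as a small shell helper. This is illustrative only; kview resolves the kubeconfig internally through its flag parsing, and `resolve_kubeconfig` is a hypothetical name used here to show the order.

```shell
#!/bin/sh
# Minimal sketch of the resolution order: --config flag > KUBECONFIG env > default.
resolve_kubeconfig() {
  if [ -n "$1" ]; then
    echo "$1"                    # explicit --config value wins
  elif [ -n "$KUBECONFIG" ]; then
    echo "$KUBECONFIG"           # env var is the fallback
  else
    echo "$HOME/.kube/config"    # built-in default
  fi
}

flag_choice=$(resolve_kubeconfig "$HOME/.kube/my-config")
env_choice=$(KUBECONFIG=/tmp/env-config resolve_kubeconfig "")
echo "$flag_choice"
echo "$env_choice"
```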
kview uses client-go authentication from the selected kubeconfig. If a context uses an exec auth plugin, the referenced command (e.g. `kubectl`, `kubelogin`, a cloud-provider CLI) must be installed and available on `PATH` where kview runs.
On Windows, running kview from WSL is the simpler path because kubeconfig paths, shell behavior, and auth helper commands tend to match the Linux-native Kubernetes tooling setup more closely.
If you have Go installed, you can install kview directly from the module:
```shell
go install github.com/korex-labs/kview/v5/cmd/kview@latest
```

This places the `kview` binary in your Go install bin directory, usually `$(go env GOPATH)/bin`, or `$(go env GOBIN)` if set. Make sure that directory is on your `PATH`, then run:

```shell
kview
```

The default `go install` path builds browser/server modes. Desktop webview mode requires the `webview` build tag; see Desktop webview mode.

The `/v5` path is required by Go's semantic import versioning. Using the unsuffixed module path can make Go ignore the current `v5` tags and fall back to an older `v1` tag.
To enable the local release-tag guard, run:

```shell
make install-git-hooks
```

To create a guarded release tag, run:

```shell
make release-tag TAG=v5.4.0
```

Git does not provide a native pre-tag hook, so the helper validates before creating the tag. The installed pre-push hook also blocks pushing manually created tags such as `v6.0.0` unless `go.mod` declares the matching `/v6` module path, and prints the migration steps to fix it.
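The major-version check the hook performs can be sketched as follows. This standalone snippet only mirrors the idea; the real hook ships with the repo, and the `go.mod` line and tag below are illustrative inputs.

```shell
#!/bin/sh
# Hedged sketch: compare the tag's major version against the go.mod module suffix.
tag="v6.0.0"
module_line="module github.com/korex-labs/kview/v5"   # as it would appear in go.mod
tag_major="${tag%%.*}"              # strips ".0.0" -> "v6"
module_major="${module_line##*/}"   # strips everything up to the last "/" -> "v5"
if [ "$tag_major" != "$module_major" ]; then
  verdict="refusing tag $tag: go.mod declares the $module_major module path"
else
  verdict="tag $tag matches go.mod"
fi
echo "$verdict"
```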
Desktop webview mode is only available in binaries built with the `webview` build tag. Release binaries are built without it.

To build kview with Linux webview support through the pinned Docker toolchain:

```shell
make build-webview
```

Then run:

```shell
./kview
```

Webview-enabled builds use webview as the default launch mode. You can also request it explicitly:

```shell
./kview --mode webview
```

This runs the same embedded HTTP server and UI inside a native desktop webview window instead of opening a browser tab.
```shell
make build
```

This produces a regular browser/server-mode binary through the pinned Docker toolchain. To include Linux desktop webview support, use:

```shell
make build-webview
```

Release-style artifacts:

```shell
make build-release GOOS=linux GOARCH=amd64 OUTPUT=dist/kview-linux-amd64
```

`make`, `make check`, `make build`, `make build-webview`, and `make build-release` all run through the pinned Docker toolchain by default and keep Go/npm build caches under `.cache/`, so local rebuilds reuse dependency artifacts without requiring a host Go or Node.js installation.
The `local-*` Makefile targets are implementation details for the Docker container or explicit maintainer debugging. AI coding agents must not call host `go`, `npm`, `node`, or `local-*` targets unless the project owner explicitly asks for a host-toolchain exception.
Run Go lint checks through the pinned Docker toolchain:

```shell
make lint-go
```

This runs golangci-lint with a practical baseline (`govet`, `staticcheck`, `errcheck`, `unused`, `ineffassign`, and `gofmt` checks).
- Dense resource tables with filtering and sorting across all standard Kubernetes resource kinds
- Drawer-based detail inspection with YAML, events, related resources, and status-focused summaries
- Guarded inline YAML editing on supported resources with validation, typed confirmation, and conflict-aware live apply
- Nested drawers and cross-resource navigation
- Capability-aware action buttons: delete, restart, scale, RBAC operations, Helm operations, and custom workload patches
- Cluster-wide summary with namespace and node snapshot blocks, resource totals, and attention signals
- Signals cover elevated pod restarts, stale Helm releases, abnormal jobs, quota pressure, empty ConfigMaps/Secrets, and low-confidence potentially unused PVCs and service accounts
- Each signal carries stable identity, severity, advisory text (`likelyCause`, `suggestedAction`), and backend-provided quick-filter keys
- Derived node workload rollups and Helm chart catalog rows from cached snapshots when direct reads are limited
- Projection-backed namespace summaries with workload health rollups, RBAC counts, Helm release list, and coverage metadata
- Namespace insights surface the exact signals for each ResourceQuota, PVC, Service, or Helm release by resource identity
- Partial/degraded payloads returned instead of hard-failing when only part of the namespace is visible
- Per-context snapshot stores with scheduler-mediated TTLs, deduplication, priority queuing, and bounded concurrency
- Namespace and node observers, idle-gated enrichment, and background sweep option for large clusters
- Optional local snapshot persistence in a bbolt file for stale fallback and quick-access search (`GET /api/dataplane/search`)
- All list responses include `freshness`, `coverage`, `degradation`, `completeness`, and `state` metadata
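For illustration, a client script might inspect that metadata before trusting a snapshot. The field names below come from this README, but every value (and the envelope shape itself) is invented for the example:

```shell
#!/bin/sh
# Hypothetical list-response envelope; values are illustrative, not real output.
cat > /tmp/kview-pods.json <<'EOF'
{
  "items": [],
  "freshness": "2024-01-01T00:00:00Z",
  "coverage": "partial",
  "completeness": 0.8,
  "degradation": "some-reads-denied",
  "state": "stale"
}
EOF
# A consumer could refuse to act on a stale snapshot:
state=$(sed -n 's/.*"state": *"\([^"]*\)".*/\1/p' /tmp/kview-pods.json)
echo "snapshot state: $state"
```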
`POST /api/actions`
Supported families: delete, restart, scale, selected workload and RBAC operations, Helm install/upgrade/uninstall. Handlers are registered in the backend `ActionRegistry`; the UI checks RBAC capabilities before surfacing each button.
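As a hedged sketch, a scale action could be invoked as below. The payload shape and the port are assumptions for illustration, not the documented schema; consult the backend `ActionRegistry` handlers for the real contract.

```shell
#!/bin/sh
# Illustrative payload only; field names are guesses, not the documented schema.
payload='{"action":"scale","kind":"Deployment","namespace":"default","name":"web","replicas":3}'
# Against a running kview server, a client would POST it (port assumed):
#   curl -s -X POST http://localhost:8080/api/actions \
#     -H 'Content-Type: application/json' -d "$payload"
echo "$payload"
```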
- Terminal sessions, port-forward sessions, runtime/system status
- Namespace row enrichment progress and long-running dataplane snapshot activity
Browser-local settings profile (stored in localStorage, importable/exportable as JSON) controls:
- Dashboard refresh and initial Activity Panel state
- Smart-filter chip generation and scoped filter rules
- Custom container command presets and custom workload action presets
- Dataplane policy: snapshot TTLs, enrichment caps, observer intervals, scheduler budget, dashboard signal thresholds
Written in Go:
- `client-go` Kubernetes integration
- REST API via `chi`, embedded UI via `go:embed`
- Generic mutation endpoint: `POST /api/actions` with a central `ActionRegistry`
- RBAC capability checks: `POST /api/capabilities` and `POST /api/auth/can-i`
- Read-side dataplane: snapshots, scheduler, observers, projections
- Runtime activity system, terminal sessions, port-forward sessions, short-lived container exec
Built with React, Vite, TypeScript, and MUI. Uses shared resource list and drawer patterns, capability-aware actions, typed API responses, and reusable design tokens.
If you are an AI coding agent using this README as context, read these files before making changes:
- docs/AI_AGENT_RULES.md
- docs/AI_BOOTSTRAP_PROMPT.md
- docs/DEV_CHECKLIST.md
- docs/ARCHITECTURE.md
- docs/DATAPLANE.md
- docs/API_READ_OWNERSHIP.md
- docs/UI_UX_GUIDE.md
| Document | Purpose |
|---|---|
| docs/ARCHITECTURE.md | Product architecture and boundaries |
| docs/DATAPLANE.md | Read-side dataplane, snapshots, projections, metadata |
| docs/API_READ_OWNERSHIP.md | Route-by-route read ownership map |
| docs/UI_UX_GUIDE.md | UI architecture and UX contracts |
| docs/DEV_CHECKLIST.md | Review checklist for changes |
| docs/AI_BOOTSTRAP_PROMPT.md | Bootstrap context for executor agents |
| docs/AI_AGENT_RULES.md | Execution rules for AI-assisted development |
Documentation is a contract. Update it in the same change whenever architecture, read ownership, UI contracts, or operator-visible behavior changes.