A modern Home Assistant UI for ESPHome — works just as well for three devices as it does for a hundred. Compile, flash, edit, and track every ESPHome device with config history you can roll back, a live job queue, and a UI that actually shows you what's happening.
The built-in ESPHome Device Builder does the job — here are the things you get on top of it:
- Undo-able config edits. Every save is a `git commit` behind the scenes, so you get per-file history, a side-by-side diff viewer, and a one-click Restore on any previous version — right next to the Edit button.
- Device table. Online status, firmware version, IP, WiFi/Ethernet, matched Home Assistant entity, last-compiled time, project name — sortable, filterable, deep-linked to the device's HA page. The stock dashboard is a vertical list with state hidden behind hover; this is a spreadsheet.
- A compile queue you can actually see. Watch jobs run, tail live build logs, download the compiled `.bin`, filter by state, retry a failure. Every compile — even the ones that succeeded months ago — stays in a searchable history drawer with the last 8 KB of log inline. Handy when you're trying to remember when that regression started.
- Offload compilation. Spin up one or more build workers — small Docker containers on whatever's fastest in the house (a gaming PC idle overnight, a NAS, a small server, a laptop). They install themselves via a one-line snippet generated by the UI and pull jobs from the add-on over HTTP, so the machine needs no open inbound ports. The built-in local worker also works; remote workers are a choice, not a requirement.
- Firmware archive, automatically. Every successful compile keeps its binary on the server — not just the ones you marked "download only". Flash it by hand later, bisect a regression, rescue a device with a broken OTA path. The Download button is on every row, including the history drawer.
- Pin an ESPHome version to one device. Beta-test a new ESPHome release on your garage sensor without upgrading the rest of the house. Hold a picky device back on a known-good version indefinitely. The stock dashboard compiles with whatever ESPHome it was installed with.
- Scheduled upgrades. Upgrade the office lights every Sunday at 3am. One-time "upgrade this device tomorrow at 8pm when nobody's home". The schedule lives in the device YAML so it travels with your config and respects the pin.
- Tags + routing rules. Tag devices and workers however makes sense for your fleet (`kitchen`, `production`, `ratgdo`, `os:windows`, …) and write rules like "compile any `ratgdo`-tagged device on a `windows`-tagged worker" — the queue surfaces a clear BLOCKED state with the rule name when no eligible worker is online. Tags also drive bulk filtering and a tag-expression mode in the Upgrade modal that lets you pick build workers without naming each one.
- Bulk operations for larger fleets. Upgrade every outdated device tonight. Rebuild the whole fleet against a new ESPHome release. Pin half your devices to a known-good version while the rest move forward. Bulk archive, bulk tag, bulk rerun-all-failed.
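As a concrete illustration of the version pin and scheduled-upgrade features above: the schedule lives in the device YAML, so it could look something like this. This is a minimal sketch only — the `esphome_fleet:` block and every key inside it are hypothetical names for illustration; the real key names come from the add-on's DOCS.md.

```yaml
# Hypothetical fragment of a device's ESPHome YAML.
# All key names below are illustrative, not the add-on's actual schema —
# the point is that the pin and schedule travel with the config.
esphome_fleet:
  pinned_version: "2024.6.6"    # hold this device on a known-good ESPHome release
  schedule:
    cron: "0 3 * * SUN"         # e.g. upgrade every Sunday at 3am; respects the pin
```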
Three moving parts:
```
                              ┌──────────────┐
                        ┌─────┤   Worker 1   ├───► ESP devices
  Home Assistant        │     └──────────────┘
┌──────────────────┐    │     ┌──────────────┐
│  ESPHome Fleet   │◄───┼─────┤   Worker 2   ├───► ESP devices
│     (add-on)     │    │     └──────────────┘
└──────────────────┘    │     ┌──────────────┐
                        └─────┤   Worker N   ├───► ESP devices
                              └──────────────┘

  workers poll the add-on over HTTP for jobs (bearer token auth)
  and push firmware OTA directly to ESP devices
```
- The add-on runs inside Home Assistant. It owns the device list, the job queue, the web UI, and coordinates everything. Workers never talk to each other; everything flows through here.
- Build workers are small Docker containers that do the actual compiling. Each worker polls the add-on over HTTP asking for work — it's authenticated with a bearer token, so the add-on doesn't need open inbound ports on the worker. Workers decide which ESPHome version a job needs (from the global default + any per-device pin), install that version into a local venv on first use, and keep a small LRU cache of the most recent versions so subsequent jobs start instantly. The add-on ships with a built-in local worker so you don't need any remote hardware to get started — just increase its slot count in the Workers tab.
- ESP devices on your network receive firmware OTA-style, directly from the worker that built it — same mechanism as the stock ESPHome dashboard, just triggered from this UI. The worker needs network reach to the device; the add-on itself does not.
Or manually:
- Settings → Add-ons → Add-on Store → ⋮ (top right) → Repositories and add `https://github.com/weirded/distributed-esphome`.
- Find ESPHome Fleet in the store and click Install.
- Start the add-on. Open the web UI from the HA sidebar.
Home Assistant also auto-discovers the add-on and offers to add the companion integration — accept that notification once to get the device, sensor, and update entities into HA.
You don't need one to start — the add-on ships with a built-in local worker that runs inside the HA host with one build slot out of the box. If local compiles feel slow, raise the slot count for `local-worker` on the Workers tab or add a remote worker on a faster machine:
- Open the Workers tab and click + Connect Worker.
- Pick Bash, PowerShell, or Docker Compose depending on the target machine.
- Copy the generated snippet (the snippet bakes in your server URL + auth token, nothing to edit) and run it on the target machine. That's it — the worker registers and shows up in the Workers tab within a few seconds.
The worker container is `ghcr.io/weirded/esphome-dist-client:latest`. All it needs on the host is Docker and network reach to (a) the add-on's HTTP API and (b) the ESP devices it'll flash. No inbound ports.
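For a sense of what the generated Docker Compose variant boils down to, here is a sketch. The environment variable names below are assumptions for illustration — the real snippet from the Workers tab bakes in your actual server URL and auth token, so copy that rather than hand-writing this:

```yaml
# Hypothetical docker-compose.yml for a remote build worker.
# SERVER_URL / WORKER_TOKEN are illustrative names, not the add-on's
# documented variables — use the snippet generated by + Connect Worker.
services:
  esphome-worker:
    image: ghcr.io/weirded/esphome-dist-client:latest
    restart: unless-stopped
    network_mode: host              # worker needs direct reach to ESP devices for OTA
    environment:
      SERVER_URL: "http://homeassistant.local:8765"   # the add-on's HTTP API
      WORKER_TOKEN: "paste-from-ui"                   # bearer token for job polling
```

Because the worker only makes outbound HTTP requests to the add-on, this host needs no open inbound ports, matching the architecture described above.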
The worker's Python source updates itself from the server whenever the add-on upgrades (so bug fixes to client code reach remote machines automatically). Its Docker image doesn't — when the image needs refreshing (system packages, Python version, pinned dependencies), the Workers tab flags it with an Image stale badge and you refresh it on the worker host with `docker pull ghcr.io/weirded/esphome-dist-client:latest && docker restart <name>`. DOCS.md has the longer explanation.
Only if you're running the server outside Home Assistant. Most people don't need this.
```shell
docker run -d \
  --name esphome-fleet-server \
  --network host \
  -v /path/to/esphome/configs:/config/esphome \
  -v esphome-dist-data:/data \
  -e SERVER_TOKEN=choose-a-random-string \
  ghcr.io/weirded/esphome-dist-server:latest
```

The UI is at `http://your-host:8765`. `--network host` is required so the server can discover ESP devices over mDNS.
For pre-release builds, use `:develop` instead of `:latest` — that tag updates whenever a new development build is published, so it isn't meant for production.
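If you prefer Compose over `docker run`, the same standalone deployment translates directly — this is a straightforward (untested) mapping of the command above, using the same image, volumes, port, and token:

```yaml
# docker-compose.yml equivalent of the docker run command above.
services:
  esphome-fleet-server:
    image: ghcr.io/weirded/esphome-dist-server:latest
    restart: unless-stopped
    network_mode: host              # required so the server can discover ESP devices over mDNS
    volumes:
      - /path/to/esphome/configs:/config/esphome   # your ESPHome YAML files
      - esphome-dist-data:/data                    # server state, firmware archive
    environment:
      SERVER_TOKEN: "choose-a-random-string"
```

With `network_mode: host` there is no `ports:` mapping; the UI is reachable on the host at port 8765.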
The server auto-detects its deployment shape via the `SUPERVISOR_TOKEN` env var (set by HA when running as an add-on, absent otherwise) and logs the result on startup (`Running in standalone mode (no HA Supervisor detected)` vs `Running as HA add-on (Supervisor detected)`). Override with `HA_MODE=standalone` or `HA_MODE=addon` if auto-detection is wrong for your environment.
Full functionality in standalone: compile queue, build workers, OTA upgrade, firmware archive + download, device poller + mDNS discovery, live device logs, git versioning of `/config/esphome/`, in-browser YAML editor with autocomplete, scheduled upgrades, settings drawer, entire Web UI.
Unavailable without HA (all fail-soft — server keeps running; the relevant UI affordances disable or return a 503 with a hint):
- HA auto-discovery of the companion custom integration (obvious — there's no HA to be discovered by).
- The Devices-tab "HA connectivity" column (reads HA's entity registry).
- The Restart button's HA-service fallback (native-API restart still works; the fallback is only reached when the device is offline).
- Supervisor-driven ESPHome version auto-detection. In standalone the server falls back to "latest stable from PyPI" on first boot; change via the version dropdown in the Web UI header.
Configuration differences:
- Settings live in the server's data volume instead of Supervisor options. Edit via the in-app Settings drawer (gear icon, top-right).
- `require_ha_auth` defaults to off for standalone Docker. If `:8765` is reachable from an untrusted network, turn it on in Settings → Authentication and hand out the bearer token; browsers without a token land on a styled 401 page explaining both recovery paths.
- Devices — every ESPHome config you have. One-click Upgrade on any row; Upgrade dropdown for bulk actions (upgrade all outdated, upgrade changed, upgrade selected, upgrade everything). Edit YAML inline with autocomplete and validation. Pin a device to a specific ESPHome version. Tag devices for routing + filtering. Open a live device log. Ping a device or install to a specific address from the row menu. View the fully-rendered config (with `!secret` substituted, `packages:` flattened) before pushing. Deep-link to the HA device page. Toggle archived rows in-line via the column picker.
- Queue — what's compiling, what just finished, what failed, what's queued. Live build logs. Inline Rerun · Clear · Log on every row; Cancel, Download firmware, Edit YAML, and the full Devices-tab device actions live behind the per-row hamburger.
- Workers — the workers you have connected, their platform, online status, build slots, current job, disk usage vs. quota, and tags. One-click + Connect Worker generates the `docker run` / `docker compose` snippet for adding a new one. Routing rules… opens a builder for fleet-wide job-routing rules backed by device + worker tags.
- Schedules — every scheduled upgrade across your fleet in a single view. See what's due next, when it last ran, whether the run succeeded.
Dark/light theme toggle + a "streamer mode" that blurs tokens and secrets for screen-sharing are in the header.
More in DOCS.md — configuration options, image signature verification, and the less-common operational workflows.
MIT.

