This repo hosts the Watering Controller system.
Local infrastructure via Docker Compose (current checked-in compose file):
- Start MQTT + Aspire Dashboard: `docker compose -f infra/docker-compose.yml up -d mqtt aspire-dashboard`
- MQTT broker: `localhost:1883`
- Aspire Dashboard: http://localhost:18888/
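To sanity-check the broker, you can subscribe to the default topic prefix (`home/veranda`, see the notes below). A minimal sketch using the standard Mosquitto client tools:

```sh
# Watch everything published under the default topic prefix (anonymous local broker).
mosquitto_sub -h localhost -p 1883 -t "home/veranda/#" -v
```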
Backend + frontend dev (backend serves the frontend):
- Run backend: `dotnet run --project src/backend/WateringController.Backend.csproj`
- Open the app: http://localhost:5291/
Optional full app container:
- The compose file includes an `app` profile-backed service.
- Local run: `docker compose -f infra/docker-compose.yml --profile app up --build app`
- The app is exposed on http://localhost:8080/.
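Once the service is up, a quick smoke test (assumes the default port mapping from the compose file):

```sh
# Expect an HTTP response from the containerized app.
curl -I http://localhost:8080/
```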
Container image build/push:
- Copy `infra/registry.local.env.example` to `infra/registry.local.env`.
- Set local values (see the sketch after this list):
  - `CONTROLLER_IMAGE` defaults to `registry.monge.place/watering-controller`
  - `CONTROLLER_TAG` defaults to `latest`
  - fill `REGISTRY_USERNAME`/`REGISTRY_PASSWORD` only when you plan to push
- Build locally: `.\tools\publish-controller-image.ps1`
- Build with a specific tag: `.\tools\publish-controller-image.ps1 -Tag 2026.03.24`
- Push to registry.monge.place: `.\tools\publish-controller-image.ps1 -Push`

`infra\registry.local.env` is gitignored so local registry credentials stay out of the repo.
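A minimal `infra/registry.local.env` sketch built from the documented defaults; the credential values are placeholders and only matter when pushing:

```sh
# infra/registry.local.env — local only, gitignored
CONTROLLER_IMAGE=registry.monge.place/watering-controller
CONTROLLER_TAG=latest
# Fill these only when you plan to push:
REGISTRY_USERNAME=<your-username>
REGISTRY_PASSWORD=<your-password>
```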
Testing:
- Backend unit tests: `dotnet test src/backend.tests/WateringController.Backend.Tests.csproj`
- Frontend E2E tests (Playwright):
  - Build: `dotnet build src/frontend.e2e/WateringController.Frontend.E2E.csproj`
  - Install browsers (once): `pwsh src/frontend.e2e/bin/Debug/net10.0/playwright.ps1 install`
  - Run: `dotnet test src/frontend.e2e/WateringController.Frontend.E2E.csproj`
  - Set `E2E_BASE_URL` if not using http://localhost:5291 (see the example below)
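For example, to run the E2E suite against the containerized app instead of the dev server (PowerShell; the URL is illustrative):

```powershell
# Point the Playwright tests at a non-default base URL.
$env:E2E_BASE_URL = "http://localhost:8080"
dotnet test src/frontend.e2e/WateringController.Frontend.E2E.csproj
```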
Common issues:
- Frontend build file locks (Defender/MSBuild): run `dotnet build-server shutdown` and retry.
Notes:
- The backend serves the Blazor WASM frontend from the same container.
- The MQTT broker is Eclipse Mosquitto (anonymous, local dev only).
- Electronics diagrams and wiring: docs/electronics.md
- SQLite files are created in the backend working directory by default.
- MQTT topic prefix is configurable via `Mqtt__TopicPrefix` (default `home/veranda`).
- `.squad/` is shared repository state and should be committed when team rules, routing, decisions, or logs change. Only machine-local helpers should stay ignored.
Configuration reference:
- Mqtt: `Mqtt__Host`, `Mqtt__Port`, `Mqtt__UseTls`, `Mqtt__ClientId`, `Mqtt__Username`, `Mqtt__Password`, `Mqtt__KeepAliveSeconds`, `Mqtt__ReconnectSeconds`, `Mqtt__TopicPrefix`
- Database: `Database__ConnectionString`
- Safety: `Safety__WaterLevelStaleMinutes`, `Safety__AutoStopCheckIntervalSeconds`
- Scheduling: `Scheduling__CheckIntervalSeconds`
- OpenTelemetry: `OpenTelemetry__Enabled`, `OpenTelemetry__ServiceName`, `OpenTelemetry__ServiceVersion`, `OpenTelemetry__OtlpEndpoint`, `OpenTelemetry__ExportLogs`, `OpenTelemetry__ExportMetrics`
- DevMqtt (dev only): `DevMqtt__AutoStart`
- Env-only: `WATERING_DB_PATH` (overrides database file path)
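Since these are standard ASP.NET Core configuration keys, each can also be supplied as an environment variable (`__` separates sections). An illustrative set of local overrides; the values here are examples, not required settings:

```sh
# Example environment overrides for a local run; all values are illustrative.
Mqtt__Host=localhost
Mqtt__Port=1883
Mqtt__TopicPrefix=home/veranda
Safety__WaterLevelStaleMinutes=15
Scheduling__CheckIntervalSeconds=30
```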
OpenTelemetry (logs + metrics):
- In Development, OpenTelemetry is enabled by default (`appsettings.Development.json`).
- Default OTLP endpoint is http://localhost:4317 (gRPC).
- Start dashboard and broker together: `docker compose -f infra/docker-compose.yml up -d mqtt aspire-dashboard`
- Example:
  - `OpenTelemetry__Enabled=true`
  - `OpenTelemetry__OtlpEndpoint=http://localhost:4317`
  - `OpenTelemetry__Site=home/veranda`
Frontend OpenTelemetry (browser traces):
- Enabled by default in `src/frontend/wwwroot/appsettings.Development.json`.
- Sends traces to the backend same-origin proxy `/api/otel/v1/traces` (configured as `OpenTelemetry:OtlpHttpEndpoint=/api/otel` in frontend settings).
- Backend forwards frontend traces to Aspire OTLP/HTTP (`OpenTelemetry:FrontendTraceProxyTarget`, default `http://localhost:4318/v1/traces`).
- Includes custom UI events for SignalR lifecycle and key actions on the Control/Schedules pages.
- Fallback: the frontend also posts event payloads to `/api/otel/client-events` so frontend activity is visible in backend logs if browser OTEL export fails.
- Config keys: `OpenTelemetry:Enabled`, `OpenTelemetry:ServiceName`, `OpenTelemetry:OtlpHttpEndpoint`
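A sketch of how those keys could look in `src/frontend/wwwroot/appsettings.Development.json`, assuming they sit under a single `OpenTelemetry` section; the `ServiceName` value is a placeholder, not the repo's actual setting:

```json
{
  "OpenTelemetry": {
    "Enabled": true,
    "ServiceName": "watering-frontend",
    "OtlpHttpEndpoint": "/api/otel"
  }
}
```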
Azure Application Insights (via OpenTelemetry Collector):
- Template collector config: `infra/otel-collector-config.azure.yaml`
- Requires env var: `APPLICATIONINSIGHTS_CONNECTION_STRING`
- Typical switch from Aspire Dashboard to the Azure collector (see the example after this list):
  - Start the collector using the template (see the commented `otel-collector-azure` service in `infra/docker-compose.yml`).
  - Point backend OTLP to the collector: `OpenTelemetry__OtlpEndpoint=http://localhost:4317`
  - Keep the frontend proxy target: `OpenTelemetry__FrontendTraceProxyTarget=http://localhost:4318/v1/traces`
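Put together, the switch could look like this (PowerShell; the connection string is a placeholder for the value from your Azure resource):

```powershell
# Placeholder connection string; the endpoint values are the documented defaults.
$env:APPLICATIONINSIGHTS_CONNECTION_STRING = "<your-connection-string>"
$env:OpenTelemetry__OtlpEndpoint = "http://localhost:4317"
$env:OpenTelemetry__FrontendTraceProxyTarget = "http://localhost:4318/v1/traces"
dotnet run --project src/backend/WateringController.Backend.csproj
```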
Azure table mapping expectations:
- Requests: ASP.NET Core incoming spans.
- Dependencies: outgoing HTTP spans and other dependency spans.
- Traces: structured backend logs and relayed frontend logs.
- Events: relayed frontend events are emitted as explicit frontend trace spans (`event.name=...`) and are queryable with consistent dimensions.
Canonical custom property keys (kept consistent across logs/traces/relayed events):
`site`, `component`, `request.id`, `http.route`, `schedule.id`, `device.id`, `safety.reason`, `event.name`, `event.source`, `event.sent_at`
These keys are centrally defined in `src/backend/Telemetry/TelemetryDimensions.cs`.
Database path override:
- Set `WATERING_DB_PATH` to point at a mounted volume path when running in a container.
- Example: `WATERING_DB_PATH=/data/watering.db` (see the container run sketch below)
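For instance, running the published image with a named volume (a sketch: the image name and tag are the documented defaults, the volume name is arbitrary, and the port mapping assumes the container listens on 8080 as in the compose setup):

```sh
# Keep the SQLite database on a named volume so it survives container restarts.
docker run --rm -p 8080:8080 \
  -v watering-data:/data \
  -e WATERING_DB_PATH=/data/watering.db \
  registry.monge.place/watering-controller:latest
```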
Firmware update (PlatformIO):
- Build + upload via USB:
  - `.\tools\update-firmware.ps1 -Target pump`
  - `.\tools\update-firmware.ps1 -Target level -Port COM5`
- OTA (future): `.\tools\update-firmware.ps1 -Target pump -Mode ota -Port 192.168.1.50`
Dev test utilities:
- `/test` (development only) publishes MQTT test messages.
- Test endpoints: `/api/test/mqtt/waterlevel`, `/api/test/mqtt/pumpstate`, `/api/test/mqtt/systemstate`, `/api/test/mqtt/alarm`, `/api/test/mqtt/pumpcmd`
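To exercise one of these from the command line, something like the following may work; the HTTP verb and payload shape are assumptions here, so check the backend test endpoints for the actual contract:

```sh
# Illustrative only: verb and (absent) payload are assumptions, not the documented contract.
curl -X POST http://localhost:5291/api/test/mqtt/waterlevel
```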