A lightweight system metrics collector with a live dashboard. Monitors CPU, RAM, disk, network, battery, and connectivity. Supports InfluxDB 1.8, InfluxDB 2.7, and Elasticsearch as storage backends.
Prerequisites: Docker & Docker Compose
```bash
git clone https://github.com/pushpitkamboj/HostWatch.git
cd HostWatch
```

Pick a backend and run:

```bash
make up-v1   # InfluxDB 1.8
make up-v2   # InfluxDB 2.7
make up-es   # Elasticsearch
```

Open the dashboard at http://localhost:8050.
| Command | Description |
|---|---|
| `make up-v1` | Start with InfluxDB 1.8 |
| `make up-v2` | Start with InfluxDB 2.x |
| `make up-es` | Start with Elasticsearch |
| `make down-v1` / `down-v2` / `down-es` | Stop the stack |
| `make logs-v1` / `logs-v2` / `logs-es` | Follow collector + dashboard logs |
| `make query-v1` | Open the InfluxDB 1.8 CLI |
| `make query-v2` | Query InfluxDB 2.x via the CLI |
| `make query-es` | Show the latest Elasticsearch documents |
| `make rebuild-v1` / `rebuild-v2` / `rebuild-es` | Rebuild and restart |
| `make ps` | Show running containers |
| `make clean` | Stop all containers and remove volumes |
The dashboard auto-refreshes every 30 seconds and includes:
- Summary cards: latest CPU, RAM, Disk, and connectivity status
- Time-series charts: CPU load, RAM usage, network I/O, disk usage, battery, connectivity RTT
- Query Explorer: run raw queries (InfluxQL, Flux, or ES Query DSL) and see results in a table
| Metric | Fields |
|---|---|
| `cpu` | `load_percent_avg`, `cpu_count` |
| `ram` | `total_bytes`, `used_bytes`, `available_bytes`, `used_percent` |
| `disk` | `total_bytes`, `used_bytes`, `free_bytes`, `used_percent` |
| `network` | `bytes_sent`, `bytes_recv`, `packets_sent`, `packets_recv` |
| `battery` | `percent`, `power_plugged` |
| `connectivity` | `is_alive`, `avg_rtt_ms`, `packet_loss` |
All metrics include a `host` tag for multi-host deployments.
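For reference, a single collected point could be shaped roughly like this (an illustrative sketch using the field names from the table above; the actual structure lives in the Pydantic models in `src/models.py` and may differ):

```python
# Illustrative shape of one "ram" data point. Measurement and field
# names come from the metrics table; the exact serialization depends
# on the backend (line protocol vs. bulk documents).
ram_point = {
    "measurement": "ram",
    "tags": {"host": "docker-host"},  # the HOST_TAG value
    "fields": {
        "total_bytes": 16_000_000_000,
        "used_bytes": 9_600_000_000,
        "available_bytes": 6_400_000_000,
        "used_percent": 60.0,
    },
}

# used_percent should agree with used_bytes / total_bytes
ratio = ram_point["fields"]["used_bytes"] / ram_point["fields"]["total_bytes"]
assert ram_point["fields"]["used_percent"] == round(100 * ratio, 1)
```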
All configuration is done through environment variables. Every variable has a default value, so the project works out of the box with zero configuration. To customize, create a .env file in the project root or pass variables inline.
| Variable | Default | Description |
|---|---|---|
| `TSDB_BACKEND` | `influxdb1` | Which database to use: `influxdb1`, `influxdb2`, or `elasticsearch` |
| `COLLECTION_INTERVAL` | `10` | Seconds between metric collections |
| `HOST_TAG` | `docker-host` | Tag identifying the host in stored metrics |
| Variable | Default | Description |
|---|---|---|
| `INFLUXDB_HOST` | `influxdb1` | InfluxDB hostname |
| `INFLUXDB_PORT` | `8086` | InfluxDB port |
| `INFLUXDB_NAME` | `hostwatch` | Database name |
| Variable | Default | Description |
|---|---|---|
| `INFLUXDB2_URL` | `http://influxdb2:8086` | InfluxDB 2.x URL |
| `INFLUXDB2_BUCKET` | `hostwatch` | Bucket name |
| `INFLUXDB2_ORG` | `hostwatch` | Organization |
| `INFLUXDB2_TOKEN` | `hostwatch-secret-token` | Auth token |
| Variable | Default | Description |
|---|---|---|
| `ES_HOST` | `elasticsearch` | Elasticsearch hostname |
| `ES_PORT` | `9200` | Elasticsearch port |
| `ES_SCHEME` | `http` | Connection scheme |
| `ES_INDEX` | `hostwatch` | Index name |
Only the variables for your chosen backend matter. The rest are ignored at runtime.
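As an example, a minimal `.env` that switches to Elasticsearch and slows down collection could look like this (the values here are illustrative; the variable names and defaults come from the tables above):

```
TSDB_BACKEND=elasticsearch
COLLECTION_INTERVAL=30
HOST_TAG=web-01
ES_HOST=elasticsearch
ES_PORT=9200
```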
```
┌──────────────────┐        ┌──────────────────┐
│    Collector     │        │    Dashboard     │
│ (psutil/icmplib) │        │  (Plotly Dash)   │
└────────┬─────────┘        └────────┬─────────┘
         │                           │
         │ batch_write()             │ get_metrics() / query()
         │                           │
         ▼                           ▼
┌─────────────────────────────────────────────────┐
│        TSDBClient (Abstract Base Class)         │
│                  (registry.py)                  │
│                                                 │
│  Unified interface: batch_write, get_metrics,   │
│  query, create_database, delete_metric_data,    │
│  create_or_alter_retention_policy, drop_database│
└──────────┬──────────────┬──────────────┬────────┘
           │              │              │
           ▼              ▼              ▼
   ┌────────────┐  ┌────────────┐  ┌──────────────┐
   │InfluxDB 1.8│  │InfluxDB 2.7│  │Elasticsearch │
   │ (InfluxQL) │  │   (Flux)   │  │ (Query DSL)  │
   └────────────┘  └────────────┘  └──────────────┘
```
The core abstraction is the `TSDBClient` abstract base class defined in `registry.py`. It declares seven abstract methods that every backend must implement. If any method is missing, Python raises `TypeError` at instantiation, not later when the collector or dashboard calls the missing method. The collector and dashboard never import or reference any specific database client. They call `get_client()` from the registry, which reads `TSDB_BACKEND` and returns the matching implementation.
This means:
- The collector calls `batch_write()` to store metrics. It doesn't know whether it's writing InfluxDB line protocol or Elasticsearch bulk documents.
- The dashboard calls `get_metrics()` for its built-in charts and `query()` for the Query Explorer. Each backend builds queries in its native language (InfluxQL, Flux, or Query DSL) internally.
- Adding a new backend only requires subclassing `TSDBClient`, implementing the seven methods, and registering it; no changes to the collector or dashboard.
For a detailed breakdown of how each method behaves across all three backends, see docs/backend-implementation-guide.md. For how this maps to OpenWISP Monitoring's backend system, see docs/mapping-to-openwisp-monitoring.md.
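The abstraction described above can be sketched in a few lines. This is a simplified stand-in for the real `registry.py`, not its actual code; the method signatures are assumptions:

```python
from abc import ABC, abstractmethod

class TSDBClient(ABC):
    """Sketch of the unified backend interface (signatures are assumed)."""

    @abstractmethod
    def create_database(self): ...

    @abstractmethod
    def batch_write(self, points): ...

    @abstractmethod
    def query(self, raw_query): ...

    @abstractmethod
    def get_metrics(self, measurement, time_range): ...

    @abstractmethod
    def delete_metric_data(self, measurement): ...

    @abstractmethod
    def create_or_alter_retention_policy(self, days): ...

    @abstractmethod
    def drop_database(self): ...

# A backend that misses any of the seven methods fails immediately
# at instantiation, not later when the missing method is called:
class IncompleteBackend(TSDBClient):
    def batch_write(self, points):
        pass  # the other six methods are deliberately left out

try:
    IncompleteBackend()
except TypeError as exc:
    print(f"caught: {exc}")  # lists the unimplemented abstract methods
```

This is exactly the "fail at instantiation" guarantee the paragraph above describes: Python's `abc` machinery refuses to construct any subclass that has unimplemented abstract methods.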
- Create `src/database/yourdb/client.py`
- Subclass `TSDBClient` and implement all seven abstract methods: `create_database`, `batch_write`, `query`, `get_metrics`, `delete_metric_data`, `create_or_alter_retention_policy`, `drop_database`
- Register it in `src/registry.py`:
```python
@register_backend("yourdb")
def _create_yourdb():
    from src.database.yourdb import YourDBClient
    return YourDBClient()
```

Python will refuse to instantiate your client if any abstract method is missing, so you get immediate feedback.
```
HostWatch/
├── src/
│   ├── collector.py         # Main collection loop
│   ├── registry.py          # TSDBClient ABC + backend registry
│   ├── system_metrics.py    # Metric collectors (psutil, icmplib)
│   ├── models.py            # Pydantic data models
│   ├── database/
│   │   ├── influxdb1/       # InfluxDB 1.8 client
│   │   ├── influxdb2/       # InfluxDB 2.7 client
│   │   └── elasticsearch/   # Elasticsearch client
│   └── dashboard/
│       └── app.py           # Plotly Dash dashboard
├── docker-compose.yml
├── Dockerfile.collector
├── Dockerfile.dashboard
├── Makefile
└── README.md
```
MIT