Blocknet mining pool server in Rust (stratum + API + payouts).
The repo ships four runtime binaries from a Cargo workspace:
- `blocknet-pool-api`: API/UI process only
- `blocknet-pool-stratum`: Stratum + payouts + maintenance process only
- `blocknet-pool-monitor`: on-host monitoring sampler + Prometheus endpoint
- `blocknet-pool-recoveryd`: privileged local recovery agent for daemon cutover/sync/rebuild workflows
For production, prefer the split services so API/UI deploys do not drop Stratum connections.
Workspace layout:
- `apps/`: service-specific app packages
- `crates/`: shared library packages with explicit dependency boundaries
- `frontend/`: embedded React/Vite UI for the API binary
From the local repo root:
```bash
./scripts/deploy_bntpool.sh
```

Use split-service migration only when moving from the legacy combined service or when you intentionally need to reinstall the systemd unit files:

```bash
./scripts/deploy_bntpool.sh --migrate-split
```

What it does:

- builds the frontend bundle locally (unless `--skip-ui-build`)
- builds the split release binaries locally
- rsyncs pool source to `bntpool:/opt/blocknet/blocknet-pool`
- uploads the locally built binaries to the server
- restarts only the changed service(s): `blocknet-pool-api.service`, `blocknet-pool-stratum.service`, `blocknet-pool-monitor.service`
- frontend-only changes still restart `blocknet-pool-api.service` because the UI bundle is embedded into that binary
- tails recent logs for both services
The pool now ships a dedicated on-host monitor and repo-managed monitoring assets:
- `blocknet-pool-monitor.service`: probes API, Stratum, Postgres, daemon, wallet, share freshness, and payout queue state every `10s`
- DB-backed heartbeats and incidents drive `/api/status`
- Prometheus + Alertmanager + node exporter + blackbox exporter configs live under `deploy/monitoring/`
- Cloudflare Worker assets for outside-in public HTTP probes live under `deploy/cloudflare/monitor-worker/`
Provision the on-host monitoring packages and configs on bntpool with:
```bash
./scripts/provision_bntpool_monitoring.sh
```

That script installs Prometheus, Alertmanager, node exporter, blackbox exporter, and the local Discord relay. Grafana provisioning files are included in-repo and can be installed separately if Grafana is present on the host.
When you need a fresh blocknet-core daemon binary for the pool host, build it
locally from this repo using the sibling daemon checkout instead of compiling on
bntpool.
From blocknet-pool/:
```bash
./scripts/build_blocknet_daemon.sh
```

That writes the daemon artifact to `build/blocknet-core-linux-amd64`.

To stage it on the server without restarting the daemon yet:

```bash
./scripts/build_blocknet_daemon.sh --upload bntpool
```

For the full repeatable deploy path, including the managed `blocknetd.service` unit, release directory rotation, symlink switch, restart, and `/api/status` verification:

```bash
./scripts/deploy_blocknet_daemon_bntpool.sh
```

By default the build script reads source from `../blocknet-core`, derives the required Go version from that repo's `go.mod`, builds through the daemon repo's Dockerfile, and uploads to `/opt/blocknet/blocknet-core/blocknet.new`.
```bash
npm --prefix frontend ci
npm --prefix frontend run build
cargo build --release -p blocknet-pool-api-app --bin blocknet-pool-api
cargo build --release -p blocknet-pool-stratum-app --bin blocknet-pool-stratum
cargo build --release -p blocknet-pool-monitor-app --bin blocknet-pool-monitor
cargo build --release -p blocknet-pool-recoveryd-app --bin blocknet-pool-recoveryd
```
```bash
cargo run --release -p blocknet-pool-api-app --bin blocknet-pool-api
cargo run --release -p blocknet-pool-stratum-app --bin blocknet-pool-stratum
cargo run --release -p blocknet-pool-monitor-app --bin blocknet-pool-monitor
# run the API and Stratum binaries in separate terminals
# run the monitor binary in a third terminal if you want the DB-backed status page/metrics loop
# if missing, config.json and .env are created automatically
# edit .env and set BLOCKNET_WALLET_PASSWORD
```

Custom config:
```bash
cargo run --release -p blocknet-pool-api-app --bin blocknet-pool-api -- --config /path/to/config.json
cargo run --release -p blocknet-pool-stratum-app --bin blocknet-pool-stratum -- --config /path/to/config.json
cargo run --release -p blocknet-pool-monitor-app --bin blocknet-pool-monitor -- --config /path/to/config.json
cargo run --release -p blocknet-pool-recoveryd-app --bin blocknet-pool-recoveryd -- --config /path/to/config.json
```

The web UI is built with React + TypeScript + Vite from `frontend/`.
Local dev:
```bash
cd frontend
npm install
npm run dev
```

Build the embedded API bundle:

```bash
cd frontend
npm ci
npm run build
```

Build output is written to `frontend/dist/` and embedded into the API binary during API builds. A fresh clone must build the frontend before `cargo build` or `cargo run` for blocknet-pool-api.
The embedded assets are served at:
- `GET /`
- `GET /ui` (index)
- `GET /ui-assets/app.js`
- `GET /ui-assets/app.css`
Because the UI bundle is embedded into the API binary, frontend-only changes still require rebuilding and restarting the API service.
The pool can authenticate to the daemon with any of:

- `daemon_token` in config.json
- `daemon_cookie_path` in config.json
- auto-discovery of `api.cookie` via `daemon_data_dir`, Blocknet wrapper config under `~/.config/bnt`, wrapper pidfiles, and running `blocknet`/`blocknet-core-*` process metadata

When a daemon request returns 401 unauthorized, the pool refreshes the token from the cookie once and retries.
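A minimal config sketch combining these keys (the values are placeholders; in practice you set only the token or the cookie fields, not all three):

```json
{
  "daemon_token": "REPLACE_ME",
  "daemon_cookie_path": "/path/to/api.cookie",
  "daemon_data_dir": "/path/to/daemon-data"
}
```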
The pool uses Postgres. Set database_url in config.json, for example:
```json
{
  "database_url": "postgres://user:password@127.0.0.1:5432/blocknet_pool"
}
```

`database_url` is required. `database_pool_size` controls Postgres connection fan-out (default 4).
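If the default fan-out is too small, the same file can raise it explicitly (a sketch; 8 is an arbitrary example value, the default is 4):

```json
{
  "database_url": "postgres://user:password@127.0.0.1:5432/blocknet_pool",
  "database_pool_size": 8
}
```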
Use the included migration script to preserve existing pool history:
```bash
scripts/migrate_sqlite_to_postgres.sh \
  --sqlite /var/lib/blocknet-pool/pool.db \
  --postgres 'postgres://blocknet:REPLACE_ME@127.0.0.1:5432/blocknet_pool'
```

Recommended order:

- Stop the running pool service(s).
- Run the migration script.
- Set `database_url` in `/etc/blocknet/pool/config.json`.
- Start the pool service(s).
- Verify `/api/stats` and admin endpoints.
- Stratum now defaults to a loopback bind (`stratum_host=127.0.0.1`). Set `stratum_host` explicitly to expose it.
- API TLS is supported via `api_tls_cert_path` and `api_tls_key_path`.
- If only one TLS path is set, startup logs a warning and serves HTTP.
- If Stratum is exposed publicly, place it behind a TLS terminator.
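For an exposed deployment, the relevant hardening keys look roughly like this (a sketch; the bind host and certificate paths are placeholders, not defaults):

```json
{
  "stratum_host": "0.0.0.0",
  "api_tls_cert_path": "/etc/blocknet/pool/tls/fullchain.pem",
  "api_tls_key_path": "/etc/blocknet/pool/tls/privkey.pem"
}
```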
- API/UI server
- Stratum server
- Template/job manager
- Validation engine (bounded queues)
- Persistent storage (Postgres)
- Payout processor
- DB/meta-backed live snapshot bridge for split-service API fallbacks
Public endpoints (no API key required):
- `GET /api/info`
- `GET /api/stats`
- `GET /api/stats/history`
- `GET /api/stats/insights`
- `GET /api/luck`
- `GET /api/status`
- `GET /api/events`
- `GET /api/blocks`
- `GET /api/payouts/recent`
- `GET /api/miner/{address}`
- `GET /api/miner/{address}/balance`
- `GET /api/miner/{address}/hashrate`
Protected endpoints (API key required):
- `GET /api/miners`
- `GET /api/payouts`
- `GET /api/fees`
- `GET /api/health`
- `GET /api/daemon/logs/stream`
When `api_key` is unset, protected endpoints return `503 api key not configured`.
Accepted headers:
- `x-api-key: <api_key>`
- `Authorization: Bearer <api_key>`
Paged/filterable list mode:
- `paged=true` enables the paged response shape (`items` + `page`).
- Shared query params: `limit`, `offset`.
- `GET /api/miners`: `search`, `sort`.
- `GET /api/blocks`: `finder`, `status`, `sort`.
- `GET /api/payouts`: `address`, `tx_hash`, `sort`.
- `GET /api/fees`: `fee_address`, `sort`.
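Putting the auth header and list parameters together, a protected paged query might look like this (a sketch; the `sort` value and limits are illustrative, not documented values):

```
GET /api/miners?paged=true&limit=25&offset=0&sort=hashrate
x-api-key: <api_key>
```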
Daemon log stream details:
- Admin UI includes a live daemon log viewer tab.
- Stream endpoint: `GET /api/daemon/logs/stream?tail=200`.
- Log source fallback order:
  - `journalctl -a -u blocknetd.service`
  - `tail -F <daemon_data_dir>/debug.log`
- `GET /` (dashboard)
- `GET /ui` (alias)
- Multi-tab WebUI includes:
  - pool + onboarding info (`/api/info`)
  - API key auth UX for protected routes
  - miner lookup
  - miners/blocks/payouts/fees tables with filter + pagination
  - live trend charts
  - operator health panel (`/api/health`)
  - live daemon logs panel (`/api/daemon/logs/stream`)
- Login supports protocol negotiation (`protocol_version`, `capabilities`).
- Login rejects malformed payout addresses early (base58 + checksum-compatible Blocknet stealth address validation).
- `stratum_submit_v2_required=true` (default): requires protocol v2 + submit `claimed_hash`.
- `stratum_submit_v2_required=false`: allows legacy submits without `claimed_hash` (full verification path).
- Per-connection submit rate limiting is enabled (`stratum_submit_rate_limit_window`, `stratum_submit_rate_limit_max`).
- Queue pressure returns `server busy, retry` (no inline bypass).
- Per-connection vardiff retargeting is enabled by default to target a small number of shares per window (`vardiff_*` config keys).
- Vardiff difficulty is cached per `address+worker` and reused on reconnect/restart when the hint is fresh (1h TTL), reducing post-restart ramp-up.
- The default vardiff profile assumes a weak baseline miner and aims for ~10 shares / 5 minutes (`initial_share_difficulty=60`, `vardiff_target_shares=10`).
- Template refresh identity uses stable tip fields (`height`, `network_target`, `prev_hash`) to avoid daemon template churn while still refreshing on meaningful tip/template transitions.
- Submits against a previous template assignment are accepted only inside a short grace window (`stale_submit_grace`, default `5s`), based on when the share was received.
- Daemon SSE tip events (`/api/events`) are enabled by default (`sse_enabled=true`) and mark templates stale from `new_block` hash + height.
- Timestamp-only `new_block` changes do not trigger refreshes; only hash/height changes can trigger staleness.
- Same-height hash-change refresh is disabled by default (`refresh_on_same_height=false`) to avoid replay churn; enable it only if you want immediate same-height reorg reaction.
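Pulling the keys above into one config sketch (the rate-limit window/max values are illustrative assumptions; the other values are the stated defaults):

```json
{
  "stratum_submit_v2_required": true,
  "stratum_submit_rate_limit_window": "10s",
  "stratum_submit_rate_limit_max": 120,
  "initial_share_difficulty": 60,
  "vardiff_target_shares": 10,
  "stale_submit_grace": "5s",
  "sse_enabled": true,
  "refresh_on_same_height": false
}
```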
- Risk escalation persistence now uses an atomic update path per address.
- Accepted-share hashrate tracking keeps a 1-hour window with a hard in-memory cap.
- Template refresh matching now uses stable identity fields to catch meaningful same-height updates.
- API key comparison is currently direct string equality by design for this pool deployment model; this is accepted for now and is not treated as a blocker.
- `payouts_enabled` toggles payout sending globally.
- `payout_pause_file` pauses payouts when the file exists.
- `pplns_window_duration` controls the time-based PPLNS lookback window (default `6h`).
- `payout_wait_priority_threshold` promotes queued payouts to longest-waiting-first after they have waited at least the configured duration (default `6h`).
- `payout_min_verified_ratio` is a hard verified-difficulty gate. Keeping it near the sampler coverage can exclude honest miners due to sampling variance and vardiff.
- `payout_provisional_cap_multiplier` caps aged provisional difficulty relative to verified difficulty. Prefer a cap over a hard ratio cutoff when you want reduced credit instead of zero credit.
- Keep `sample_rate` and `min_sample_every` comfortably above the payout policy's effective verified-share target so honest miners do not flap around the cutoff.
- Optional caps: `payout_max_recipients_per_tick`, `payout_max_total_per_tick`, `payout_max_per_recipient`.
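As an illustration of the time-based PPLNS idea (a hypothetical sketch, not the pool's actual accounting code): only shares submitted inside the lookback window count, and the reward is split pro rata by share difficulty.

```sh
#!/bin/sh
# Hypothetical PPLNS sketch. Input columns: miner, submit time (epoch s),
# share difficulty. With now=1000000 and a 21600s (6h) window matching the
# pplns_window_duration default, carol's share is too old and is excluded;
# a reward of 100 is then split pro rata by in-window difficulty.
shares='alice 999900 60
alice 995000 60
bob 999950 120
carol 900000 500'

result=$(printf '%s\n' "$shares" | awk -v now=1000000 -v win=21600 -v reward=100 '
  now - $2 <= win { weight[$1] += $3; total += $3 }
  END { for (m in weight) printf "%s %.2f\n", m, reward * weight[m] / total }' | sort)
echo "$result"
# -> alice 50.00
# -> bob 50.00
```

Alice and bob each have 120 units of in-window difficulty, so each receives half the reward; a hard ratio cutoff would instead zero out anyone below the gate, which is why the notes above prefer the cap-style policy.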
- Retention worker runs on `retention_interval`.
- Old rows are rolled up into summary tables before pruning: `share_daily_summaries`, `payout_daily_summaries`.
- Retention controls: `shares_retention`, `payouts_retention`.
- `get_total_share_count` and total rejected share metrics include rolled-up share summaries.
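An illustrative retention fragment (the duration values, and the assumption that they use the same duration syntax as `pplns_window_duration`, are assumptions rather than documented defaults):

```json
{
  "retention_interval": "6h",
  "shares_retention": "720h",
  "payouts_retention": "2160h"
}
```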
- Added CI smoke harness: `scripts/ci_e2e_smoke.sh` (mock daemon + real pool process + Stratum/API probe).
The pool expects a running Blocknet daemon API with mining + wallet routes enabled.
At minimum:
- `GET /api/status`
- `GET /api/mining/blocktemplate`
- `POST /api/mining/submitblock`
- `GET /api/block/{id}`
- `GET /api/wallet/address`
- `GET /api/wallet/balance`
- `POST /api/wallet/send`
- `POST /api/wallet/load`
- `POST /api/wallet/unlock`
- This repository no longer uses Go; runtime is Rust-only.