It is the integration-layer counterpart to Open Location Stack's mapping work.

The current repository is still evolving quickly, but the intended direction is already clear: a production-grade, open source hub that teams can run, extend, adapt, and integrate without being locked into vendor-specific middleware.

The software documentation for the current implementation is published at [Open Location Hub Docs](/open-location-hub/docs/). That section is generated from the repository's `docs/` directory and is intended to stay aligned with the code as the project evolves.

## Business value

Open Location Hub is intended to give vendors, integrators, and enterprise teams an open integration backbone they can actually use in commercial delivery models.
This project is still early and not yet feature complete.

If you care about interoperable RTLS infrastructure, now is the right time to get involved. Try the code, review the API direction, open issues, and contribute pull requests to help shape the implementation.

[Browse the generated docs](/open-location-hub/docs/)

[Learn about Floor Plan Editor](/floor-plan-editor/)

[View the repository on GitHub](https://github.com/Open-Location-Stack/open-location-hub)
File: `content/open-location-hub/docs/_index.md`
---
title: "Software Documentation"
description: "Use the documents in this directory for software behavior, runtime configuration, and integration guidance."
draft: false
generated: true
generated_from: "docs/index.md"
github_url: "https://github.com/Open-Location-Stack/open-location-hub/blob/main/docs/index.md"
---
_This page is generated from the Open Location Hub source documentation and should not be edited in the website repository._

Use the documents in this directory for software behavior, runtime configuration, and integration guidance.

- `architecture.md`: system structure, processing flows, and trust boundaries
- `configuration.md`: environment variables and runtime tuning
- `auth.md`: JWT auth modes, authorization model, and permission file behavior
- `rpc.md`: REST-facing RPC usage and control-plane behavior
File: `content/open-location-hub/docs/architecture.md`
---
title: "Architecture"
description: "Generated documentation page for Architecture."
draft: false
generated: true
generated_from: "docs/architecture.md"
github_url: "https://github.com/Open-Location-Stack/open-location-hub/blob/main/docs/architecture.md"
---
_This page is generated from the Open Location Hub source documentation and should not be edited in the website repository._

## Layers
- `cmd/hub`: process bootstrap and wiring
- `internal/config`: environment-driven configuration
- `internal/httpapi`: API surface and handlers
- `internal/ws`: OMLOX WebSocket wrapper protocol, subscriptions, and fan-out
- `internal/storage/postgres`: durable store
- `internal/mqtt`: MQTT topic mapping and broker integration
- `internal/auth`: token verification middleware
- `internal/rpc`: local-method dispatch, MQTT RPC bridging, announcements, and aggregation
- `internal/hub`: shared CRUD, ingest, derived event generation, collision evaluation, and internal event bus emission

## Metadata and Hot State
- Postgres is the durable source of truth for hub metadata, zones, fences, trackables, and location providers.
- The runtime resolves the singleton hub metadata row before the service starts so one stable `hub_id` and label are available for startup validation, internal event provenance, and identify responses.
- The hub loads those resources into an immutable in-memory metadata snapshot before it accepts traffic.
- Successful CRUD writes update Postgres first, then update the in-memory snapshot, invalidate any affected derived metadata such as zone transforms, and emit a `metadata_changes` bus event.
- A background reconcile loop reloads durable metadata periodically and emits the same `metadata_changes` notifications when it detects out-of-band create, update, or delete drift.
- Decision-critical ingest state is kept in process memory:
- dedup windows
- latest provider-source locations
- latest trackable locations and WGS84 motions
- proximity hysteresis state
- fence membership state
- collision pair state
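The write ordering described above (Postgres first, then snapshot swap, then a `metadata_changes` notification) can be sketched as follows. This is an illustrative Python sketch, not the hub's actual Go internals; the `HubMetadata`, `save_zone`, and `emit` names are assumptions.

```python
import threading
import time
from dataclasses import dataclass, field, replace


@dataclass(frozen=True)
class MetadataSnapshot:
    """Immutable in-memory view of durable metadata (zones, fences, and so on)."""
    zones: dict = field(default_factory=dict)


class HubMetadata:
    """Sketch of the CRUD write path: durable write, snapshot swap, bus event."""

    def __init__(self, store, bus):
        self._store = store      # durable Postgres-like store (assumed interface)
        self._bus = bus          # internal event bus (assumed interface)
        self._lock = threading.Lock()
        self._snapshot = MetadataSnapshot()

    def snapshot(self):
        # Readers take the current immutable snapshot without locking.
        return self._snapshot

    def update_zone(self, zone_id, zone):
        with self._lock:
            self._store.save_zone(zone_id, zone)                   # 1. Postgres first
            zones = dict(self._snapshot.zones)
            zones[zone_id] = zone
            self._snapshot = replace(self._snapshot, zones=zones)  # 2. swap snapshot
            self._bus.emit("metadata_changes", {                   # 3. notify consumers
                "id": zone_id,
                "type": "zone",
                "operation": "update",
                "timestamp": time.time(),
            })
```

Because the snapshot is replaced wholesale rather than mutated, readers never observe a half-applied write.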

## Event Fan-Out
1. REST, MQTT, or WebSocket ingest enters the shared hub service.
2. The hub validates, normalizes, deduplicates, updates in-memory transient state, and derives follow-on events.
3. The hub emits normalized internal events for locations, proximities, trackable motions, fence events, optional collision events, and metadata changes.
4. MQTT and WebSocket consume that same event stream and publish transport-specific payloads.

Implications:
- ingest logic is shared across REST, MQTT, and WebSocket
- MQTT is no longer the only downstream publication path
- the internal event seam decouples downstream publication from MQTT-specific topics
- hub-issued UUIDs for REST-managed resources, derived fence/collision events, and RPC caller IDs now use UUIDv7 so emitted identifiers are time-sortable
- internal hub events carry the persisted `origin_hub_id` so downstream transports preserve source provenance
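The fan-out seam can be pictured as a small in-process bus where the MQTT and WebSocket layers subscribe to the same normalized stream. A minimal sketch, with assumed names rather than the hub's internals:

```python
from collections import defaultdict


class EventBus:
    """Minimal in-process publish/subscribe bus (illustrative sketch)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def emit(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)  # each transport shapes its own payload


bus = EventBus()
mqtt_out, ws_out = [], []

# MQTT and WebSocket consume the same normalized stream independently.
bus.subscribe("location_updates", lambda e: mqtt_out.append(("mqtt/topic", e)))
bus.subscribe("location_updates", lambda e: ws_out.append({"event": "message", "payload": e}))

bus.emit("location_updates", {"provider_id": "p-1", "position": [1.0, 2.0]})
```

The key property is that ingest emits once and every downstream transport sees the same event, which is what decouples publication from MQTT-specific topics.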

## RPC Control Plane
1. A client calls `GET /v2/rpc/available` or `PUT /v2/rpc` over HTTP.
2. REST auth verifies the bearer token and route-level access.
3. The RPC bridge applies method-level authorization for discovery or invocation.
4. The bridge looks up the method in a unified registry containing:
- hub-owned local methods
- MQTT-discovered external handlers
5. The bridge either:
- handles the method locally
- forwards it to MQTT
- or does both and aggregates responses
6. The bridge returns a JSON-RPC result or JSON-RPC error payload to the HTTP caller.

Built-in identify behavior:
- `com.omlox.identify` returns the persisted hub label as its `name`
- `com.omlox.identify` also returns the stable persisted `hub_id`
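Assuming standard JSON-RPC 2.0 framing for the `PUT /v2/rpc` body (the envelope fields shown here are illustrative, not a confirmed wire format), an identify invocation might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "com.omlox.identify",
  "params": {}
}
```

with a matching `result` payload carrying the documented fields:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "name": "<persisted hub label>",
    "hub_id": "<stable persisted hub_id>"
  }
}
```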

Trust boundaries:
- HTTP clients should talk to the hub, not directly to MQTT devices
- MQTT should be restricted to the hub and trusted device/adaptor components
- the hub is the policy, audit, and handler-selection boundary for control-plane actions

## Proximity Resolution Path
1. A REST, WebSocket, or MQTT `Proximity` update enters the shared hub service.
2. The hub resolves the referenced zone by `zone.id` or `zone.foreign_id`.
3. Only proximity-oriented zones are accepted for this path (`rfid` and `ibeacon`).
4. The hub loads transient per-provider proximity state from the in-memory processing state.
5. The resolver applies hub defaults plus any `Zone.properties.proximity_resolution` overrides.
6. Hysteresis rules decide whether to stay in the current zone or switch to the new candidate zone.
7. The hub emits a derived local `Location` using the resolved zone position and then continues through the normal location pipeline.
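The hysteresis decision in step 6 can be sketched as a stickiness rule. The RSSI-margin policy and field names below are assumptions for illustration, standing in for the hub defaults and `Zone.properties.proximity_resolution` overrides; they are not the hub's actual logic.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProximityState:
    """Transient per-provider proximity membership state (kept in memory)."""
    current_zone: Optional[str] = None


def resolve_zone(state, candidate_zone, candidate_rssi, current_rssi,
                 switch_margin_db=6.0):
    """Decide whether to stay sticky or switch zones.

    Returns (resolved_zone, sticky). The dB margin is an illustrative
    policy knob, not a documented hub default.
    """
    if state.current_zone is None:
        state.current_zone = candidate_zone      # first sighting: adopt the zone
        return candidate_zone, False
    if candidate_zone == state.current_zone:
        return state.current_zone, True          # same zone: stay
    if candidate_rssi >= current_rssi + switch_margin_db:
        state.current_zone = candidate_zone      # clearly stronger: switch
        return candidate_zone, False
    return state.current_zone, True              # within margin: stay sticky
```

A candidate zone within the margin keeps the current membership, which is what prevents rapid flapping between adjacent proximity zones.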

Resolver notes:
- durable configuration lives in Postgres as part of the zone resource
- transient proximity membership state lives in the in-memory processing state
- derived location metadata includes hub extension fields such as `resolution_method`, `resolved_zone_id`, and `sticky`

Resolver scope:
- the resolver emits the configured zone position as the derived point
- proximity resolution supports static proximity zones
- resolution policy is driven by hub defaults and zone-specific overrides

## Contract-First Flow
1. Update OpenAPI spec.
2. Regenerate generated server/types.
3. Implement handler behavior.
4. Validate with tests and check pipeline.

## WebSocket Notes
- `GET /v2/ws/socket` is implemented outside the REST OpenAPI contract because it is a protocol companion surface rather than a generated REST endpoint.
- When auth is enabled, WebSocket messages authenticate with `params.token` and apply dedicated topic publish/subscribe authorization.
- `collision_events` is a known topic but remains configuration-gated by `COLLISIONS_ENABLED`.
- `metadata_changes` is a subscribe-only topic that carries lightweight metadata replication notifications shaped as `{id,type,operation,timestamp}`.
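A `metadata_changes` notification in the `{id,type,operation,timestamp}` shape might look like this (all values are illustrative):

```json
{
  "id": "0190b2c3-4d5e-7f80-9a1b-2c3d4e5f6a7b",
  "type": "zone",
  "operation": "update",
  "timestamp": "2025-01-01T12:00:00Z"
}
```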
File: `content/open-location-hub/docs/auth.md`
---
title: "Authentication and Authorization"
description: "This project supports standards-based JWT bearer authentication for the REST API and an authorization model built around JWT claims plus a server-side permissions file."
draft: false
generated: true
generated_from: "docs/auth.md"
github_url: "https://github.com/Open-Location-Stack/open-location-hub/blob/main/docs/auth.md"
---
_This page is generated from the Open Location Hub source documentation and should not be edited in the website repository._

This project supports standards-based JWT bearer authentication for the REST API and an authorization model built around JWT claims plus a server-side permissions file.

The same token verifier is also used for the OMLOX WebSocket surface, but WebSocket authentication happens per message through `params.token` instead of the HTTP `Authorization` header.

## Modes

- `none`: disable auth checks
- `oidc`: verify bearer tokens through OIDC discovery and JWKS
- `static`: verify bearer tokens against static PEM keys or JWKS URLs
- `hybrid`: accept either OIDC-verified or static-key tokens

## OIDC and JWKS

For `oidc` mode, the hub loads issuer metadata from `AUTH_ISSUER`, discovers the provider JWKS endpoint, and verifies JWT signatures and standard claims. Provider metadata and verifier state are cached and refreshed according to `AUTH_OIDC_REFRESH_TTL` instead of being reloaded on every request.

Relevant settings:

- `AUTH_ISSUER`
- `AUTH_AUDIENCE`
- `AUTH_ALLOWED_ALGS`
- `AUTH_CLOCK_SKEW`
- `AUTH_HTTP_TIMEOUT`
- `AUTH_OIDC_REFRESH_TTL`
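A hedged example of these settings for a generic OIDC provider; the values and duration formats are illustrative, so check `configuration.md` for the authoritative syntax:

```bash
AUTH_MODE=oidc
AUTH_ISSUER=https://idp.example.com/realms/hub
AUTH_AUDIENCE=open-location-hub
AUTH_ALLOWED_ALGS=RS256
AUTH_CLOCK_SKEW=30s
AUTH_HTTP_TIMEOUT=5s
AUTH_OIDC_REFRESH_TTL=5m
```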

## Authorization Model

Authorization uses a role and ownership based model:

- authenticate the bearer token first
- extract a role-like claim from the JWT via `AUTH_ROLES_CLAIM`
- load path permissions from `AUTH_PERMISSIONS_FILE`
- optionally enforce ownership checks with the claim configured by `AUTH_OWNED_RESOURCES_CLAIM`

Supported permission values:

- `CREATE_ANY`
- `READ_ANY`
- `UPDATE_ANY`
- `DELETE_ANY`
- `CREATE_OWN`
- `READ_OWN`
- `UPDATE_OWN`
- `DELETE_OWN`

Method mapping:

- `GET` and `HEAD` require `READ_*`
- `POST` requires `CREATE_*`
- `PUT` and `PATCH` require `UPDATE_*`
- `DELETE` requires `DELETE_*`

`*_OWN` permissions apply to routes that include explicit path identifiers, such as `/v2/providers/:providerId`. Collection routes such as `/v2/zones` use the corresponding `*_ANY` semantics.
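The method mapping and `*_OWN`/`*_ANY` split can be sketched as follows. This is an illustrative reimplementation, not the hub's code: `*_ANY` always satisfies a route, while `*_OWN` additionally applies on identifier routes (the ownership check itself happens separately).

```python
def acceptable_permissions(method, has_path_id):
    """Permissions that satisfy a route for a given HTTP method."""
    action = {
        "GET": "READ", "HEAD": "READ",
        "POST": "CREATE",
        "PUT": "UPDATE", "PATCH": "UPDATE",
        "DELETE": "DELETE",
    }[method]
    perms = {f"{action}_ANY"}      # *_ANY always satisfies the route
    if has_path_id:
        perms.add(f"{action}_OWN") # *_OWN only on routes with a path identifier
    return perms
```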

## Permissions File

The permissions file is YAML. Top-level keys are values from the configured role claim. In production this would usually be a role or group claim. For the included Dex development fixture, the role claim is set to `email` because Dex's local password database produces deterministic user identity claims without extra role mapping. That is a development convenience, not a production recommendation.

Example:

```yaml
admin@example.com:
  description: Full access
  /v2/*:
    - CREATE_ANY
    - READ_ANY
    - UPDATE_ANY
    - DELETE_ANY
  rpc:
    discover: true
    invoke:
      "*": true

reader@example.com:
  description: Read-only access
  /v2/zones:
    - READ_ANY
  /v2/zones/:zoneId:
    - READ_ANY
  /v2/rpc/available:
    - READ_ANY
  rpc:
    discover: true
    invoke:
      com.omlox.ping: true
      com.omlox.identify: true
```

Path placeholders are used for ownership checks. The hub derives claim keys from route parameter names. For example `:providerId` maps to `provider_ids`.
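The documented `:providerId` → `provider_ids` example suggests a derivation rule like the following sketch. The general pattern is inferred from that single example, so the hub's actual rule may differ:

```python
import re


def claim_key(param_name):
    """Derive an owned-resource claim key from a route parameter name.

    Sketch of the documented example (:providerId -> provider_ids):
    strip the leading colon, convert camelCase to snake_case, and
    pluralize the trailing id segment.
    """
    snake = re.sub(r"(?<!^)(?=[A-Z])", "_", param_name.lstrip(":")).lower()
    if snake.endswith("_id"):
        snake = snake[:-3]
    return f"{snake}_ids"
```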

RPC policy entries are evaluated after route-level authorization. They use a
dedicated `rpc` section per role:

- `discover: true` allows `GET /v2/rpc/available`
- `invoke` lists allowed method names
- `invoke` entries may be:
- exact method names such as `com.omlox.ping`
- prefix wildcards such as `com.vendor.*`
- `*` for full RPC invocation access

This means a role can be allowed to reach the RPC endpoint path but still be
blocked from invoking a specific method.
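The three documented `invoke` entry forms can be matched with a small checker like this sketch (illustrative, not the hub's matcher):

```python
def invoke_allowed(invoke_policy, method):
    """Check an RPC method name against a role's invoke entries.

    Supports exact names, prefix wildcards like com.vendor.*, and
    a bare * for full invocation access.
    """
    for pattern, allowed in invoke_policy.items():
        if not allowed:
            continue
        if pattern == "*":
            return True
        if pattern.endswith(".*") and method.startswith(pattern[:-1]):
            return True
        if pattern == method:
            return True
    return False
```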

WebSocket policy entries are evaluated separately from REST route permissions. They use a dedicated `websocket` section per role:

- `subscribe` lists topic names or wildcard patterns the role may subscribe to
- `publish` lists topic names or wildcard patterns the role may send `message` events to

Example:

```yaml
admin@example.com:
  websocket:
    subscribe:
      "*": true
    publish:
      location_updates: true
      proximity_updates: true
```

The WebSocket policy matcher supports exact topic names and suffix-style wildcard patterns such as `location_*`.

Subscribe-only topics include:
- `location_updates`
- `location_updates:geojson`
- `proximity_updates`
- `trackable_motions`
- `fence_events`
- `fence_events:geojson`
- `collision_events` when collisions are enabled
- `metadata_changes` for resource create, update, and delete notifications on zones, fences, trackables, and location providers

## Ownership Claims

Ownership-aware rules use the claim configured by `AUTH_OWNED_RESOURCES_CLAIM`.

Expected shape:

```json
{
  "<owned_resources_claim>": {
    "provider_ids": ["provider-1"],
    "trackable_ids": ["trackable-1"],
    "zone_ids": ["zone-1"],
    "fence_ids": ["fence-1"],
    "source_ids": ["source-1"]
  }
}
```

For `*_OWN` permissions, the request path parameter must be present in the matching owned-resource list.
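Combining the claim shape with an `*_OWN` check might look like this sketch (illustrative; the claim and key names are taken from the example above):

```python
def owns_resource(claims, owned_claim_name, claim_key, resource_id):
    """Return True if the path identifier appears in the matching
    owned-resource list of the configured ownership claim."""
    owned = claims.get(owned_claim_name, {})
    return resource_id in owned.get(claim_key, [])
```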

## Error Handling

- `401 Unauthorized`: missing bearer header, malformed token, invalid signature, bad issuer, bad audience, expired token, or other authentication failure
- `403 Forbidden`: authenticated token lacks a matching permission or ownership claim
- `403 Forbidden` on RPC also covers missing method-level discovery or invocation permission
- WebSocket auth failures are returned as OMLOX wrapper `error` events with code `10004`

Authentication failures return a `WWW-Authenticate: Bearer` header and the API error body.

## WebSocket Authentication

When auth is enabled:
- every WebSocket `subscribe` and `message` event must carry the JWT access token in `params.token`
- the hub authenticates and authorizes each message independently
- the WebSocket upgrade itself is intentionally allowed without an HTTP bearer header so the OMLOX `params.token` model can be used
- route-style REST permissions do not grant WebSocket topic access automatically; the `websocket` policy block must allow the topic

If a topic is valid but disabled by configuration, the WebSocket layer returns an OMLOX wrapper `error` event with code `10002` and a descriptive message instead of treating it as an unknown topic.
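Grounded in the per-message token rule above, a subscribe frame carries the JWT inside `params`. The surrounding envelope fields shown here are an assumed shape; see the OMLOX wrapper protocol for the exact framing:

```json
{
  "event": "subscribe",
  "topics": ["location_updates"],
  "params": { "token": "<JWT access token>" }
}
```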

## Dex Development Setup

This repository includes a Dex fixture at [tools/dex/config.yaml](https://github.com/Open-Location-Stack/open-location-hub/blob/main/tools/dex/config.yaml) and a matching permissions file at [config/auth/permissions.yaml](https://github.com/Open-Location-Stack/open-location-hub/blob/main/config/auth/permissions.yaml).

`docker compose` starts Dex on port `5556` and configures the app container to verify Dex-issued tokens with:

- `AUTH_MODE=oidc`
- `AUTH_ISSUER=http://dex:5556/dex`
- `AUTH_AUDIENCE=open-rtls-cli`
- `AUTH_ROLES_CLAIM=email`

Included test users:

- `admin@example.com` / `testpass123`
- `reader@example.com` / `testpass123`
- `owner@example.com` / `testpass123`

Fetch a token:

```bash
curl -sS -X POST http://localhost:5556/dex/token \
  -u open-rtls-cli:cli-secret \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data 'grant_type=password&scope=openid%20email%20profile&username=admin@example.com&password=testpass123'
```

Use the returned `access_token` as the bearer token when calling the hub.

## Other Providers

Keycloak and similar OIDC providers fit the same model if they expose:

- issuer discovery
- JWKS
- a stable audience for the hub
- a claim that can be mapped via `AUTH_ROLES_CLAIM`

For production deployments, prefer a real role or group claim instead of the Dex development fixture's email-based mapping. The hub is intended to verify JWT access tokens from the production IdP, not development-specific token handling.

## RPC Security Guidance

For RPC in production:
- require JWT auth
- treat `GET /v2/rpc/available` as sensitive metadata
- grant `com.omlox.ping` and `com.omlox.identify` more broadly only if operators really need them
- grant `com.omlox.core.xcmd` only to tightly controlled roles or automation identities
- keep MQTT broker access narrow so user-facing applications cannot bypass the hub's policy and audit layer

## End-to-End Coverage

The integration suite boots Dex and the hub, obtains a bearer token, and verifies that:

- authenticated requests reach protected endpoints
- missing or invalid tokens return `401`
- insufficient permissions return `403`
- ownership-restricted routes reject tokens that lack owned-resource claims