diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8cd5faf3..38b26327 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,10 +5,17 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
-## [Unreleased]
+## [13.0.0] — 2026-03-03
+
+### Added
+
+- **Observer API stabilized (B3)** — `subscribe()` and `watch()` promoted to `@stability stable` with `@since 13.0.0` annotations. Fixed `onError` callback type from `(error: Error)` to `(error: unknown)` to match runtime catch semantics. `watch()` pattern param now correctly typed as `string | string[]` in `_wiredMethods.d.ts`.
+- **`graph.patchMany()` batch patch API (B11)** — applies multiple patch callbacks sequentially. Each callback sees state from prior commits. Returns array of commit SHAs. Inherits reentrancy guard from `graph.patch()`.
+- **Causality bisect (B2)** — `BisectService` performs binary search over a writer's patch chain to find the first bad patch. CLI: `git warp bisect --good <sha> --bad <sha> --test <cmd> --writer <id>`. O(log N) materializations. Exit codes: 0=found, 1=usage, 2=range error, 3=internal.
### Changed
+- **BREAKING: `getNodeProps()` returns `Record` instead of `Map` (B100)** — aligns with `getEdgeProps()` which already returns a plain object. Callers must replace `.get('key')` with `.key` or `['key']`, `.has('key')` with `'key' in props`, and `.size` with `Object.keys(props).length`. `ObserverView.getNodeProps()` follows the same change.
- **GraphPersistencePort narrowing (B145)** — domain services now declare focused port intersections (`CommitPort & BlobPort`, etc.) in JSDoc instead of the 23-method composite `GraphPersistencePort`. Removed `ConfigPort` from the composite (23 → 21 methods); adapters still implement `configGet`/`configSet` on their prototypes. Zero behavioral change.
- **Codec trailer validation extraction (B134, B138)** — created `TrailerValidation.js` with `requireTrailer()`, `parsePositiveIntTrailer()`, `validateKindDiscriminator()`. All 4 message codec decoders now use shared helpers exclusively. Patch and Checkpoint decoders now also perform semantic field validation (graph name, writer ID, OID, SHA-256) matching the Audit decoder pattern. Internal refactor for valid inputs, with stricter rejection of malformed messages.
- **HTTP adapter shared utilities (B135)** — created `httpAdapterUtils.js` with `MAX_BODY_BYTES`, `readStreamBody()`, `noopLogger`. Eliminates duplication across Node/Bun/Deno HTTP adapters. Internal refactor, no behavioral change.
@@ -25,6 +32,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- **Fake timer lifecycle (B131)** — moved `vi.useFakeTimers()` from `beforeAll` to `beforeEach` and `vi.useRealTimers()` into `afterEach` in `WarpGraph.watch.test.js`.
- **Test determinism (B132)** — seeded `Math.random()` in benchmarks with Mulberry32 RNG (`0xDEADBEEF`), added `seed: 42` to all fast-check property tests, replaced random delays in stress test with deterministic values.
- **Global mutation documentation (B133)** — documented intentional `globalThis.Buffer` mutation in `noBufferGlobal.test.js` and `crypto.randomUUID()` usage in `SyncAuthService.test.js`.
+- **Code review fixes (B148):**
+ - **CLI hardening** — added `--writer` validation to bisect, SHA format regex on `--good`/`--bad`, rethrow ENOENT/EACCES from test command runner instead of swallowing.
+ - **BisectService cleanup** — removed dead code, added invariant comment, replaced `BisectResult` interface with discriminated union type, fixed exit code constant.
+ - **Prototype-pollution hardening** — `Object.create(null)` for property bags in `getNodeProps`, `getEdgeProps`, `getEdges`, `buildPropsSnapshot`; fixed indexed-path null masking in `getNodeProps`.
+ - **Docs housekeeping** — reconciled ROADMAP inventory counts (24→29 done), fixed M11 sequencing, removed done items from priority tiers, fixed stale test vector counts (6→9), corrected Deno test name, moved B100 to `### Changed`.
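The Mulberry32 generator named in B132 above is a small public-domain PRNG; a minimal sketch follows (the project's actual wrapper name and file location are not shown in this diff, so the function name here is illustrative):

```javascript
// Mulberry32: seeded 32-bit PRNG. Same seed => same sequence, which is
// what makes benchmarks and property tests deterministic (B132).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function next() {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    // Map the 32-bit result into [0, 1), mirroring Math.random()'s range.
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Two generators with the same seed produce identical streams.
const rngA = mulberry32(0xdeadbeef);
const rngB = mulberry32(0xdeadbeef);
```

Seeding with `0xDEADBEEF` (as the changelog entry states) then lets a benchmark replace `Math.random` with a reproducible source.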
## [12.4.1] — 2026-02-28
@@ -1528,7 +1540,7 @@ Implements [Paper III](https://doi.org/10.5281/zenodo.17963669) (Computational H
#### Query API (V7 Task 7)
- **`graph.hasNode(nodeId)`** - Check if node exists in materialized state
-- **`graph.getNodeProps(nodeId)`** - Get all properties for a node as Map
+- **`graph.getNodeProps(nodeId)`** - Get all properties for a node (returns `Record` since v13.0.0)
- **`graph.neighbors(nodeId, dir?, label?)`** - Get neighbors with direction/label filtering
- **`graph.getNodes()`** - Get all visible node IDs
- **`graph.getEdges()`** - Get all visible edges as `{from, to, label}` array
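The B100 migration described above maps each `Map` call onto a plain-object equivalent; the pattern can be sketched with stand-in data (the property values are illustrative, not from the codebase):

```javascript
// From v13, getNodeProps() returns a plain object (Record) instead of a Map.
// Stand-in props as the new API would return them:
const props = { name: 'Alice', role: 'admin' };

// Old Map-era call        -> new Record-era equivalent
const name = props.name;                 // was props.get('name')
const hasRole = 'role' in props;         // was props.has('role')
const size = Object.keys(props).length;  // was props.size
```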
diff --git a/README.md b/README.md
index acf89d21..043283f4 100644
--- a/README.md
+++ b/README.md
@@ -8,12 +8,14 @@
-## What's New in v12.4.1
+## What's New in v13.0.0
-- **JSDoc total coverage** — eliminated all unsafe `{Object}`, `{Function}`, `{*}` type patterns across 135 files (190+ sites), replacing them with precise inline typed shapes.
-- **Zero tsc errors** — fixed tsconfig split-config includes and type divergences; 0 errors across all three tsconfig targets.
-- **JSR dry-run fix** — worked around a deno_ast 0.52.0 panic caused by overlapping text-change entries for duplicate import specifiers.
-- **`check-dts-surface.js` regex fix** — default-export parsing now correctly captures identifiers instead of keywords for `export default class/function` patterns.
+- **BREAKING: `getNodeProps()` returns `Record`** — aligns with `getEdgeProps()`. Replace `.get('key')` with `.key`, `.has('key')` with `'key' in props`, `.size` with `Object.keys(props).length`.
+- **BREAKING: Removed `PerformanceClockAdapter` and `GlobalClockAdapter`** — use `ClockAdapter` directly.
+- **`graph.patchMany()`** — batch multiple patches sequentially; each callback sees prior state.
+- **`git warp bisect`** — binary search over writer patch history to find the first bad commit. O(log N) materializations.
+- **Observer API stable** — `subscribe()` and `watch()` promoted to stable with `@since 13.0.0`.
+- **`BisectService`** — domain service exported for programmatic use.
See the [full changelog](CHANGELOG.md) for details.
@@ -183,7 +185,7 @@ Query methods auto-materialize by default. Just open a graph and start querying:
```javascript
await graph.getNodes(); // ['user:alice', 'user:bob']
await graph.hasNode('user:alice'); // true
-await graph.getNodeProps('user:alice'); // Map { 'name' => 'Alice', 'role' => 'admin' }
+await graph.getNodeProps('user:alice'); // { name: 'Alice', role: 'admin' }
await graph.neighbors('user:alice', 'outgoing'); // [{ nodeId: 'user:bob', label: 'manages', direction: 'outgoing' }]
await graph.getEdges(); // [{ from: 'user:alice', to: 'user:bob', label: 'manages', props: {} }]
await graph.getEdgeProps('user:alice', 'user:bob', 'manages'); // { weight: 0.9 } or null
@@ -371,7 +373,7 @@ const view = await graph.observer('publicApi', {
});
const users = await view.getNodes(); // only user:* nodes
-const props = await view.getNodeProps('user:alice'); // Map without ssn/password
+const props = await view.getNodeProps('user:alice'); // { name: 'Alice', ... } without ssn/password
const result = await view.query().match('user:*').where({ role: 'admin' }).run();
// Measure information loss between two observer perspectives
diff --git a/ROADMAP.md b/ROADMAP.md
index bc252f37..b8872024 100644
--- a/ROADMAP.md
+++ b/ROADMAP.md
@@ -1,7 +1,7 @@
# ROADMAP — @git-stunts/git-warp
-> **Current version:** v12.4.1
-> **Last reconciled:** 2026-03-02 (M14 HYGIENE added from HEX_AUDIT; completed items archived to COMPLETED.md; BACKLOG.md retired)
+> **Current version:** v13.0.0
+> **Last reconciled:** 2026-03-03 (v13.0.0 release: M11 COMPASS II complete, B100/B140 breaking, B44/B124/B125/B146 done)
> **Completed milestones:** [docs/ROADMAP/COMPLETED.md](docs/ROADMAP/COMPLETED.md)
---
@@ -25,11 +25,11 @@
### M10.T4 — Causality Bisect Spec
-- **Status:** `PENDING`
+- **Status:** `DONE` (spec existed; implementation completed in M11)
**Items:**
-- **B2 (spec only)** (CAUSALITY BISECT) — design the bisect CLI contract + data model. Commit spec with test vectors. Full implementation deferred to M11 — but the spec lands here so bisect is available as a debugging tool during M10 trust hardening.
+- **B2 (spec only)** ✅ (CAUSALITY BISECT) — Spec committed at `docs/specs/BISECT_V1.md`. Full implementation shipped in M11/v13.0.0.
**M10 Gate:** Signed ingress enforced end-to-end; trust E2E receipts green; B63 GC isolation verified under concurrent writes; B64 sync payload validation green; B65 divergence logging verified; B2 spec committed with test vectors.
@@ -165,37 +165,9 @@ Design-only items. RFCs filed — implementation deferred to future milestones.
---
-## Milestone 11 — COMPASS II
+## Milestone 11 — COMPASS II ✅ COMPLETE (v13.0.0)
-**Theme:** Developer experience
-**Objective:** Ship bisect, public observer API, and batch patch ergonomics.
-**Triage date:** 2026-02-17
-
-### M11.T1 — Causality Bisect (Implementation)
-
-- **Status:** `PENDING`
-
-**Items:**
-
-- **B2 (implementation)** (CAUSALITY BISECT) — full implementation building on M10 spec. Binary search for first bad tick/invariant failure. `git bisect` for WARP.
-
-### M11.T2 — Observer API
-
-- **Status:** `PENDING`
-
-**Items:**
-
-- **B3** (OBSERVER API) — public event contract. Internal soak period over (shipped in PULSE, used internally since). Stabilize the public surface.
-
-### M11.T3 — Batch Patch API
-
-- **Status:** `PENDING`
-
-**Items:**
-
-- **B11** (`graph.patchMany(fns)` BATCH API) — sequence multiple patch callbacks atomically, each seeing the ref left by the previous. Natural complement to `graph.patch()`.
-
-**M11 Gate:** Bisect correctness verified on seeded regressions; observer contract snapshot-tested; patchMany passes no-coordination suite.
+Archived to [COMPLETED.md](docs/ROADMAP/COMPLETED.md#milestone-11--compass-ii).
---
@@ -209,10 +181,10 @@ Items picked up opportunistically without blocking milestones. No milestone assi
| ID | Item |
|----|------|
-| B124 | **TRUST PAYLOAD PARITY TESTS** — assert CLI `trust` and `AuditVerifierService.evaluateTrust()` emit shape-compatible error payloads. From BACKLOG 2026-02-27. |
-| B125 | **`CachedValue` NULL-PAYLOAD SEMANTIC TESTS** — document and test whether `null` is a valid cached value. From BACKLOG 2026-02-27. |
+| ~~B124~~ | ✅ ~~**TRUST PAYLOAD PARITY TESTS**~~ — 22 tests verifying CLI vs service shape parity. Done in v13.0.0. |
+| ~~B125~~ | ✅ ~~**`CachedValue` NULL-PAYLOAD SEMANTIC TESTS**~~ — 3 tests documenting null = "no value" sentinel. Done in v13.0.0. |
| B127 | **DENO SMOKE TEST** — `npm run test:deno:smoke` for fast local pre-push confidence without full Docker matrix. From BACKLOG 2026-02-25. |
-| B44 | **SUBSCRIBER UNSUBSCRIBE-DURING-CALLBACK E2E** — event system edge case; known bug class that bites silently |
+| ~~B44~~ | ✅ ~~**SUBSCRIBER UNSUBSCRIBE-DURING-CALLBACK E2E**~~ — 3 edge-case tests (cross-unsubscribe, subscribe-during-callback, unsubscribe-in-onError). Done in v13.0.0. |
| B34 | **DOCS: SECURITY_SYNC.md** — extract threat model from JSDoc into operator doc |
| B35 | **DOCS: README INSTALL SECTION** — Quick Install with Docker + native paths |
| B36 | **FLUENT STATE BUILDER FOR TESTS** — `StateBuilder` helper replacing manual `WarpStateV5` literals |
@@ -229,7 +201,7 @@ Items picked up opportunistically without blocking milestones. No milestone assi
| B79 | **WARPGRAPH CONSTRUCTOR LIFECYCLE DOCS** — document cache invalidation strategy for 25 instance variables: which operations dirty which caches, which flush them. From B-AUDIT-16 (TSK TSK). **File:** `src/domain/WarpGraph.js:69-198` |
| B80 | **CHECKPOINTSERVICE CONTENT BLOB UNBOUNDED MEMORY** — iterates all properties into single `Set` before tree serialization. Stream content OIDs in batches. From B-AUDIT-10 (JANK). **File:** `src/domain/services/CheckpointService.js:224-226` |
| B81 | **`attachContent` ORPHAN BLOB GUARD** — `attachContent()` unconditionally writes blob before `setProperty()`. Validate before push to prevent orphan blobs. From B-CODE-2. **File:** `src/domain/services/PatchBuilderV2.js` |
-| B146 | **UNIFY `CorePersistence` / `FullPersistence` TYPEDEFS** — `CorePersistence` (`WarpPersistence.js`) and `FullPersistence` (`WarpGraph.js`) are identical `CommitPort & BlobPort & TreePort & RefPort` intersections. Consolidate into one canonical typedef and update all import sites. From B145 PR review. |
+| ~~B146~~ | ✅ ~~**UNIFY `CorePersistence` / `FullPersistence` TYPEDEFS**~~ — replaced `FullPersistence` with imported `CorePersistence`. Done in v13.0.0. |
| B147 | **RFC FIELD COUNT DRIFT DETECTOR** — script that counts WarpGraph instance fields (grep `this._` in constructor) and warns if design RFC field counts diverge. Prevents stale numbers in `warpgraph-decomposition.md`. From B145 PR review. |
### CI & Tooling Pack
@@ -299,7 +271,7 @@ Items parked with explicit conditions for promotion.
| B20 | **TRUST RECORD ROUND-TRIP SNAPSHOT TEST** | Promote if trust record schema changes |
| B21 | **TRUST SCHEMA DISCRIMINATED UNION** | Promote if superRefine causes a bug or blocks a feature |
| B27 | **`TrustKeyStore` PRE-VALIDATED KEY CACHE** | Promote when `verifySignature` appears in any p95 flame graph above 5% of call time |
-| B100 | **MAP vs RECORD ASYMMETRY** — `getNodeProps()` returns Map, `getEdgeProps()` returns Record. Breaking change either way. From B-FEAT-3. | Promote with next major version RFC |
+| ~~B100~~ | ✅ ~~**MAP vs RECORD ASYMMETRY**~~ — `getNodeProps()` now returns `Record`. Done in v13.0.0. | ~~Promote with next major version RFC~~ |
| B101 | **MERMAID `~~~` INVISIBLE-LINK FRAGILITY** — undocumented Mermaid feature for positioning. From B-DIAG-3. | Promote if Mermaid renderer update breaks `~~~` positioning |
---
@@ -312,21 +284,21 @@ B5, B6, B13, B17, B18, B25, B45 — rejected 2026-02-17 with cause recorded in `
## Execution Order
-### Milestones: M10 → M12 → M13 → M14 → M11
+### Milestones: M10 → M12 → M13 → M11 → M14
-1. **M10 SENTINEL** — Trust + sync safety + correctness — DONE except B2 spec
+1. **M10 SENTINEL** — Trust + sync safety + correctness — **DONE**
2. **M12 SCALPEL** — STANK audit cleanup (minus edge prop encoding) — **DONE** (all tasks complete, gate verified)
3. **M13 SCALPEL II** — Edge property canonicalization — **DONE** (internal model complete; wire-format cutover deferred by ADR 3)
-4. **M14 HYGIENE** — Test quality, DRY extraction, SOLID quick-wins — **NEXT** (from HEX_AUDIT)
-5. **M11 COMPASS II** — Developer experience (B2 impl, B3, B11) — after M14
+4. **M11 COMPASS II** — Developer experience (B2 impl, B3, B11) — ✅ **DONE** (v13.0.0), archived
+5. **M14 HYGIENE** — Test quality, DRY extraction, SOLID quick-wins — **NEXT** (from HEX_AUDIT)
### Standalone Priority Sequence
Pick opportunistically between milestones. Recommended order within tiers:
1. ~~**Immediate** (B46, B47, B26, B71, B126)~~ — **ALL DONE.**
-2. **Near-term correctness** (B44, B76, B80, B81, B124) — prioritize items touching core services
-3. **Near-term DX** (B36, B37, B43, B125, B127) — test ergonomics and developer velocity
+2. **Near-term correctness** (B76, B80, B81) — prioritize items touching core services
+3. **Near-term DX** (B36, B37, B43, B127) — test ergonomics and developer velocity
4. **Near-term docs/types** (B34, B35) — alignment and documentation
5. **Near-term tooling** (B12, B48, B49, B53, B54, B57, B28) — remaining type safety items
6. **CI & Tooling Pack** (B83, B85–B88, B119, B123, B128) — batch as one PR
@@ -349,11 +321,11 @@ Pick opportunistically between milestones. Recommended order within tiers:
| **Milestone (M12)** | 18 | B66, B67, B70, B73, B75, B105–B115, B117, B118 |
| **Milestone (M13)** | 1 | B116 (internal: DONE; wire-format: DEFERRED) |
| **Milestone (M14)** | 16 | B130–B145 |
-| **Standalone** | 39 | B12, B19, B22, B28, B34–B37, B43, B44, B48, B49, B53, B54, B57, B76, B79–B81, B83, B85–B88, B95–B99, B102–B104, B119, B123–B125, B127–B129, B146, B147 |
-| **Standalone (done)** | 23 | B26, B46, B47, B50–B52, B55, B71, B72, B77, B78, B82, B84, B89–B94, B120–B122, B126 |
-| **Deferred** | 8 | B4, B7, B16, B20, B21, B27, B100, B101 |
+| **Standalone** | 35 | B12, B19, B22, B28, B34–B37, B43, B48, B49, B53, B54, B57, B76, B79–B81, B83, B85–B88, B95–B99, B102–B104, B119, B123, B127–B129, B147 |
+| **Standalone (done)** | 29 | B26, B44, B46, B47, B50–B52, B55, B71, B72, B77, B78, B82, B84, B89–B94, B100, B120–B122, B124, B125, B126, B146, B148 |
+| **Deferred** | 7 | B4, B7, B16, B20, B21, B27, B101 |
| **Rejected** | 7 | B5, B6, B13, B17, B18, B25, B45 |
-| **Total tracked** | **122** (23 done) | |
+| **Total tracked** | **123** total; 29 standalone done | |
### STANK.md Cross-Reference
@@ -455,11 +427,11 @@ Pick opportunistically between milestones. Recommended order within tiers:
## Final Command
Every milestone has a hard gate. No milestone blurs into the next.
-Execution: M10 SENTINEL → **M12 SCALPEL** → **M13 SCALPEL II** → **M14 HYGIENE** → M11 COMPASS II. Standalone items fill the gaps.
+Execution: M10 SENTINEL → **M12 SCALPEL** → **M13 SCALPEL II** → **M11 COMPASS II** → **M14 HYGIENE**. M11 is complete and archived. Standalone items fill the gaps.
M12 is complete (including T8/T9). M13 internal canonicalization (ADR 1) is complete — canonical `NodePropSet`/`EdgePropSet` semantics, wire gate split, reserved-byte validation, version namespace separation. The persisted wire-format half of B116 is deferred by ADR 2 and governed by ADR 3 readiness gates.
-M14 HYGIENE is the current priority — test hardening, DRY extraction, and SOLID quick-wins from the HEX_AUDIT. M11 follows after M14.
+M14 HYGIENE is the current priority — test hardening, DRY extraction, and SOLID quick-wins from the HEX_AUDIT. M11 is complete and archived in COMPLETED.md.
Rejected items live in `GRAVEYARD.md`. Resurrections require an RFC.
`BACKLOG.md` retired — all intake goes directly into this file (policy in `CLAUDE.md`).
diff --git a/bin/cli/commands/bisect.js b/bin/cli/commands/bisect.js
new file mode 100644
index 00000000..17cf7179
--- /dev/null
+++ b/bin/cli/commands/bisect.js
@@ -0,0 +1,91 @@
+import { execSync } from 'node:child_process';
+import { EXIT_CODES, parseCommandArgs, usageError } from '../infrastructure.js';
+import { bisectSchema } from '../schemas.js';
+import { openGraph } from '../shared.js';
+import BisectService from '../../../src/domain/services/BisectService.js';
+
+/** @typedef {import('../types.js').CliOptions} CliOptions */
+
+const BISECT_OPTIONS = {
+ good: { type: 'string' },
+ bad: { type: 'string' },
+ test: { type: 'string' },
+};
+
+/** @param {string[]} args */
+function parseBisectArgs(args) {
+ const { values } = parseCommandArgs(args, BISECT_OPTIONS, bisectSchema);
+ return values;
+}
+
+/**
+ * Runs a shell command as the bisect test.
+ *
+ * @param {string} testCmd - Shell command to execute
+ * @param {string} sha - Candidate patch SHA (passed as env var)
+ * @param {string} graphName - Graph name (passed as env var)
+ * @returns {boolean} true if the command exits 0 (good), false otherwise (bad)
+ */
+function runTestCommand(testCmd, sha, graphName) {
+ try {
+ execSync(testCmd, {
+ stdio: 'pipe',
+ env: {
+ ...process.env,
+ WARP_BISECT_SHA: sha,
+ WARP_BISECT_GRAPH: graphName,
+ },
+ });
+ return true;
+ } catch (/** @type {unknown} */ err) {
+ // Non-zero exit (err.status is a number) → test says "bad"
+    const asRecord = /** @type {Record<string, unknown>} */ (err);
+ if (err && typeof asRecord.status === 'number') {
+ return false;
+ }
+ // Spawn failure (ENOENT, EACCES, etc.) → rethrow so the user sees the real error
+ throw err;
+ }
+}
+
+/**
+ * Handles the `bisect` command: binary search over patch history.
+ * @param {{options: CliOptions, args: string[]}} params
+ * @returns {Promise<{payload: unknown, exitCode: number}>}
+ */
+export default async function handleBisect({ options, args }) {
+ if (options.writer === 'cli') {
+    throw usageError('bisect requires --writer <id>');
+ }
+
+ const { good, bad, test: testCmd } = parseBisectArgs(args);
+ const { graph, graphName } = await openGraph(options);
+ const writerId = options.writer;
+
+ const bisect = new BisectService({ graph });
+
+ const result = await bisect.run({
+ good,
+ bad,
+ writerId,
+ testFn: (_state, sha) => Promise.resolve(runTestCommand(testCmd, sha, graphName)),
+ });
+
+ if (result.result === 'range-error') {
+ return {
+ payload: { error: { code: 'E_BISECT_RANGE', message: result.message } },
+ exitCode: EXIT_CODES.NOT_FOUND,
+ };
+ }
+
+ const payload = {
+ result: 'found',
+ firstBadPatch: result.firstBadPatch,
+ writerId: result.writerId,
+ lamport: result.lamport,
+ steps: result.steps,
+ totalCandidates: result.totalCandidates,
+ };
+
+ return { payload, exitCode: EXIT_CODES.OK };
+}
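The core invariant behind `BisectService` is an ordinary first-bad binary search: given an ordered chain where every patch at or after the first bad one fails the test, it converges in O(log N) probes. A self-contained sketch of that loop (this mirrors the behavior described in the changelog; it is not the project's actual implementation):

```javascript
// Find the first "bad" entry in an ordered chain using O(log N) probes.
// test(entry) resolves true for good entries, false for bad ones, and the
// chain is assumed monotone: good... good, bad... bad.
async function firstBad(chain, test) {
  let lo = 0;                 // lowest index that could still be first-bad
  let hi = chain.length - 1;  // known-bad index
  let steps = 0;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    steps += 1;
    if (await test(chain[mid])) {
      lo = mid + 1;  // mid is good: first bad lies strictly to the right
    } else {
      hi = mid;      // mid is bad: first bad is mid or earlier
    }
  }
  return { firstBad: chain[lo], steps };
}
```

With an 8-entry chain this takes at most 3 probes, versus 8 materializations for a linear scan.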
diff --git a/bin/cli/commands/registry.js b/bin/cli/commands/registry.js
index 5c8850d0..5b04ae94 100644
--- a/bin/cli/commands/registry.js
+++ b/bin/cli/commands/registry.js
@@ -14,6 +14,7 @@ import handleInstallHooks from './install-hooks.js';
import handleTrust from './trust.js';
import handlePatch from './patch.js';
import handleTree from './tree.js';
+import handleBisect from './bisect.js';
/** @type {Map<string, Function>} */
export const COMMANDS = new Map(/** @type {[string, Function][]} */ ([
@@ -31,6 +32,7 @@ export const COMMANDS = new Map(/** @type {[string, Function][]} */ ([
['trust', handleTrust],
['patch', handlePatch],
['tree', handleTree],
+ ['bisect', handleBisect],
['view', handleView],
['install-hooks', handleInstallHooks],
]));
diff --git a/bin/cli/infrastructure.js b/bin/cli/infrastructure.js
index 48f76fc1..bcd2f21e 100644
--- a/bin/cli/infrastructure.js
+++ b/bin/cli/infrastructure.js
@@ -49,6 +49,7 @@ Commands:
seek Time-travel: step through graph history by Lamport tick
patch Decode and inspect raw patches
tree ASCII tree traversal from root nodes
+ bisect Binary search for first bad patch in writer history
view Interactive TUI graph browser (requires @git-stunts/git-warp-tui)
install-hooks Install post-merge git hook
@@ -119,6 +120,12 @@ Tree options:
--edge Follow only this edge label
--prop Annotate nodes with this property (repeatable)
--max-depth Maximum traversal depth
+
+Bisect options:
+  --good <sha>     Known-good commit SHA (invariant holds)
+  --bad <sha>      Known-bad commit SHA (invariant violated)
+  --test <cmd>     Shell command (exit 0=good, non-zero=bad)
+  --writer <id>    Writer chain to bisect (required)
`;
/**
@@ -147,7 +154,7 @@ export function notFoundError(message) {
return new CliError(message, { code: 'E_NOT_FOUND', exitCode: EXIT_CODES.NOT_FOUND });
}
-export const KNOWN_COMMANDS = ['info', 'query', 'path', 'history', 'check', 'doctor', 'materialize', 'seek', 'verify-audit', 'verify-index', 'reindex', 'trust', 'patch', 'tree', 'install-hooks', 'view'];
+export const KNOWN_COMMANDS = ['info', 'query', 'path', 'history', 'check', 'doctor', 'materialize', 'seek', 'verify-audit', 'verify-index', 'reindex', 'trust', 'patch', 'tree', 'bisect', 'install-hooks', 'view'];
const BASE_OPTIONS = {
repo: { type: 'string', short: 'r' },
diff --git a/bin/cli/schemas.js b/bin/cli/schemas.js
index e624c29e..773788db 100644
--- a/bin/cli/schemas.js
+++ b/bin/cli/schemas.js
@@ -176,6 +176,16 @@ export const seekSchema = z.object({
};
});
+// ============================================================================
+// Bisect
+// ============================================================================
+
+export const bisectSchema = z.object({
+ good: z.string().min(1, 'Missing value for --good').regex(/^[0-9a-f]{40}$/, 'Must be a full 40-character hex SHA'),
+ bad: z.string().min(1, 'Missing value for --bad').regex(/^[0-9a-f]{40}$/, 'Must be a full 40-character hex SHA'),
+ test: z.string().min(1, 'Missing value for --test'),
+}).strict();
+
// ============================================================================
// Verify-index
// ============================================================================
diff --git a/bin/cli/types.js b/bin/cli/types.js
index 7eded63d..54144106 100644
--- a/bin/cli/types.js
+++ b/bin/cli/types.js
@@ -15,7 +15,7 @@
/**
* @typedef {Object} WarpGraphInstance
- * @property {(opts?: {ceiling?: number}) => Promise} materialize
+ * @property {(opts?: {ceiling?: number}) => Promise} materialize
* @property {() => Promise<Array<string>>} getNodes
* @property {() => Promise>} getEdges
* @property {() => Promise} createCheckpoint
diff --git a/contracts/type-surface.m8.json b/contracts/type-surface.m8.json
index 879de404..eb89c7b4 100644
--- a/contracts/type-surface.m8.json
+++ b/contracts/type-surface.m8.json
@@ -61,7 +61,7 @@
"getNodeProps": {
"async": true,
"params": [{ "name": "nodeId", "type": "string" }],
-      "returns": "Promise<Map<string, unknown> | null>"
+      "returns": "Promise<Record<string, unknown> | null>"
},
"getEdgeProps": {
"async": true,
@@ -394,7 +394,7 @@
"instance": {
"hasNode": { "async": true, "params": [{ "name": "nodeId", "type": "string" }], "returns": "Promise<boolean>" },
"getNodes": { "async": true, "params": [], "returns": "Promise<string[]>" },
-    "getNodeProps": { "async": true, "params": [{ "name": "nodeId", "type": "string" }], "returns": "Promise<Map<string, unknown> | null>" },
+    "getNodeProps": { "async": true, "params": [{ "name": "nodeId", "type": "string" }], "returns": "Promise<Record<string, unknown> | null>" },
"getEdges": { "async": true, "params": [], "returns": "Promise }>>" },
"query": { "params": [], "returns": "QueryBuilder" }
},
@@ -412,6 +412,7 @@
"BitmapIndexReader": { "kind": "class" },
"IndexRebuildService": { "kind": "class" },
"HealthCheckService": { "kind": "class" },
+ "BisectService": { "kind": "class" },
"CommitDagTraversalService": { "kind": "class" },
"GraphPersistencePort": { "kind": "abstract-class" },
"IndexStoragePort": { "kind": "abstract-class" },
@@ -546,6 +547,7 @@
"PatchEntry": { "kind": "interface" },
"WarpStateV5": { "kind": "interface" },
"BTR": { "kind": "interface" },
+ "BisectResult": { "kind": "type" },
"BTRVerificationResult": { "kind": "interface" },
"CreateBTROptions": { "kind": "interface" },
"VerifyBTROptions": { "kind": "interface" },
diff --git a/docs/GUIDE.md b/docs/GUIDE.md
index f8284346..46eeee98 100644
--- a/docs/GUIDE.md
+++ b/docs/GUIDE.md
@@ -276,7 +276,7 @@ await graph.hasNode('user:alice'); // true
await graph.getNodes(); // ['user:alice', 'user:bob']
// Get node properties
-await graph.getNodeProps('user:alice'); // Map { 'name' => 'Alice' }
+await graph.getNodeProps('user:alice'); // { name: 'Alice' }
// Get all edges (with their properties)
await graph.getEdges();
@@ -923,7 +923,7 @@ The returned `ObserverView` is read-only and supports the same query/traverse AP
```javascript
const nodes = await view.getNodes();
-const props = await view.getNodeProps('user:alice'); // Map without 'ssn' or 'password'
+const props = await view.getNodeProps('user:alice'); // { name: 'Alice', ... } without 'ssn' or 'password'
const admins = await view.query().match('user:*').where({ role: 'admin' }).run();
const path = await view.traverse.shortestPath('user:alice', 'user:bob', { dir: 'out' });
```
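The B148 prototype-pollution hardening mentioned in the changelog builds property bags with `Object.create(null)`; the difference from a plain object literal can be demonstrated in isolation (illustrative sketch, not the project's code):

```javascript
// A plain object inherits members like 'constructor' and 'hasOwnProperty'
// from Object.prototype, so attacker-controlled keys can collide with them.
const plainBag = {};
// A null-prototype bag has no inherited members at all: the only keys that
// exist are the ones explicitly set on it.
const nullBag = Object.create(null);

const plainLeaks = 'constructor' in plainBag; // true (inherited)
const nullLeaks = 'constructor' in nullBag;   // false (no prototype chain)

nullBag.name = 'Alice'; // normal property writes still work
```

This is why `'key' in props` is a safe membership test on the v13 bags, where on a literal object it could be confused by inherited names.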
diff --git a/docs/ROADMAP/COMPLETED.md b/docs/ROADMAP/COMPLETED.md
index d509a740..6078ded2 100644
--- a/docs/ROADMAP/COMPLETED.md
+++ b/docs/ROADMAP/COMPLETED.md
@@ -67,6 +67,41 @@
---
+## Milestone 11 — COMPASS II
+
+**Theme:** Developer experience
+**Objective:** Ship bisect, public observer API, and batch patch ergonomics.
+**Triage date:** 2026-02-17
+**Completed:** 2026-03-03
+
+### M11.T1 — Causality Bisect (Implementation)
+
+- **Status:** `DONE`
+
+**Items:**
+
+- **B2** ✅ (CAUSALITY BISECT) — `BisectService` + `git warp bisect` CLI. Binary search over writer patch chain. O(log N) materializations. 9 test vectors.
+
+### M11.T2 — Observer API
+
+- **Status:** `DONE`
+
+**Items:**
+
+- **B3** ✅ (OBSERVER API) — `subscribe()` and `watch()` promoted to `@stability stable` with `@since 13.0.0`. Fixed `onError` type to `unknown`. `watch()` pattern type corrected to `string | string[]`.
+
+### M11.T3 — Batch Patch API
+
+- **Status:** `DONE`
+
+**Items:**
+
+- **B11** ✅ (`graph.patchMany()` BATCH API) — sequential batch helper. Each callback sees state from prior commit. Returns array of SHAs. Inherits reentrancy guard.
+
+**M11 Gate:** ✅ All gates met. Bisect correctness verified with 9 test vectors. Observer API stable with JSDoc annotations. patchMany tested with 6 scenarios including reentrancy guard.
+
+---
+
## Milestone 12 — SCALPEL
**Theme:** Comprehensive STANK audit cleanup — correctness, performance & code quality
diff --git a/examples/scripts/explore.js b/examples/scripts/explore.js
index 11f14777..78bd68c9 100755
--- a/examples/scripts/explore.js
+++ b/examples/scripts/explore.js
@@ -66,7 +66,7 @@ async function main() {
for (const nodeId of sortedNodes) {
const props = await graph.getNodeProps(nodeId);
- const printable = props ? mapToObject(props) : {};
+ const printable = props || {};
console.log(` - ${nodeId}`);
if (Object.keys(printable).length > 0) {
console.log(` props: ${JSON.stringify(printable)}`);
diff --git a/examples/scripts/setup.js b/examples/scripts/setup.js
index 6abdb9a2..e78ea891 100755
--- a/examples/scripts/setup.js
+++ b/examples/scripts/setup.js
@@ -113,9 +113,9 @@ async function main() {
// Access node properties
const aliceProps = await graph.getNodeProps('user:alice');
const postProps = await graph.getNodeProps('post:1');
- const aliceName = aliceProps?.get('name');
- const aliceEmail = aliceProps?.get('email');
- const postTitle = postProps?.get('title');
+ const aliceName = aliceProps?.name;
+ const aliceEmail = aliceProps?.email;
+ const postTitle = postProps?.title;
console.log(`\n Alice: name="${aliceName}", email="${aliceEmail}"`);
console.log(` Post 1: title="${postTitle}"`);
diff --git a/index.d.ts b/index.d.ts
index 267aabf4..b2f630cd 100644
--- a/index.d.ts
+++ b/index.d.ts
@@ -982,6 +982,36 @@ export class CommitDagTraversalService {
*/
export { CommitDagTraversalService as TraversalService };
+/**
+ * Binary search over WARP graph history.
+ * Finds the first bad patch between a known-good and known-bad commit.
+ * @since 13.0.0
+ */
+export class BisectService {
+ constructor(options: { graph: { getWriterPatches: WarpGraph['getWriterPatches']; materialize: WarpGraph['materialize'] } });
+
+ /**
+ * Runs bisect on a single writer's patch chain.
+ */
+ run(options: {
+ good: string;
+ bad: string;
+ writerId: string;
+    testFn: (state: WarpStateV5, sha: string) => Promise<boolean>;
+  }): Promise<BisectResult>;
+}
+
+/**
+ * Result of a bisect operation.
+ *
+ * Discriminated union on `result`:
+ * - `'found'`: the first bad patch was identified.
+ * - `'range-error'`: the good/bad range was invalid (e.g., SHAs not found, same SHA, not ancestor).
+ */
+export type BisectResult =
+ | { result: 'found'; firstBadPatch: string; writerId: string; lamport: number; steps: number; totalCandidates: number }
+ | { result: 'range-error'; message: string };
+
/**
* Error class for graph traversal operations.
*/
@@ -1223,7 +1253,7 @@ export class ObserverView {
getNodes(): Promise<string[]>;
/** Gets filtered properties for a visible node (null if hidden or missing) */
-  getNodeProps(nodeId: string): Promise<Map<string, unknown> | null>;
+  getNodeProps(nodeId: string): Promise<Record<string, unknown> | null>;
/** Gets all visible edges (both endpoints must match the observer pattern) */
getEdges(): Promise<Array<{ from: string; to: string; label: string; props: Record<string, unknown> }>>;
@@ -1656,6 +1686,15 @@ export default class WarpGraph {
*/
patch(build: (patch: PatchBuilderV2) => void | Promise<void>): Promise<string>;
+ /**
+ * Applies multiple patches sequentially. Each callback sees the state
+ * produced by the previous commit.
+ * @since 13.0.0
+ */
+ patchMany(
+    ...builds: Array<(patch: PatchBuilderV2) => void | Promise<void>>
+  ): Promise<string[]>;
+
/**
* Returns patches from a writer's ref chain.
*/
@@ -1677,7 +1716,7 @@ export default class WarpGraph {
/**
* Gets all properties for a node from the materialized state.
*/
-  getNodeProps(nodeId: string): Promise<Map<string, unknown> | null>;
+  getNodeProps(nodeId: string): Promise<Record<string, unknown> | null>;
/**
* Returns the number of property entries in the materialized state.
@@ -1906,19 +1945,27 @@ export default class WarpGraph {
/** Returns a lightweight status snapshot of the graph. */
status(): Promise;
- /** Subscribes to graph changes after each materialize(). */
+ /**
+ * Subscribes to graph changes after each materialize().
+ * @since 13.0.0
+ * @stability stable
+ */
subscribe(options: {
onChange: (diff: StateDiffResult) => void;
- onError?: (error: Error) => void;
+ onError?: (error: unknown) => void;
replay?: boolean;
}): { unsubscribe: () => void };
- /** Filtered watcher that only fires for changes matching a glob pattern. */
+ /**
+ * Filtered watcher that only fires for changes matching a glob pattern.
+ * @since 13.0.0
+ * @stability stable
+ */
watch(
pattern: string | string[],
options: {
onChange: (diff: StateDiffResult) => void;
- onError?: (error: Error) => void;
+ onError?: (error: unknown) => void;
poll?: number;
},
): { unsubscribe: () => void };
diff --git a/index.js b/index.js
index af36c64b..4483b0ee 100644
--- a/index.js
+++ b/index.js
@@ -103,6 +103,8 @@ import {
deserializeWormhole,
} from './src/domain/services/WormholeService.js';
+import BisectService from './src/domain/services/BisectService.js';
+
const TraversalService = CommitDagTraversalService;
export {
@@ -116,6 +118,7 @@ export {
HealthStatus,
CommitDagTraversalService,
TraversalService,
+ BisectService,
GraphPersistencePort,
IndexStoragePort,
diff --git a/jsr.json b/jsr.json
index 42e314b6..1ea840e4 100644
--- a/jsr.json
+++ b/jsr.json
@@ -1,6 +1,6 @@
{
"name": "@git-stunts/git-warp",
- "version": "12.4.1",
+ "version": "13.0.0",
"imports": {
"roaring": "npm:roaring@^2.7.0"
},
diff --git a/package.json b/package.json
index 008d7ebc..95d87017 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "@git-stunts/git-warp",
- "version": "12.4.1",
+ "version": "13.0.0",
"description": "Deterministic WARP graph over Git: graph-native storage, traversal, and tooling.",
"type": "module",
"license": "Apache-2.0",
diff --git a/src/domain/WarpGraph.js b/src/domain/WarpGraph.js
index 990d1a58..1f978bd1 100644
--- a/src/domain/WarpGraph.js
+++ b/src/domain/WarpGraph.js
@@ -30,9 +30,7 @@ import * as patchMethods from './warp/patch.methods.js';
import * as materializeMethods from './warp/materialize.methods.js';
import * as materializeAdvancedMethods from './warp/materializeAdvanced.methods.js';
-/**
- * @typedef {import('../ports/CommitPort.js').default & import('../ports/BlobPort.js').default & import('../ports/TreePort.js').default & import('../ports/RefPort.js').default} FullPersistence
- */
+/** @typedef {import('./types/WarpPersistence.js').CorePersistence} CorePersistence */
const DEFAULT_ADJACENCY_CACHE_SIZE = 3;
@@ -50,11 +48,11 @@ const DEFAULT_ADJACENCY_CACHE_SIZE = 3;
export default class WarpGraph {
/**
* @private
- * @param {{ persistence: FullPersistence, graphName: string, writerId: string, gcPolicy?: Record<string, unknown>, adjacencyCacheSize?: number, checkpointPolicy?: {every: number}, autoMaterialize?: boolean, onDeleteWithData?: 'reject'|'cascade'|'warn', logger?: import('../ports/LoggerPort.js').default, clock?: import('../ports/ClockPort.js').default, crypto?: import('../ports/CryptoPort.js').default, codec?: import('../ports/CodecPort.js').default, seekCache?: import('../ports/SeekCachePort.js').default, audit?: boolean }} options
+ * @param {{ persistence: CorePersistence, graphName: string, writerId: string, gcPolicy?: Record<string, unknown>, adjacencyCacheSize?: number, checkpointPolicy?: {every: number}, autoMaterialize?: boolean, onDeleteWithData?: 'reject'|'cascade'|'warn', logger?: import('../ports/LoggerPort.js').default, clock?: import('../ports/ClockPort.js').default, crypto?: import('../ports/CryptoPort.js').default, codec?: import('../ports/CodecPort.js').default, seekCache?: import('../ports/SeekCachePort.js').default, audit?: boolean }} options
*/
constructor({ persistence, graphName, writerId, gcPolicy = {}, adjacencyCacheSize = DEFAULT_ADJACENCY_CACHE_SIZE, checkpointPolicy, autoMaterialize = true, onDeleteWithData = 'warn', logger, clock, crypto, codec, seekCache, audit = false }) {
- /** @type {FullPersistence} */
- this._persistence = /** @type {FullPersistence} */ (persistence);
+ /** @type {CorePersistence} */
+ this._persistence = /** @type {CorePersistence} */ (persistence);
/** @type {string} */
this._graphName = graphName;
@@ -243,7 +241,7 @@ export default class WarpGraph {
/**
* Opens a multi-writer graph.
*
- * @param {{ persistence: FullPersistence, graphName: string, writerId: string, gcPolicy?: Record<string, unknown>, adjacencyCacheSize?: number, checkpointPolicy?: {every: number}, autoMaterialize?: boolean, onDeleteWithData?: 'reject'|'cascade'|'warn', logger?: import('../ports/LoggerPort.js').default, clock?: import('../ports/ClockPort.js').default, crypto?: import('../ports/CryptoPort.js').default, codec?: import('../ports/CodecPort.js').default, seekCache?: import('../ports/SeekCachePort.js').default, audit?: boolean }} options
+ * @param {{ persistence: CorePersistence, graphName: string, writerId: string, gcPolicy?: Record<string, unknown>, adjacencyCacheSize?: number, checkpointPolicy?: {every: number}, autoMaterialize?: boolean, onDeleteWithData?: 'reject'|'cascade'|'warn', logger?: import('../ports/LoggerPort.js').default, clock?: import('../ports/ClockPort.js').default, crypto?: import('../ports/CryptoPort.js').default, codec?: import('../ports/CodecPort.js').default, seekCache?: import('../ports/SeekCachePort.js').default, audit?: boolean }} options
* @returns {Promise<WarpGraph>} The opened graph instance
* @throws {Error} If graphName, writerId, checkpointPolicy, or onDeleteWithData is invalid
*
@@ -299,7 +297,7 @@ export default class WarpGraph {
// Initialize audit service if enabled
if (graph._audit) {
graph._auditService = new AuditReceiptService({
- persistence: /** @type {import('./types/WarpPersistence.js').CorePersistence} */ (persistence),
+ persistence: /** @type {CorePersistence} */ (persistence),
graphName,
writerId,
codec: graph._codec,
@@ -330,7 +328,7 @@ export default class WarpGraph {
/**
* Gets the persistence adapter.
- * @returns {FullPersistence} The persistence adapter
+ * @returns {CorePersistence} The persistence adapter
*/
get persistence() {
return this._persistence;
diff --git a/src/domain/services/BisectService.js b/src/domain/services/BisectService.js
new file mode 100644
index 00000000..00dfddce
--- /dev/null
+++ b/src/domain/services/BisectService.js
@@ -0,0 +1,149 @@
+/**
+ * BisectService — binary search over WARP graph history.
+ *
+ * Given a known-good commit SHA and a known-bad commit SHA on a writer's
+ * patch chain, finds the first bad patch via binary search, calling a
+ * user-supplied test function at each midpoint.
+ *
+ * @module domain/services/BisectService
+ */
+
+/**
+ * Result of a bisect operation.
+ *
+ * Discriminated union on `result`:
+ * - `'found'`: firstBadPatch, writerId, lamport, steps, totalCandidates are present.
+ * - `'range-error'`: message is present.
+ *
+ * See `index.d.ts` for the canonical discriminated-union type.
+ *
+ * @typedef {Object} BisectResult
+ * @property {'found'|'range-error'} result - Discriminant tag
+ * @property {string} [firstBadPatch] - SHA of first bad patch (when result === 'found')
+ * @property {string} [writerId] - Writer who authored the bad patch (when result === 'found')
+ * @property {number} [lamport] - Lamport tick of the bad patch (when result === 'found')
+ * @property {number} [steps] - Number of bisect steps performed (when result === 'found')
+ * @property {number} [totalCandidates] - Initial candidate count (when result === 'found')
+ * @property {string} [message] - Human-readable error message (when result === 'range-error')
+ */
+
+/**
+ * Builds a "found" result from a candidate entry.
+ *
+ * @param {{writerId: string, entry: {sha: string, patch: {lamport: number}}, steps: number, totalCandidates: number}} opts
+ * @returns {BisectResult}
+ */
+function foundResult({ writerId, entry, steps, totalCandidates }) {
+ return {
+ result: 'found',
+ firstBadPatch: entry.sha,
+ writerId,
+ lamport: entry.patch.lamport,
+ steps,
+ totalCandidates,
+ };
+}
+
+/**
+ * Resolves the candidate slice between good and bad SHAs.
+ *
+ * @param {Array<{patch: {lamport: number}, sha: string}>} patches - Chronological patch chain
+ * @param {string} good - Known-good SHA
+ * @param {string} bad - Known-bad SHA
+ * @returns {{candidates: Array<{patch: {lamport: number}, sha: string}>}|{error: string}}
+ */
+function resolveCandidates(patches, good, bad) {
+ const goodIdx = patches.findIndex(p => p.sha === good);
+ const badIdx = patches.findIndex(p => p.sha === bad);
+
+ if (goodIdx === -1 || badIdx === -1) {
+ return { error: 'good or bad SHA not found in writer chain' };
+ }
+ if (goodIdx >= badIdx) {
+ return { error: 'good is not an ancestor of bad' };
+ }
+
+ // goodIdx < badIdx guarantees at least one candidate in the slice.
+ const candidates = patches.slice(goodIdx + 1, badIdx + 1);
+ return { candidates };
+}
+
+/**
+ * @typedef {Object} BisectGraph
+ * @property {(writerId: string) => Promise<Array<{patch: {lamport: number}, sha: string}>>} getWriterPatches
+ * @property {(opts: {ceiling: number}) => Promise<import('./JoinReducer.js').WarpStateV5>} materialize
+ */
+
+export default class BisectService {
+ /**
+ * @param {{ graph: BisectGraph }} options
+ */
+ constructor({ graph }) {
+ this._graph = graph;
+ }
+
+ /**
+ * Runs bisect on a single writer's patch chain.
+ *
+ * @param {{ good: string, bad: string, writerId: string, testFn: (state: import('./JoinReducer.js').WarpStateV5, sha: string) => Promise<boolean> }} options
+ * - good: SHA of known-good commit
+ * - bad: SHA of known-bad commit
+ * - writerId: writer whose chain to bisect
+ * - testFn: async function returning true if state is "good", false if "bad"
+ * @returns {Promise<BisectResult>}
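+ * @example
+ * // Hypothetical usage sketch — goodSha, badSha, and invariantHolds are
+ * // illustrative names, not part of the library:
+ * const bisect = new BisectService({ graph });
+ * const res = await bisect.run({
+ *   good: goodSha,
+ *   bad: badSha,
+ *   writerId: 'alice',
+ *   testFn: async (state, sha) => invariantHolds(state),
+ * });
+ * if (res.result === 'found') {
+ *   console.log('first bad patch:', res.firstBadPatch);
+ * }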
+ */
+ async run({ good, bad, writerId, testFn }) {
+ if (good === bad) {
+ return { result: 'range-error', message: 'good and bad SHAs are the same' };
+ }
+
+ const patches = await this._graph.getWriterPatches(writerId);
+ const resolved = resolveCandidates(patches, good, bad);
+
+ if ('error' in resolved) {
+ return { result: 'range-error', message: resolved.error };
+ }
+
+ const { candidates } = resolved;
+
+ // Single candidate — it must be the first bad patch
+ if (candidates.length === 1) {
+ return foundResult({ writerId, entry: candidates[0], steps: 0, totalCandidates: 1 });
+ }
+
+ // Binary search over the candidate range
+ const { index, steps } = await this._binarySearch(candidates, testFn);
+ return foundResult({ writerId, entry: candidates[index], steps, totalCandidates: candidates.length });
+ }
+
+ /**
+ * Performs binary search over candidates, materializing at each midpoint.
+ *
+ * @param {Array<{patch: {lamport: number}, sha: string}>} candidates
+ * @param {(state: import('./JoinReducer.js').WarpStateV5, sha: string) => Promise<boolean>} testFn
+ * @returns {Promise<{index: number, steps: number}>}
+ * @private
+ */
+ async _binarySearch(candidates, testFn) {
+ let lo = 0;
+ let hi = candidates.length - 1;
+ let steps = 0;
+
+ while (lo < hi) {
+ const mid = Math.floor((lo + hi) / 2);
+ const candidate = candidates[mid];
+ steps++;
+
+ const state = await this._graph.materialize({ ceiling: candidate.patch.lamport });
+ const isGood = await testFn(state, candidate.sha);
+
+ if (isGood) {
+ lo = mid + 1;
+ } else {
+ hi = mid;
+ }
+ }
+
+ return { index: lo, steps };
+ }
+}
diff --git a/src/domain/services/ObserverView.js b/src/domain/services/ObserverView.js
index 751f8ae7..5f725f0c 100644
--- a/src/domain/services/ObserverView.js
+++ b/src/domain/services/ObserverView.js
@@ -16,23 +16,24 @@ import { decodeEdgeKey } from './KeyCodec.js';
import { matchGlob } from '../utils/matchGlob.js';
/**
- * Filters a properties Map based on expose and redact lists.
+ * Filters a properties Record based on expose and redact lists.
*
* - If `redact` contains a key, it is excluded (highest priority).
* - If `expose` is provided and non-empty, only keys in `expose` are included.
* - If `expose` is absent/empty, all non-redacted keys are included.
*
- * @param {Map<string, unknown>} propsMap - The full properties Map
+ * @param {Record<string, unknown>} propsRecord - The full properties object
* @param {string[]|undefined} expose - Whitelist of property keys to include
* @param {string[]|undefined} redact - Blacklist of property keys to exclude
- * @returns {Map<string, unknown>} Filtered properties Map
+ * @returns {Record<string, unknown>} Filtered properties object
*/
-function filterProps(propsMap, expose, redact) {
+function filterProps(propsRecord, expose, redact) {
const redactSet = redact && redact.length > 0 ? new Set(redact) : null;
const exposeSet = expose && expose.length > 0 ? new Set(expose) : null;
- const filtered = new Map();
- for (const [key, value] of propsMap) {
+ /** @type {Record<string, unknown>} */
+ const filtered = {};
+ for (const [key, value] of Object.entries(propsRecord)) {
// Redact takes precedence
if (redactSet && redactSet.has(key)) {
continue;
@@ -41,7 +42,7 @@ function filterProps(propsMap, expose, redact) {
if (exposeSet && !exposeSet.has(key)) {
continue;
}
- filtered.set(key, value);
+ filtered[key] = value;
}
return filtered;
}
@@ -260,17 +261,17 @@ export default class ObserverView {
* the observer pattern.
*
* @param {string} nodeId - The node ID to get properties for
- * @returns {Promise<Map<string, unknown>|null>} Filtered properties Map, or null
+ * @returns {Promise<Record<string, unknown>|null>} Filtered properties object, or null
*/
async getNodeProps(nodeId) {
if (!matchGlob(this._matchPattern, nodeId)) {
return null;
}
- const propsMap = await this._graph.getNodeProps(nodeId);
- if (!propsMap) {
+ const propsRecord = await this._graph.getNodeProps(nodeId);
+ if (!propsRecord) {
return null;
}
- return filterProps(propsMap, this._expose, this._redact);
+ return filterProps(propsRecord, this._expose, this._redact);
}
// ===========================================================================
@@ -291,10 +292,8 @@ export default class ObserverView {
(e) => matchGlob(this._matchPattern, e.from) && matchGlob(this._matchPattern, e.to)
)
.map((e) => {
- const propsMap = new Map(Object.entries(e.props));
- const filtered = filterProps(propsMap, this._expose, this._redact);
- const filteredObj = Object.fromEntries(filtered);
- return { ...e, props: filteredObj };
+ const filtered = filterProps(e.props, this._expose, this._redact);
+ return { ...e, props: filtered };
});
}
@@ -312,7 +311,7 @@ export default class ObserverView {
* Cast safety: QueryBuilder requires the following methods from the
* graph-like object it wraps:
* - getNodes(): Promise<string[]> (line ~680 in QueryBuilder)
- * - getNodeProps(nodeId): Promise<Map<string, unknown> | null> (lines ~691, ~757, ~806 in QueryBuilder)
+ * - getNodeProps(nodeId): Promise<Record<string, unknown> | null> (lines ~691, ~757, ~806 in QueryBuilder)
* - _materializeGraph(): Promise<{adjacency, stateHash}> (line ~678 in QueryBuilder)
* ObserverView implements all three: getNodes() at line ~254, getNodeProps() at line ~268,
* _materializeGraph() at line ~214.
diff --git a/src/domain/services/QueryBuilder.js b/src/domain/services/QueryBuilder.js
index 10a149ca..244656ce 100644
--- a/src/domain/services/QueryBuilder.js
+++ b/src/domain/services/QueryBuilder.js
@@ -255,21 +255,21 @@ function cloneValue(value) {
}
/**
- * Builds a frozen, deterministic snapshot of node properties from a Map.
+ * Builds a frozen, deterministic snapshot of node properties from a Record.
*
* Keys are sorted lexicographically for deterministic iteration order.
* Values are deep-cloned to prevent mutation of the original state.
*
- * @param {Map<string, unknown>} propsMap - Map of property names to values
+ * @param {Record<string, unknown>} propsRecord - Object of property names to values
* @returns {Readonly<Record<string, unknown>>} Frozen object with sorted keys and cloned values
* @private
*/
-function buildPropsSnapshot(propsMap) {
+function buildPropsSnapshot(propsRecord) {
/** @type {Record<string, unknown>} */
- const props = {};
- const keys = [...propsMap.keys()].sort();
+ const props = Object.create(null);
+ const keys = Object.keys(propsRecord).sort();
for (const key of keys) {
- props[key] = cloneValue(propsMap.get(key));
+ props[key] = cloneValue(propsRecord[key]);
}
return deepFreeze(props);
}
@@ -307,12 +307,12 @@ function buildEdgesSnapshot(edges, directionKey) {
* The snapshot includes the node's ID, properties, outgoing edges, and incoming edges.
* All data is deeply frozen to prevent mutation.
*
- * @param {{ id: string, propsMap: Map<string, unknown>, edgesOut: Array<{label: string, neighborId: string}>, edgesIn: Array<{label: string, neighborId: string}> }} params - Node data
+ * @param {{ id: string, propsRecord: Record<string, unknown>, edgesOut: Array<{label: string, neighborId: string}>, edgesIn: Array<{label: string, neighborId: string}> }} params - Node data
* @returns {Readonly} Frozen node snapshot
* @private
*/
-function createNodeSnapshot({ id, propsMap, edgesOut, edgesIn }) {
- const props = buildPropsSnapshot(propsMap);
+function createNodeSnapshot({ id, propsRecord, edgesOut, edgesIn }) {
+ const props = buildPropsSnapshot(propsRecord);
const edgesOutSnapshot = buildEdgesSnapshot(edgesOut, 'to');
const edgesInSnapshot = buildEdgesSnapshot(edgesIn, 'from');
@@ -665,16 +665,16 @@ export default class QueryBuilder {
const pattern = this._pattern ?? DEFAULT_PATTERN;
// Per-run props memo to avoid redundant getNodeProps calls
- /** @type {Map<string, Map<string, unknown>>} */
+ /** @type {Map<string, Record<string, unknown>>} */
const propsMemo = new Map();
const getProps = async (/** @type {string} */ nodeId) => {
const cached = propsMemo.get(nodeId);
if (cached !== undefined) {
return cached;
}
- const propsMap = (await this._graph.getNodeProps(nodeId)) || new Map();
- propsMemo.set(nodeId, propsMap);
- return propsMap;
+ const propsRecord = (await this._graph.getNodeProps(nodeId)) || Object.create(null);
+ propsMemo.set(nodeId, propsRecord);
+ return propsRecord;
};
let workingSet;
@@ -683,12 +683,12 @@ export default class QueryBuilder {
for (const op of this._operations) {
if (op.type === 'where') {
const snapshots = await batchMap(workingSet, async (nodeId) => {
- const propsMap = await getProps(nodeId);
+ const propsRecord = await getProps(nodeId);
const edgesOut = adjacency.outgoing.get(nodeId) || [];
const edgesIn = adjacency.incoming.get(nodeId) || [];
return {
nodeId,
- snapshot: createNodeSnapshot({ id: nodeId, propsMap, edgesOut, edgesIn }),
+ snapshot: createNodeSnapshot({ id: nodeId, propsRecord, edgesOut, edgesIn }),
};
});
const predicate = /** @type {(node: QueryNodeSnapshot) => boolean} */ (op.fn);
@@ -747,8 +747,8 @@ export default class QueryBuilder {
entry.id = nodeId;
}
if (includeProps) {
- const propsMap = await getProps(nodeId);
- const props = buildPropsSnapshot(propsMap);
+ const propsRecord = await getProps(nodeId);
+ const props = buildPropsSnapshot(propsRecord);
if (selectFields || Object.keys(props).length > 0) {
entry.props = props;
}
@@ -768,7 +768,7 @@ export default class QueryBuilder {
*
* @param {string[]} workingSet - Array of matched node IDs
* @param {string} stateHash - Hash of the materialized state
- * @param {(nodeId: string) => Promise<Map<string, unknown>>} getProps - Memoized props fetcher
+ * @param {(nodeId: string) => Promise<Record<string, unknown>>} getProps - Memoized props fetcher
* @returns {Promise<Record<string, unknown>>} Object containing stateHash and requested aggregation values
* @private
*/
@@ -798,10 +798,10 @@ export default class QueryBuilder {
// Pre-fetch all props with bounded concurrency
const propsList = await batchMap(workingSet, getProps);
- for (const propsMap of propsList) {
+ for (const propsRecord of propsList) {
for (const { segments, values } of propsByAgg.values()) {
/** @type {unknown} */
- let value = propsMap.get(segments[0]);
+ let value = propsRecord[segments[0]];
for (let i = 1; i < segments.length; i++) {
if (value && typeof value === 'object') {
value = /** @type {Record<string, unknown>} */ (value)[segments[i]];
diff --git a/src/domain/warp/_wiredMethods.d.ts b/src/domain/warp/_wiredMethods.d.ts
index 42860fa1..93b702ac 100644
--- a/src/domain/warp/_wiredMethods.d.ts
+++ b/src/domain/warp/_wiredMethods.d.ts
@@ -160,7 +160,7 @@ declare module '../WarpGraph.js' {
export default interface WarpGraph {
// ── query.methods.js ──────────────────────────────────────────────────
hasNode(nodeId: string): Promise<boolean>;
- getNodeProps(nodeId: string): Promise<Map<string, unknown> | null>;
+ getNodeProps(nodeId: string): Promise<Record<string, unknown> | null>;
getEdgeProps(from: string, to: string, label: string): Promise<Record<string, unknown> | null>;
neighbors(nodeId: string, direction?: 'outgoing' | 'incoming' | 'both', edgeLabel?: string): Promise<Array<{ nodeId: string; label: string; direction: 'outgoing' | 'incoming' }>>;
getStateSnapshot(): Promise;
@@ -172,8 +172,8 @@ declare module '../WarpGraph.js' {
translationCost(configA: ObserverConfig, configB: ObserverConfig): Promise;
// ── subscribe.methods.js ──────────────────────────────────────────────
- subscribe(options: { onChange: (diff: StateDiffResult) => void; onError?: (error: Error) => void; replay?: boolean }): { unsubscribe: () => void };
- watch(pattern: string, options: { onChange: (diff: StateDiffResult) => void; onError?: (error: Error) => void; poll?: number }): { unsubscribe: () => void };
+ subscribe(options: { onChange: (diff: StateDiffResult) => void; onError?: (error: unknown) => void; replay?: boolean }): { unsubscribe: () => void };
+ watch(pattern: string | string[], options: { onChange: (diff: StateDiffResult) => void; onError?: (error: unknown) => void; poll?: number }): { unsubscribe: () => void };
_notifySubscribers(diff: StateDiffResult, currentState: WarpStateV5): void;
// ── provenance.methods.js ─────────────────────────────────────────────
@@ -226,6 +226,7 @@ declare module '../WarpGraph.js' {
// ── patch.methods.js ──────────────────────────────────────────────────
createPatch(): Promise<PatchBuilderV2>;
patch(build: (p: PatchBuilderV2) => void | Promise<void>): Promise<string>;
+ patchMany(...builds: Array<(p: PatchBuilderV2) => void | Promise<void>>): Promise<string[]>;
_nextLamport(): Promise<{ lamport: number; parentSha: string | null }>;
_loadWriterPatches(writerId: string, stopAtSha?: string | null): Promise>;
getWriterPatches(writerId: string, stopAtSha?: string | null): Promise>;
diff --git a/src/domain/warp/patch.methods.js b/src/domain/warp/patch.methods.js
index b72ee3e2..5ab95e5c 100644
--- a/src/domain/warp/patch.methods.js
+++ b/src/domain/warp/patch.methods.js
@@ -91,6 +91,38 @@ export async function patch(build) {
}
}
+/**
+ * Applies multiple patches sequentially.
+ *
+ * Each callback sees the state produced by the previous commit, so later
+ * patches can depend on earlier ones. Uses `graph.patch()` internally,
+ * inheriting CAS, eager re-materialize, and reentrancy-guard semantics.
+ *
+ * Returns an empty array (not an error) when called with no arguments.
+ *
+ * @public
+ * @since 13.0.0
+ * @this {import('../WarpGraph.js').default}
+ * @param {...((p: PatchBuilderV2) => void | Promise<void>)} builds - Patch callbacks
+ * @returns {Promise<string[]>} Commit SHAs in order of application
+ *
+ * @example
+ * const shas = await graph.patchMany(
+ * p => p.addNode('user:alice').setProperty('user:alice', 'name', 'Alice'),
+ * p => p.addNode('user:bob').setProperty('user:bob', 'name', 'Bob'),
+ * );
+ */
+export async function patchMany(...builds) {
+ if (builds.length === 0) {
+ return [];
+ }
+ const shas = [];
+ for (const build of builds) {
+ shas.push(await this.patch(build));
+ }
+ return shas;
+}
+
/**
* Gets the next lamport timestamp and current parent SHA for this writer.
* Reads from the current ref chain to determine values.
diff --git a/src/domain/warp/query.methods.js b/src/domain/warp/query.methods.js
index d9b602f3..77dea119 100644
--- a/src/domain/warp/query.methods.js
+++ b/src/domain/warp/query.methods.js
@@ -37,7 +37,7 @@ export async function hasNode(nodeId) {
*
* @this {import('../WarpGraph.js').default}
* @param {string} nodeId - The node ID to get properties for
- * @returns {Promise<Map<string, unknown>|null>} Map of property key → value, or null if node doesn't exist
+ * @returns {Promise<Record<string, unknown>|null>} Object of property key → value, or null if node doesn't exist
* @throws {import('../errors/QueryError.js').default} If no cached state exists (code: `E_NO_STATE`)
*/
export async function getNodeProps(nodeId) {
@@ -47,7 +47,10 @@ export async function getNodeProps(nodeId) {
if (this._propertyReader && this._logicalIndex?.isAlive(nodeId)) {
try {
const record = await this._propertyReader.getNodeProps(nodeId);
- return record ? new Map(Object.entries(record)) : new Map();
+ if (record !== null) {
+ return record;
+ }
+ // null → index has no data for this node; fall through to linear scan
} catch {
// Fall through to linear scan on index read failures.
}
@@ -60,11 +63,12 @@ export async function getNodeProps(nodeId) {
return null;
}
- const props = new Map();
+ /** @type {Record<string, unknown>} */
+ const props = Object.create(null);
for (const [propKey, register] of s.prop) {
const decoded = decodePropKey(propKey);
if (decoded.nodeId === nodeId) {
- props.set(decoded.propKey, register.value);
+ props[decoded.propKey] = register.value;
}
}
@@ -98,7 +102,7 @@ export async function getEdgeProps(from, to, label) {
const birthEvent = s.edgeBirthEvent?.get(edgeKey);
/** @type {Record<string, unknown>} */
- const props = {};
+ const props = Object.create(null);
for (const [propKey, register] of s.prop) {
if (!isEdgePropKey(propKey)) {
continue;
@@ -265,7 +269,7 @@ export async function getEdges() {
let bag = edgePropsByKey.get(ek);
if (!bag) {
- bag = {};
+ bag = Object.create(null);
edgePropsByKey.set(ek, bag);
}
bag[decoded.propKey] = register.value;
@@ -276,7 +280,7 @@ export async function getEdges() {
const { from, to, label } = decodeEdgeKey(edgeKey);
if (orsetContains(s.nodeAlive, from) &&
orsetContains(s.nodeAlive, to)) {
- const props = edgePropsByKey.get(edgeKey) || {};
+ const props = edgePropsByKey.get(edgeKey) || Object.create(null);
edges.push({ from, to, label, props });
}
}
@@ -351,8 +355,7 @@ export async function getContentOid(nodeId) {
if (!props) {
return null;
}
- // getNodeProps returns a Map — use .get() for property access
- const oid = props.get(CONTENT_PROPERTY_KEY);
+ const oid = props[CONTENT_PROPERTY_KEY];
return (typeof oid === 'string') ? oid : null;
}
diff --git a/src/domain/warp/subscribe.methods.js b/src/domain/warp/subscribe.methods.js
index c72fbe3a..251b1be2 100644
--- a/src/domain/warp/subscribe.methods.js
+++ b/src/domain/warp/subscribe.methods.js
@@ -21,6 +21,9 @@ import { matchGlob } from '../utils/matchGlob.js';
* Errors thrown by handlers are caught and forwarded to `onError` if provided.
* One handler's error does not prevent other handlers from being called.
*
+ * @public
+ * @since 13.0.0 (stable)
+ * @stability stable
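+ * @example
+ * // Hypothetical usage sketch:
+ * const sub = graph.subscribe({
+ *   onChange: (diff) => console.log('graph changed', diff),
+ *   onError: (err) => console.error(err),
+ * });
+ * // Later, stop receiving notifications:
+ * sub.unsubscribe();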
* @this {import('../WarpGraph.js').default}
* @param {{ onChange: (diff: import('../services/StateDiff.js').StateDiffResult) => void, onError?: (error: unknown) => void, replay?: boolean }} options - Subscription options
* @returns {{unsubscribe: () => void}} Subscription handle
@@ -98,6 +101,9 @@ export function subscribe({ onChange, onError, replay = false }) {
* if the frontier has changed (e.g., remote writes detected). The poll interval must
* be at least 1000ms.
*
+ * @public
+ * @since 13.0.0 (stable)
+ * @stability stable
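+ * @example
+ * // Hypothetical usage sketch; per the note above, poll must be >= 1000ms:
+ * const watcher = graph.watch('user:*', {
+ *   onChange: (diff) => console.log('user nodes changed', diff),
+ *   poll: 1000,
+ * });
+ * watcher.unsubscribe();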
* @this {import('../WarpGraph.js').default}
* @param {string|string[]} pattern - Glob pattern(s) (e.g., 'user:*', 'order:123', '*')
* @param {{ onChange: (diff: import('../services/StateDiff.js').StateDiffResult) => void, onError?: (error: unknown) => void, poll?: number }} options - Watch options
diff --git a/test/integration/api/edge-cases.test.js b/test/integration/api/edge-cases.test.js
index 6ed74702..2b0c9d05 100644
--- a/test/integration/api/edge-cases.test.js
+++ b/test/integration/api/edge-cases.test.js
@@ -63,7 +63,7 @@ describe('API: Edge Cases', () => {
expect(nodes).toContain('user:日本語');
const props = await graph.getNodeProps('user:café');
- expect(props.get('city')).toBe('Paris');
+ expect(props.city).toBe('Paris');
});
it('large property values are stored and retrieved', async () => {
@@ -77,7 +77,7 @@ describe('API: Edge Cases', () => {
await graph.materialize();
const props = await graph.getNodeProps('big');
- expect(props.get('data')).toBe(bigValue);
+ expect(props.data).toBe(bigValue);
});
it('numeric and boolean property values', async () => {
@@ -92,8 +92,8 @@ describe('API: Edge Cases', () => {
await graph.materialize();
const props = await graph.getNodeProps('n');
- expect(props.get('count')).toBe(42);
- expect(props.get('pi')).toBeCloseTo(3.14);
- expect(props.get('active')).toBe(true);
+ expect(props.count).toBe(42);
+ expect(props.pi).toBeCloseTo(3.14);
+ expect(props.active).toBe(true);
});
});
diff --git a/test/integration/api/fork.test.js b/test/integration/api/fork.test.js
index 50518c2e..a5dcde0c 100644
--- a/test/integration/api/fork.test.js
+++ b/test/integration/api/fork.test.js
@@ -84,6 +84,6 @@ describe('API: Fork', () => {
expect(nodes).toContain('new-node');
const props = await forked.getNodeProps('new-node');
- expect(props.get('added-by')).toBe('fork-return');
+ expect(props['added-by']).toBe('fork-return');
});
});
diff --git a/test/integration/api/lifecycle.test.js b/test/integration/api/lifecycle.test.js
index c492233c..c8bf8a67 100644
--- a/test/integration/api/lifecycle.test.js
+++ b/test/integration/api/lifecycle.test.js
@@ -59,8 +59,8 @@ describe('API: Lifecycle', () => {
await graph.materialize();
const props = await graph.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('role')).toBe('engineer');
+ expect(props.name).toBe('Alice');
+ expect(props.role).toBe('engineer');
});
it('builds state across multiple patches', async () => {
diff --git a/test/runtime/deno/lifecycle.test.ts b/test/runtime/deno/lifecycle.test.ts
index fec5f41a..a56e401f 100644
--- a/test/runtime/deno/lifecycle.test.ts
+++ b/test/runtime/deno/lifecycle.test.ts
@@ -36,7 +36,7 @@ Deno.test("lifecycle: creates edges and retrieves them", async () => {
}
});
-Deno.test("lifecycle: node properties via Map", async () => {
+Deno.test("lifecycle: node properties", async () => {
const repo = await createTestRepo("lifecycle-props");
try {
const graph = await repo.openGraph("test", "alice");
@@ -46,7 +46,7 @@ Deno.test("lifecycle: node properties via Map", async () => {
await graph.materialize();
const props = await graph.getNodeProps("n");
- assertEquals(props.get("k"), "v");
+ assertEquals(props?.k, "v");
} finally {
await repo.cleanup();
}
diff --git a/test/type-check/consumer.ts b/test/type-check/consumer.ts
index e2b78c54..1d3aef63 100644
--- a/test/type-check/consumer.ts
+++ b/test/type-check/consumer.ts
@@ -171,7 +171,7 @@ const atState: WarpStateV5 = await graph.materializeAt('abc123');
// ---- query methods ----
const nodes: string[] = await graph.getNodes();
const hasIt: boolean = await graph.hasNode('n1');
-const props: Map<string, unknown> | null = await graph.getNodeProps('n1');
+const props: Record<string, unknown> | null = await graph.getNodeProps('n1');
const edgeProps: Record<string, unknown> | null = await graph.getEdgeProps('n1', 'n2', 'knows');
const neighbors: Array<{ nodeId: string; label: string; direction: 'outgoing' | 'incoming' }> = await graph.neighbors('n1');
const propCount: number = await graph.getPropertyCount();
@@ -194,7 +194,7 @@ const qb: QueryBuilder = graph.query();
const obs: ObserverView = await graph.observer('obs1', { match: '*' });
const obsNodes: string[] = await obs.getNodes();
const obsHas: boolean = await obs.hasNode('n1');
-const obsProps: Map<string, unknown> | null = await obs.getNodeProps('n1');
+const obsProps: Record<string, unknown> | null = await obs.getNodeProps('n1');
const obsEdges: Array<{ from: string; to: string; label: string; props: Record<string, unknown> }> = await obs.getEdges();
const obsQb: QueryBuilder = obs.query();
const obsTraverse: LogicalTraversal = obs.traverse;
diff --git a/test/unit/cli/schemas.test.js b/test/unit/cli/schemas.test.js
index 9badcff9..dc586a94 100644
--- a/test/unit/cli/schemas.test.js
+++ b/test/unit/cli/schemas.test.js
@@ -1,5 +1,6 @@
import { describe, it, expect } from 'vitest';
import {
+ bisectSchema,
doctorSchema,
historySchema,
installHooksSchema,
@@ -10,6 +11,42 @@ import {
seekSchema,
} from '../../../bin/cli/schemas.js';
+describe('bisectSchema', () => {
+ const VALID_SHA = 'a'.repeat(40);
+ const VALID_SHA_2 = 'b'.repeat(40);
+
+ it('accepts valid 40-char hex SHAs', () => {
+ const result = bisectSchema.parse({ good: VALID_SHA, bad: VALID_SHA_2, test: 'exit 0' });
+ expect(result.good).toBe(VALID_SHA);
+ expect(result.bad).toBe(VALID_SHA_2);
+ expect(result.test).toBe('exit 0');
+ });
+
+ it('rejects short SHA for --good', () => {
+ expect(() => bisectSchema.parse({ good: 'abc123', bad: VALID_SHA_2, test: 'exit 0' })).toThrow(/40-character hex SHA/);
+ });
+
+ it('rejects short SHA for --bad', () => {
+ expect(() => bisectSchema.parse({ good: VALID_SHA, bad: 'abc123', test: 'exit 0' })).toThrow(/40-character hex SHA/);
+ });
+
+ it('rejects uppercase hex', () => {
+ expect(() => bisectSchema.parse({ good: 'A'.repeat(40), bad: VALID_SHA_2, test: 'exit 0' })).toThrow(/40-character hex SHA/);
+ });
+
+ it('rejects empty --good', () => {
+ expect(() => bisectSchema.parse({ good: '', bad: VALID_SHA_2, test: 'exit 0' })).toThrow();
+ });
+
+ it('rejects empty --test', () => {
+ expect(() => bisectSchema.parse({ good: VALID_SHA, bad: VALID_SHA_2, test: '' })).toThrow();
+ });
+
+ it('rejects unknown keys', () => {
+ expect(() => bisectSchema.parse({ good: VALID_SHA, bad: VALID_SHA_2, test: 'exit 0', unknown: true })).toThrow();
+ });
+});
+
describe('doctorSchema', () => {
it('defaults strict to false', () => {
const result = doctorSchema.parse({});
diff --git a/test/unit/domain/WarpGraph.edgeProps.test.js b/test/unit/domain/WarpGraph.edgeProps.test.js
index 7005142c..511662d4 100644
--- a/test/unit/domain/WarpGraph.edgeProps.test.js
+++ b/test/unit/domain/WarpGraph.edgeProps.test.js
@@ -210,9 +210,9 @@ describe('WarpGraph edge properties', () => {
});
const nodeProps = await graph.getNodeProps('user:alice');
- expect(nodeProps.get('name')).toBe('Alice');
- expect(nodeProps.has('weight')).toBe(false);
- expect(nodeProps.size).toBe(1);
+ expect(nodeProps.name).toBe('Alice');
+ expect('weight' in nodeProps).toBe(false);
+ expect(Object.keys(nodeProps).length).toBe(1);
});
// ============================================================================
diff --git a/test/unit/domain/WarpGraph.invalidation.test.js b/test/unit/domain/WarpGraph.invalidation.test.js
index ac1d833d..daeffe99 100644
--- a/test/unit/domain/WarpGraph.invalidation.test.js
+++ b/test/unit/domain/WarpGraph.invalidation.test.js
@@ -111,7 +111,7 @@ describe('WarpGraph dirty flag + eager re-materialize (AP/INVAL/1 + AP/INVAL/2)'
const props = await graph.getNodeProps('test:node');
expect(props).not.toBeNull();
- expect(props.get('name')).toBe('Alice');
+ expect(props.name).toBe('Alice');
});
it('multiple sequential commits with _cachedState keep state fresh', async () => {
diff --git a/test/unit/domain/WarpGraph.lazyMaterialize.test.js b/test/unit/domain/WarpGraph.lazyMaterialize.test.js
index 63c12395..d9d9c8a0 100644
--- a/test/unit/domain/WarpGraph.lazyMaterialize.test.js
+++ b/test/unit/domain/WarpGraph.lazyMaterialize.test.js
@@ -378,7 +378,7 @@ describe('AP/LAZY/2: auto-materialize guards on query methods', () => {
expect(edges[0]).toEqual({ from: 'test:alice', to: 'test:bob', label: 'knows', props: {} });
const props = await graph.getNodeProps('test:alice');
- expect(props.get('name')).toBe('Alice');
+ expect(props.name).toBe('Alice');
const outgoing = await graph.neighbors('test:alice', 'outgoing');
expect(outgoing).toHaveLength(1);
diff --git a/test/unit/domain/WarpGraph.noCoordination.test.js b/test/unit/domain/WarpGraph.noCoordination.test.js
index 137cecb4..50cd64bb 100644
--- a/test/unit/domain/WarpGraph.noCoordination.test.js
+++ b/test/unit/domain/WarpGraph.noCoordination.test.js
@@ -175,7 +175,7 @@ describe('No-coordination regression suite', () => {
// B's own state should reflect its own mutation
const propsB = await graphB.getNodeProps('node:shared');
- expect(propsB?.get('value')).toBe('from-B');
+ expect(propsB?.value).toBe('from-B');
// A fresh reader that sees both writers must also resolve to B's value
const graphReader = await WarpGraph.open({
@@ -187,7 +187,7 @@ describe('No-coordination regression suite', () => {
await graphReader.syncCoverage();
await graphReader.materialize();
const propsReader = await graphReader.getNodeProps('node:shared');
- expect(propsReader?.get('value')).toBe('from-B');
+ expect(propsReader?.value).toBe('from-B');
} finally {
await repo.cleanup();
}
@@ -268,7 +268,7 @@ describe('No-coordination regression suite', () => {
// B should see its own mutation
const propsB = await graphB.getNodeProps('node:1');
- expect(propsB?.get('type')).toBe('campaign');
+ expect(propsB?.type).toBe('campaign');
// A fresh reader materializing both chains must resolve to B's value
const reader = await WarpGraph.open({
@@ -280,7 +280,7 @@ describe('No-coordination regression suite', () => {
await reader.syncCoverage();
await reader.materialize();
const propsReader = await reader.getNodeProps('node:1');
- expect(propsReader?.get('type')).toBe('campaign');
+ expect(propsReader?.type).toBe('campaign');
} finally {
await repo.cleanup();
}
diff --git a/test/unit/domain/WarpGraph.patchMany.test.js b/test/unit/domain/WarpGraph.patchMany.test.js
new file mode 100644
index 00000000..04ba9208
--- /dev/null
+++ b/test/unit/domain/WarpGraph.patchMany.test.js
@@ -0,0 +1,150 @@
+import { describe, it, expect } from 'vitest';
+import WarpGraph from '../../../src/domain/WarpGraph.js';
+import { createGitRepo } from '../../helpers/warpGraphTestUtils.js';
+
+describe('WarpGraph.patchMany()', () => {
+ it('returns empty array when called with no arguments', async () => {
+ const repo = await createGitRepo('patchMany-empty');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+ const shas = await graph.patchMany();
+ expect(shas).toEqual([]);
+ } finally {
+ await repo.cleanup();
+ }
+ });
+
+ it('applies a single patch and returns its SHA', async () => {
+ const repo = await createGitRepo('patchMany-single');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+ const shas = await graph.patchMany(
+ (p) => { p.addNode('n:1').setProperty('n:1', 'k', 'v'); },
+ );
+ expect(shas).toHaveLength(1);
+ expect(typeof shas[0]).toBe('string');
+ expect(shas[0]).toHaveLength(40);
+
+ const props = await graph.getNodeProps('n:1');
+ expect(props?.k).toBe('v');
+ } finally {
+ await repo.cleanup();
+ }
+ });
+
+ it('applies multiple patches sequentially', async () => {
+ const repo = await createGitRepo('patchMany-multi');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+ const shas = await graph.patchMany(
+ (p) => { p.addNode('n:1').setProperty('n:1', 'role', 'admin'); },
+ (p) => { p.addNode('n:2').setProperty('n:2', 'role', 'user'); },
+ (p) => { p.addEdge('n:1', 'n:2', 'manages'); },
+ );
+ expect(shas).toHaveLength(3);
+
+ const nodes = await graph.getNodes();
+ expect(nodes.sort()).toEqual(['n:1', 'n:2']);
+
+ const edges = await graph.getEdges();
+ expect(edges).toHaveLength(1);
+ expect(edges[0].from).toBe('n:1');
+ expect(edges[0].to).toBe('n:2');
+ } finally {
+ await repo.cleanup();
+ }
+ });
+
+ it('each callback sees state from previous patches', async () => {
+ const repo = await createGitRepo('patchMany-sees-prior');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+
+ // First patch creates node, second patch sets a property that depends on it
+ const shas = await graph.patchMany(
+ (p) => { p.addNode('n:1').setProperty('n:1', 'step', 1); },
+ async (p) => {
+ // Verify node from first patch is visible
+ const has = await graph.hasNode('n:1');
+ expect(has).toBe(true);
+ p.setProperty('n:1', 'step', 2);
+ },
+ );
+ expect(shas).toHaveLength(2);
+
+ const props = await graph.getNodeProps('n:1');
+ expect(props?.step).toBe(2);
+ } finally {
+ await repo.cleanup();
+ }
+ });
+
+ it('propagates error from failing callback without applying further patches', async () => {
+ const repo = await createGitRepo('patchMany-error');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+
+ await expect(
+ graph.patchMany(
+ (p) => { p.addNode('n:1'); },
+ () => { throw new Error('deliberate'); },
+ (p) => { p.addNode('n:3'); }, // should never run
+ ),
+ ).rejects.toThrow('deliberate');
+
+ // First patch was applied; third was not
+ expect(await graph.hasNode('n:1')).toBe(true);
+ expect(await graph.hasNode('n:3')).toBe(false);
+ } finally {
+ await repo.cleanup();
+ }
+ });
+
+ it('triggers reentrancy guard when nesting patch inside patchMany callback', async () => {
+ const repo = await createGitRepo('patchMany-reentrant');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'writer-a',
+ autoMaterialize: true,
+ });
+
+ await expect(
+ graph.patchMany(
+ async () => {
+ // Nesting patch() inside patchMany should trigger reentrancy guard
+ await graph.patch((p) => { p.addNode('sneaky'); });
+ },
+ ),
+ ).rejects.toThrow(/not reentrant/);
+ } finally {
+ await repo.cleanup();
+ }
+ });
+}, { timeout: 30000 });
diff --git a/test/unit/domain/WarpGraph.query.test.js b/test/unit/domain/WarpGraph.query.test.js
index ad97f6b2..0182f241 100644
--- a/test/unit/domain/WarpGraph.query.test.js
+++ b/test/unit/domain/WarpGraph.query.test.js
@@ -71,14 +71,13 @@ describe('WarpGraph Query API', () => {
expect(await graph.getNodeProps('user:nonexistent')).toBe(null);
});
- it('returns empty map for node with no props', async () => {
+ it('returns empty record for node with no props', async () => {
await graph.materialize();
const state = /** @type {any} */ (graph)._cachedState;
orsetAdd(state.nodeAlive, 'user:alice', createDot('w1', 1));
const props = await graph.getNodeProps('user:alice');
- expect(props).toBeInstanceOf(Map);
- expect(props.size).toBe(0);
+ expect(Object.keys(props).length).toBe(0);
});
it('returns props for node with properties', async () => {
@@ -97,8 +96,8 @@ describe('WarpGraph Query API', () => {
state.prop.set(propKey2, { value: 30, lamport: 1, writerId: 'w1' });
const props = await graph.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('age')).toBe(30);
+ expect(props.name).toBe('Alice');
+ expect(props.age).toBe(30);
});
it('falls back to linear scan when indexed property read throws', async () => {
@@ -117,8 +116,7 @@ describe('WarpGraph Query API', () => {
};
const props = await graph.getNodeProps('user:alice');
- expect(props).toBeInstanceOf(Map);
- expect(props.get('name')).toBe('Alice');
+ expect(props.name).toBe('Alice');
});
});
diff --git a/test/unit/domain/WarpGraph.subscribe.test.js b/test/unit/domain/WarpGraph.subscribe.test.js
index 9adc932a..d2eb61a9 100644
--- a/test/unit/domain/WarpGraph.subscribe.test.js
+++ b/test/unit/domain/WarpGraph.subscribe.test.js
@@ -274,6 +274,61 @@ describe('WarpGraph.subscribe() (PL/SUB/1)', () => {
// Handler was called
expect(onChange).toHaveBeenCalledTimes(1);
});
+
+ it('handler A cross-unsubscribes handler B mid-callback; B still fires for current notification', async () => {
+ /** @type {any} */
+ let subB;
+
+ const onChangeB = vi.fn();
+ const onChangeA = vi.fn(() => {
+ // A removes B mid-iteration
+ subB.unsubscribe();
+ });
+
+ // Subscribe A first, then B — A fires first in snapshot order
+ graph.subscribe({ onChange: onChangeA });
+ subB = graph.subscribe({ onChange: onChangeB });
+
+ await (await graph.createPatch()).addNode('user:alice').commit();
+ await graph.materialize();
+
+ // Both fired for the current notification (snapshot iteration)
+ expect(onChangeA).toHaveBeenCalledTimes(1);
+ expect(onChangeB).toHaveBeenCalledTimes(1);
+
+ // On the next materialize, B must NOT fire (it was unsubscribed)
+ await (await graph.createPatch()).addNode('user:bob').commit();
+ await graph.materialize();
+
+ expect(onChangeA).toHaveBeenCalledTimes(2);
+ expect(onChangeB).toHaveBeenCalledTimes(1); // Still 1 — no second call
+ });
+
+ it('subscribing a new handler C during callback — C does not fire for current diff', async () => {
+ const onChangeC = vi.fn();
+
+ const onChangeA = vi.fn(() => {
+ // A subscribes C mid-notification
+ graph.subscribe({ onChange: onChangeC });
+ });
+
+ graph.subscribe({ onChange: onChangeA });
+
+ await (await graph.createPatch()).addNode('user:alice').commit();
+ await graph.materialize();
+
+ // A fired, but C was added after the snapshot — C should NOT fire
+ expect(onChangeA).toHaveBeenCalledTimes(1);
+ expect(onChangeC).not.toHaveBeenCalled();
+
+ // On the next materialize, C SHOULD fire
+ await (await graph.createPatch()).addNode('user:bob').commit();
+ await graph.materialize();
+
+ expect(onChangeA).toHaveBeenCalledTimes(2);
+ expect(onChangeC).toHaveBeenCalledTimes(1);
+ expect(onChangeC.mock.calls[0][0].nodes.added).toContain('user:bob');
+ });
});
});
diff --git a/test/unit/domain/WarpGraph.watch.test.js b/test/unit/domain/WarpGraph.watch.test.js
index dbd53e10..e32a3256 100644
--- a/test/unit/domain/WarpGraph.watch.test.js
+++ b/test/unit/domain/WarpGraph.watch.test.js
@@ -354,6 +354,36 @@ describe('WarpGraph.watch() (PL/WATCH/1)', () => {
expect(onChange1).toHaveBeenCalledTimes(1);
expect(onChange2).toHaveBeenCalledTimes(1);
});
+
+ it('unsubscribe during onError callback prevents infinite loop and stops future notifications', async () => {
+ /** @type {any} */
+ let sub;
+
+ const onChange = vi.fn(() => {
+ throw new Error('onChange always throws');
+ });
+ const onError = vi.fn(() => {
+ // Unsubscribe from within onError to stop future notifications
+ sub.unsubscribe();
+ });
+
+ sub = graph.watch('user:*', { onChange, onError });
+
+ await (await graph.createPatch()).addNode('user:alice').commit();
+
+ // Should not throw or loop infinitely
+ await expect(graph.materialize()).resolves.toBeDefined();
+
+ expect(onChange).toHaveBeenCalledTimes(1);
+ expect(onError).toHaveBeenCalledTimes(1);
+
+ // Subsequent materialize must NOT fire the handler (it was unsubscribed)
+ await (await graph.createPatch()).addNode('user:bob').commit();
+ await graph.materialize();
+
+ expect(onChange).toHaveBeenCalledTimes(1); // Still 1 — no second call
+ expect(onError).toHaveBeenCalledTimes(1); // Still 1 — no second call
+ });
});
describe('edge cases', () => {
diff --git a/test/unit/domain/WarpGraph.writerInvalidation.test.js b/test/unit/domain/WarpGraph.writerInvalidation.test.js
index e141c244..e72f0f7b 100644
--- a/test/unit/domain/WarpGraph.writerInvalidation.test.js
+++ b/test/unit/domain/WarpGraph.writerInvalidation.test.js
@@ -127,7 +127,7 @@ describe('WarpGraph Writer invalidation (AP/INVAL/3)', () => {
const props = await graph.getNodeProps('test:node');
expect(props).not.toBeNull();
- expect(props.get('name')).toBe('Alice');
+ expect(props.name).toBe('Alice');
});
// ── Multiple sequential writer commits ───────────────────────────
diff --git a/test/unit/domain/__snapshots__/WarpGraph.apiSurface.test.js.snap b/test/unit/domain/__snapshots__/WarpGraph.apiSurface.test.js.snap
index c0de36aa..5e7e26a3 100644
--- a/test/unit/domain/__snapshots__/WarpGraph.apiSurface.test.js.snap
+++ b/test/unit/domain/__snapshots__/WarpGraph.apiSurface.test.js.snap
@@ -327,6 +327,11 @@ exports[`WarpGraph API surface > all prototype methods have correct property des
"enumerable": false,
"type": "method",
},
+ "patchMany": {
+ "configurable": true,
+ "enumerable": false,
+ "type": "method",
+ },
"patchesFor": {
"configurable": true,
"enumerable": false,
@@ -430,7 +435,7 @@ exports[`WarpGraph API surface > all prototype methods have correct property des
}
`;
-exports[`WarpGraph API surface > prototype method count matches snapshot 1`] = `85`;
+exports[`WarpGraph API surface > prototype method count matches snapshot 1`] = `86`;
exports[`WarpGraph API surface > prototype methods match snapshot 1`] = `
[
@@ -499,6 +504,7 @@ exports[`WarpGraph API surface > prototype methods match snapshot 1`] = `
"observer",
"onDeleteWithData",
"patch",
+ "patchMany",
"patchesFor",
"persistence",
"processSyncRequest",
diff --git a/test/unit/domain/services/BisectService.test.js b/test/unit/domain/services/BisectService.test.js
new file mode 100644
index 00000000..f8ffa742
--- /dev/null
+++ b/test/unit/domain/services/BisectService.test.js
@@ -0,0 +1,293 @@
+import { describe, it, expect } from 'vitest';
+import WarpGraph from '../../../../src/domain/WarpGraph.js';
+import BisectService from '../../../../src/domain/services/BisectService.js';
+import { orsetContains } from '../../../../src/domain/crdt/ORSet.js';
+import { createGitRepo } from '../../../helpers/warpGraphTestUtils.js';
+
+describe('BisectService', () => {
+ it('vector 1: linear chain — finds first bad patch', async () => {
+ const repo = await createGitRepo('bisect-linear');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ // Create 5 patches: A, B, C (introduces 'bug'), D, E
+ const shas = [];
+ shas.push(await graph.patch(p => { p.addNode('n:1'); })); // A
+ shas.push(await graph.patch(p => { p.addNode('n:2'); })); // B
+ shas.push(await graph.patch(p => { p.addNode('bug'); })); // C — first bad
+ shas.push(await graph.patch(p => { p.addNode('n:3'); })); // D
+ shas.push(await graph.patch(p => { p.addNode('n:4'); })); // E
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: shas[0], // A
+ bad: shas[4], // E
+ writerId: 'w1',
+ testFn: async (state) => {
+ // "good" means 'bug' node is NOT alive
+ return !orsetContains(state.nodeAlive, 'bug');
+ },
+ });
+
+ expect(result.result).toBe('found');
+ expect(result.firstBadPatch).toBe(shas[2]); // C
+ expect(result.writerId).toBe('w1');
+ expect(result.steps).toBeLessThanOrEqual(2);
+ expect(result.totalCandidates).toBe(4); // B, C, D, E
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 2: same good and bad — range-error', async () => {
+ const repo = await createGitRepo('bisect-same');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const sha = await graph.patch(p => { p.addNode('n:1'); });
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: sha,
+ bad: sha,
+ writerId: 'w1',
+ testFn: async () => true,
+ });
+
+ expect(result.result).toBe('range-error');
+ expect(result.message).toBe('good and bad SHAs are the same');
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 3: single step — A→B, good=A bad=B → result=B, 0 steps', async () => {
+ const repo = await createGitRepo('bisect-single');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const shaA = await graph.patch(p => { p.addNode('n:1'); }); // A — good
+ const shaB = await graph.patch(p => { p.addNode('bug'); }); // B — bad
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: shaA,
+ bad: shaB,
+ writerId: 'w1',
+ testFn: async (state) => {
+ return !orsetContains(state.nodeAlive, 'bug');
+ },
+ });
+
+ expect(result.result).toBe('found');
+ expect(result.firstBadPatch).toBe(shaB);
+ expect(result.writerId).toBe('w1');
+ expect(result.steps).toBe(0);
+ expect(result.totalCandidates).toBe(1);
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 4: good is not ancestor of bad — range-error', async () => {
+ const repo = await createGitRepo('bisect-reversed');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const shaA = await graph.patch(p => { p.addNode('n:1'); });
+ const shaB = await graph.patch(p => { p.addNode('n:2'); });
+
+ const bisect = new BisectService({ graph });
+ // Reversed: good=B (later), bad=A (earlier)
+ const result = await bisect.run({
+ good: shaB,
+ bad: shaA,
+ writerId: 'w1',
+ testFn: async () => true,
+ });
+
+ expect(result.result).toBe('range-error');
+ expect(result.message).toBe('good is not an ancestor of bad');
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 5: SHA not found in chain — range-error', async () => {
+ const repo = await createGitRepo('bisect-notfound');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const sha = await graph.patch(p => { p.addNode('n:1'); });
+ const fakeSha = 'deadbeef'.repeat(5);
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: sha,
+ bad: fakeSha,
+ writerId: 'w1',
+ testFn: async () => true,
+ });
+
+ expect(result.result).toBe('range-error');
+ expect(result.message).toBe('good or bad SHA not found in writer chain');
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 6: testFn receives candidate SHA', async () => {
+ const repo = await createGitRepo('bisect-sha-arg');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const shas = [];
+ shas.push(await graph.patch(p => { p.addNode('n:1'); })); // A — good
+ shas.push(await graph.patch(p => { p.addNode('n:2'); })); // B
+ shas.push(await graph.patch(p => { p.addNode('bug'); })); // C — first bad
+ shas.push(await graph.patch(p => { p.addNode('n:3'); })); // D — bad
+
+ /** @type {string[]} */ const observedShas = [];
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: shas[0],
+ bad: shas[3],
+ writerId: 'w1',
+ testFn: async (state, sha) => {
+ observedShas.push(sha);
+ return !orsetContains(state.nodeAlive, 'bug');
+ },
+ });
+
+ expect(result.result).toBe('found');
+ expect(result.firstBadPatch).toBe(shas[2]); // C
+ // Every SHA passed to testFn must be a real candidate SHA
+ for (const observed of observedShas) {
+ expect(shas.slice(1)).toContain(observed);
+ }
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 7: all-bad — first candidate after good is the first bad patch', async () => {
+ const repo = await createGitRepo('bisect-all-bad');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const shas = [];
+ shas.push(await graph.patch(p => { p.addNode('n:1'); })); // A — good
+ shas.push(await graph.patch(p => { p.addNode('n:2'); })); // B — bad
+ shas.push(await graph.patch(p => { p.addNode('n:3'); })); // C — bad
+ shas.push(await graph.patch(p => { p.addNode('n:4'); })); // D — bad
+
+ const bisect = new BisectService({ graph });
+ const result = await bisect.run({
+ good: shas[0],
+ bad: shas[3],
+ writerId: 'w1',
+ testFn: async () => false, // every state is "bad"
+ });
+
+ expect(result.result).toBe('found');
+ expect(result.firstBadPatch).toBe(shas[1]); // B — first candidate after good
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 8: testFn throws — promise rejects with same error', async () => {
+ const repo = await createGitRepo('bisect-throws');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ const shas = [];
+ shas.push(await graph.patch(p => { p.addNode('n:1'); }));
+ shas.push(await graph.patch(p => { p.addNode('n:2'); }));
+ shas.push(await graph.patch(p => { p.addNode('n:3'); }));
+
+ const testError = new Error('test function exploded');
+ const bisect = new BisectService({ graph });
+
+ await expect(bisect.run({
+ good: shas[0],
+ bad: shas[2],
+ writerId: 'w1',
+ testFn: async () => { throw testError; },
+ })).rejects.toThrow(testError);
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+
+ it('vector 9: empty writer chain — range-error', async () => {
+ const repo = await createGitRepo('bisect-empty-writer');
+ try {
+ const graph = await WarpGraph.open({
+ persistence: repo.persistence,
+ graphName: 'test',
+ writerId: 'w1',
+ autoMaterialize: true,
+ });
+
+ // Write patches as w1
+ const sha1 = await graph.patch(p => { p.addNode('n:1'); });
+ const sha2 = await graph.patch(p => { p.addNode('n:2'); });
+
+ const bisect = new BisectService({ graph });
+ // Bisect on w2 who has no patches — SHAs won't be found in w2's chain
+ const result = await bisect.run({
+ good: sha1,
+ bad: sha2,
+ writerId: 'w2',
+ testFn: async () => true,
+ });
+
+ expect(result.result).toBe('range-error');
+ expect(result.message).toBe('good or bad SHA not found in writer chain');
+ } finally {
+ await repo.cleanup();
+ }
+ }, { timeout: 30000 });
+});
diff --git a/test/unit/domain/services/ObserverView.test.js b/test/unit/domain/services/ObserverView.test.js
index b9f789df..f11c804e 100644
--- a/test/unit/domain/services/ObserverView.test.js
+++ b/test/unit/domain/services/ObserverView.test.js
@@ -217,9 +217,9 @@ describe('ObserverView', () => {
});
const props = await view.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('email')).toBe('alice@example.com');
- expect(props.has('ssn')).toBe(false);
+ expect(props.name).toBe('Alice');
+ expect(props.email).toBe('alice@example.com');
+ expect('ssn' in props).toBe(false);
});
it('expose limits to specified properties', async () => {
@@ -236,9 +236,9 @@ describe('ObserverView', () => {
});
const props = await view.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('email')).toBe('alice@example.com');
- expect(props.has('ssn')).toBe(false);
+ expect(props.name).toBe('Alice');
+ expect(props.email).toBe('alice@example.com');
+ expect('ssn' in props).toBe(false);
});
it('redact takes precedence over expose', async () => {
@@ -256,9 +256,9 @@ describe('ObserverView', () => {
});
const props = await view.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('email')).toBe('alice@example.com');
- expect(props.has('ssn')).toBe(false);
+ expect(props.name).toBe('Alice');
+ expect(props.email).toBe('alice@example.com');
+ expect('ssn' in props).toBe(false);
});
it('returns null for non-matching node', async () => {
@@ -283,8 +283,8 @@ describe('ObserverView', () => {
const view = await graph.observer('openView', { match: 'user:*' });
const props = await view.getNodeProps('user:alice');
- expect(props.get('name')).toBe('Alice');
- expect(props.get('ssn')).toBe('123-45-6789');
+ expect(props.name).toBe('Alice');
+ expect(props.ssn).toBe('123-45-6789');
});
});
diff --git a/test/unit/domain/services/TrustPayloadParity.test.js b/test/unit/domain/services/TrustPayloadParity.test.js
new file mode 100644
index 00000000..81600169
--- /dev/null
+++ b/test/unit/domain/services/TrustPayloadParity.test.js
@@ -0,0 +1,541 @@
+/**
+ * @fileoverview CLI trust command ↔ AuditVerifierService.evaluateTrust() parity.
+ *
+ * Verifies that the CLI trust handler output payload shape is a strict
+ * superset of the service-level TrustAssessment. The CLI may add/override
+ * `graph`, `status`, `source`, `sourceDetail` — but must never drop service
+ * fields.
+ */
+
+import { describe, it, expect } from 'vitest';
+import { evaluateWriters } from '../../../../src/domain/trust/TrustEvaluator.js';
+import { buildState } from '../../../../src/domain/trust/TrustStateBuilder.js';
+import { TrustAssessmentSchema } from '../../../../src/domain/trust/schemas.js';
+import {
+ KEY_ADD_1,
+ KEY_ADD_2,
+ WRITER_BIND_ADD_ALICE,
+ KEY_REVOKE_2,
+} from '../trust/fixtures/goldenRecords.js';
+
+// ── Constants ────────────────────────────────────────────────────────────
+
+const ENFORCE_POLICY = Object.freeze({
+ schemaVersion: 1,
+ mode: 'enforce',
+ writerPolicy: 'all_writers_must_be_trusted',
+});
+
+/** Top-level keys the CLI trust handler adds beyond the evaluator output. */
+const CLI_ENVELOPE_KEYS = ['graph'];
+
+/**
+ * All keys that must appear in a CLI trust payload's `trust` object.
+ * Union of evaluator keys + CLI overrides.
+ */
+const REQUIRED_TRUST_KEYS = [
+ 'status',
+ 'source',
+ 'sourceDetail',
+ 'evaluatedWriters',
+ 'untrustedWriters',
+ 'explanations',
+ 'evidenceSummary',
+];
+
+const REQUIRED_EVIDENCE_KEYS = [
+ 'recordsScanned',
+ 'activeKeys',
+ 'revokedKeys',
+ 'activeBindings',
+ 'revokedBindings',
+];
+
+// ── Helpers ──────────────────────────────────────────────────────────────
+
+/**
+ * Simulates the CLI trust handler's payload construction from an
+ * evaluator result, mirroring `handleTrust` in `bin/cli/commands/trust.js`.
+ *
+ * @param {ReturnType<typeof evaluateWriters>} assessment
+ * @param {{ graph: string, status: string, source: string, sourceDetail: string|null }} overrides
+ */
+function buildCliPayload(assessment, overrides) {
+ return {
+ graph: overrides.graph,
+ ...assessment,
+ trust: {
+ ...assessment.trust,
+ status: overrides.status,
+ source: overrides.source,
+ sourceDetail: overrides.sourceDetail,
+ },
+ };
+}
+
+/**
+ * Builds the not_configured CLI payload, mirroring `buildNotConfiguredResult`.
+ * @param {string} graphName
+ */
+function buildNotConfiguredPayload(graphName) {
+ return {
+ graph: graphName,
+ trustSchemaVersion: 1,
+ mode: 'signed_evidence_v1',
+ trustVerdict: 'not_configured',
+ trust: {
+ status: 'not_configured',
+ source: 'none',
+ sourceDetail: null,
+ evaluatedWriters: [],
+ untrustedWriters: [],
+ explanations: [],
+ evidenceSummary: {
+ recordsScanned: 0,
+ activeKeys: 0,
+ revokedKeys: 0,
+ activeBindings: 0,
+ revokedBindings: 0,
+ },
+ },
+ };
+}
+
+/**
+ * Builds the error CLI payload, mirroring the readRecords failure path.
+ * @param {string} graphName
+ * @param {{ source: string, sourceDetail: string|null }} pinInfo
+ */
+function buildErrorPayload(graphName, pinInfo) {
+ return {
+ graph: graphName,
+ trustSchemaVersion: 1,
+ mode: 'signed_evidence_v1',
+ trustVerdict: 'fail',
+ trust: {
+ status: 'error',
+ source: pinInfo.source,
+ sourceDetail: pinInfo.sourceDetail,
+ evaluatedWriters: [],
+ untrustedWriters: [],
+ explanations: [
+ {
+ writerId: '*',
+ trusted: false,
+ reasonCode: 'TRUST_RECORD_CHAIN_INVALID',
+ reason: expect.stringContaining('Trust chain read failed'),
+ },
+ ],
+ evidenceSummary: {
+ recordsScanned: 0,
+ activeKeys: 0,
+ revokedKeys: 0,
+ activeBindings: 0,
+ revokedBindings: 0,
+ },
+ },
+ };
+}
+
+// ============================================================================
+// Shape parity — happy path
+// ============================================================================
+
+describe('TrustPayloadParity — shape parity', () => {
+ it('CLI payload contains all evaluator keys (pass verdict)', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'test-graph',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+ // Top-level: evaluator keys + CLI envelope
+ const assessmentKeys = Object.keys(assessment);
+ for (const key of assessmentKeys) {
+ expect(cliPayload).toHaveProperty(key);
+ }
+ for (const key of CLI_ENVELOPE_KEYS) {
+ expect(cliPayload).toHaveProperty(key);
+ }
+ });
+
+ it('CLI trust object contains all evaluator trust keys', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'test-graph',
+ status: 'pinned',
+ source: 'cli_pin',
+ sourceDetail: 'abc123',
+ });
+
+ const evaluatorTrustKeys = Object.keys(assessment.trust);
+ for (const key of evaluatorTrustKeys) {
+ expect(cliPayload.trust).toHaveProperty(key);
+ }
+ });
+
+ it('evidenceSummary preserves all five counter fields', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE, KEY_REVOKE_2]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+ for (const key of REQUIRED_EVIDENCE_KEYS) {
+ expect(cliPayload.trust.evidenceSummary).toHaveProperty(key);
+ expect(typeof cliPayload.trust.evidenceSummary[key]).toBe('number');
+ }
+ });
+
+ it('explanations array entries retain all four fields', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice', 'unknown'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+ expect(cliPayload.trust.explanations.length).toBeGreaterThan(0);
+ for (const explanation of cliPayload.trust.explanations) {
+ expect(explanation).toHaveProperty('writerId');
+ expect(explanation).toHaveProperty('trusted');
+ expect(explanation).toHaveProperty('reasonCode');
+ expect(explanation).toHaveProperty('reason');
+ expect(typeof explanation.writerId).toBe('string');
+ expect(typeof explanation.trusted).toBe('boolean');
+ expect(typeof explanation.reasonCode).toBe('string');
+ expect(typeof explanation.reason).toBe('string');
+ }
+ });
+
+ it('CLI payload passes TrustAssessmentSchema after stripping CLI-only keys', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'test-graph',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+    // Strip the CLI-only `graph` envelope key for schema validation
+ const { graph: _graph, ...assessmentPortion } = cliPayload;
+ const result = TrustAssessmentSchema.safeParse(assessmentPortion);
+ expect(result.success).toBe(true);
+ });
+});
+
+// ============================================================================
+// CLI overrides — status, source, sourceDetail
+// ============================================================================
+
+describe('TrustPayloadParity — CLI overrides', () => {
+ it('CLI pin overrides evaluator defaults (source=cli_pin, status=pinned)', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ // Evaluator defaults: status='configured', source='ref'
+ expect(assessment.trust.status).toBe('configured');
+ expect(assessment.trust.source).toBe('ref');
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'pinned',
+ source: 'cli_pin',
+ sourceDetail: 'abc123def',
+ });
+
+ expect(cliPayload.trust.status).toBe('pinned');
+ expect(cliPayload.trust.source).toBe('cli_pin');
+ expect(cliPayload.trust.sourceDetail).toBe('abc123def');
+ });
+
+ it('env pin overrides evaluator defaults (source=env_pin, status=pinned)', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'pinned',
+ source: 'env_pin',
+ sourceDetail: 'deadbeef',
+ });
+
+ expect(cliPayload.trust.status).toBe('pinned');
+ expect(cliPayload.trust.source).toBe('env_pin');
+ expect(cliPayload.trust.sourceDetail).toBe('deadbeef');
+ });
+
+ it('ref resolution uses evaluator defaults (source=ref, status=configured)', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+ expect(cliPayload.trust.status).toBe('configured');
+ expect(cliPayload.trust.source).toBe('ref');
+ expect(cliPayload.trust.sourceDetail).toBeNull();
+ });
+
+ it('override does not discard non-overridden trust fields', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice', 'unknown'], state, ENFORCE_POLICY);
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'pinned',
+ source: 'cli_pin',
+ sourceDetail: 'abc',
+ });
+
+ // These must survive the spread override
+ expect(cliPayload.trust.evaluatedWriters).toEqual(assessment.trust.evaluatedWriters);
+ expect(cliPayload.trust.untrustedWriters).toEqual(assessment.trust.untrustedWriters);
+ expect(cliPayload.trust.explanations).toEqual(assessment.trust.explanations);
+ expect(cliPayload.trust.evidenceSummary).toEqual(assessment.trust.evidenceSummary);
+ });
+});
+
+// ============================================================================
+// Error path parity
+// ============================================================================
+
+describe('TrustPayloadParity — error path', () => {
+ it('CLI error payload has same trust keys as service error payload', () => {
+ const cliErrorPayload = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+
+ // Service error path (from AuditVerifierService.evaluateTrust)
+ const serviceErrorPayload = {
+ trustSchemaVersion: 1,
+ mode: 'signed_evidence_v1',
+ trustVerdict: 'fail',
+ trust: {
+ status: 'error',
+ source: 'ref',
+ sourceDetail: null,
+ evaluatedWriters: [],
+ untrustedWriters: [],
+ explanations: [
+ {
+ writerId: '*',
+ trusted: false,
+ reasonCode: 'TRUST_RECORD_CHAIN_INVALID',
+ reason: 'Trust chain read failed: some error',
+ },
+ ],
+ evidenceSummary: {
+ recordsScanned: 0,
+ activeKeys: 0,
+ revokedKeys: 0,
+ activeBindings: 0,
+ revokedBindings: 0,
+ },
+ },
+ };
+
+ // Both should have identical trust-level keys
+ const cliTrustKeys = Object.keys(cliErrorPayload.trust).sort();
+ const serviceTrustKeys = Object.keys(serviceErrorPayload.trust).sort();
+ expect(cliTrustKeys).toEqual(serviceTrustKeys);
+ });
+
+ it('error payload explanation uses TRUST_RECORD_CHAIN_INVALID reason code', () => {
+ const payload = buildErrorPayload('g', { source: 'cli_pin', sourceDetail: 'bad-sha' });
+
+ expect(payload.trust.explanations).toHaveLength(1);
+ expect(payload.trust.explanations[0].reasonCode).toBe('TRUST_RECORD_CHAIN_INVALID');
+ expect(payload.trust.explanations[0].writerId).toBe('*');
+ expect(payload.trust.explanations[0].trusted).toBe(false);
+ });
+
+ it('error payload evidenceSummary has all zero counters', () => {
+ const payload = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+    const summary = /** @type {Record<string, number>} */ (payload.trust.evidenceSummary);
+ for (const key of REQUIRED_EVIDENCE_KEYS) {
+ expect(summary[key]).toBe(0);
+ }
+ });
+
+ it('CLI error preserves pin source information', () => {
+ const pinned = buildErrorPayload('g', { source: 'cli_pin', sourceDetail: 'deadbeef' });
+ expect(pinned.trust.source).toBe('cli_pin');
+ expect(pinned.trust.sourceDetail).toBe('deadbeef');
+
+ const envPinned = buildErrorPayload('g', { source: 'env_pin', sourceDetail: 'cafebabe' });
+ expect(envPinned.trust.source).toBe('env_pin');
+ expect(envPinned.trust.sourceDetail).toBe('cafebabe');
+
+ const refBased = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+ expect(refBased.trust.source).toBe('ref');
+ expect(refBased.trust.sourceDetail).toBeNull();
+ });
+});
+
+// ============================================================================
+// Not-configured path parity
+// ============================================================================
+
+describe('TrustPayloadParity — not-configured path', () => {
+ it('not_configured payload has same trust keys as service not_configured result', () => {
+ const cliPayload = buildNotConfiguredPayload('g');
+
+ // Service not_configured path (from AuditVerifierService.evaluateTrust)
+ const servicePayload = {
+ trustSchemaVersion: 1,
+ mode: 'signed_evidence_v1',
+ trustVerdict: 'not_configured',
+ trust: {
+ status: 'not_configured',
+ source: 'none',
+ sourceDetail: null,
+ evaluatedWriters: [],
+ untrustedWriters: [],
+ explanations: [],
+ evidenceSummary: {
+ recordsScanned: 0,
+ activeKeys: 0,
+ revokedKeys: 0,
+ activeBindings: 0,
+ revokedBindings: 0,
+ },
+ },
+ };
+
+ const cliTrustKeys = Object.keys(cliPayload.trust).sort();
+ const serviceTrustKeys = Object.keys(servicePayload.trust).sort();
+ expect(cliTrustKeys).toEqual(serviceTrustKeys);
+ });
+
+ it('not_configured sets trustVerdict to not_configured', () => {
+ const payload = buildNotConfiguredPayload('test-graph');
+ expect(payload.trustVerdict).toBe('not_configured');
+ });
+
+ it('not_configured sets status to not_configured and source to none', () => {
+ const payload = buildNotConfiguredPayload('g');
+ expect(payload.trust.status).toBe('not_configured');
+ expect(payload.trust.source).toBe('none');
+ expect(payload.trust.sourceDetail).toBeNull();
+ });
+
+ it('not_configured has empty writer and explanation arrays', () => {
+ const payload = buildNotConfiguredPayload('g');
+ expect(payload.trust.evaluatedWriters).toEqual([]);
+ expect(payload.trust.untrustedWriters).toEqual([]);
+ expect(payload.trust.explanations).toEqual([]);
+ });
+
+ it('not_configured evidenceSummary has all zero counters', () => {
+ const payload = buildNotConfiguredPayload('g');
+    const summary = /** @type {Record<string, number>} */ (payload.trust.evidenceSummary);
+ for (const key of REQUIRED_EVIDENCE_KEYS) {
+ expect(summary[key]).toBe(0);
+ }
+ });
+});
+
+// ============================================================================
+// Structural invariants across all paths
+// ============================================================================
+
+describe('TrustPayloadParity — structural invariants', () => {
+ it('all paths produce the same set of trust keys', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const happyPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+ const errorPayload = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+ const notConfiguredPayload = buildNotConfiguredPayload('g');
+
+ const happyKeys = Object.keys(happyPayload.trust).sort();
+ const errorKeys = Object.keys(errorPayload.trust).sort();
+ const notConfiguredKeys = Object.keys(notConfiguredPayload.trust).sort();
+
+ expect(happyKeys).toEqual(REQUIRED_TRUST_KEYS.slice().sort());
+ expect(errorKeys).toEqual(happyKeys);
+ expect(notConfiguredKeys).toEqual(happyKeys);
+ });
+
+ it('all paths produce the same set of top-level keys', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const happyPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+ const errorPayload = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+ const notConfiguredPayload = buildNotConfiguredPayload('g');
+
+ const expectedTopKeys = ['graph', 'trustSchemaVersion', 'mode', 'trustVerdict', 'trust'].sort();
+
+ expect(Object.keys(happyPayload).sort()).toEqual(expectedTopKeys);
+ expect(Object.keys(errorPayload).sort()).toEqual(expectedTopKeys);
+ expect(Object.keys(notConfiguredPayload).sort()).toEqual(expectedTopKeys);
+ });
+
+ it('evidenceSummary key set is identical across all paths', () => {
+ const state = buildState([KEY_ADD_1, KEY_ADD_2, WRITER_BIND_ADD_ALICE]);
+ const assessment = evaluateWriters(['alice'], state, ENFORCE_POLICY);
+
+ const happyPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+ const errorPayload = buildErrorPayload('g', { source: 'ref', sourceDetail: null });
+ const notConfiguredPayload = buildNotConfiguredPayload('g');
+
+ const happyEvidenceKeys = Object.keys(happyPayload.trust.evidenceSummary).sort();
+ const errorEvidenceKeys = Object.keys(errorPayload.trust.evidenceSummary).sort();
+ const notConfiguredEvidenceKeys = Object.keys(notConfiguredPayload.trust.evidenceSummary).sort();
+
+ expect(happyEvidenceKeys).toEqual(REQUIRED_EVIDENCE_KEYS.slice().sort());
+ expect(errorEvidenceKeys).toEqual(happyEvidenceKeys);
+ expect(notConfiguredEvidenceKeys).toEqual(happyEvidenceKeys);
+ });
+
+ it('CLI payload with fail verdict still has complete shape', () => {
+ const state = buildState([KEY_ADD_1]);
+ const assessment = evaluateWriters(['unknown-writer'], state, ENFORCE_POLICY);
+ expect(assessment.trustVerdict).toBe('fail');
+
+ const cliPayload = buildCliPayload(assessment, {
+ graph: 'g',
+ status: 'configured',
+ source: 'ref',
+ sourceDetail: null,
+ });
+
+ for (const key of REQUIRED_TRUST_KEYS) {
+ expect(cliPayload.trust).toHaveProperty(key);
+ }
+ });
+});
diff --git a/test/unit/domain/utils/CachedValue.test.js b/test/unit/domain/utils/CachedValue.test.js
index c6505221..04c31ce6 100644
--- a/test/unit/domain/utils/CachedValue.test.js
+++ b/test/unit/domain/utils/CachedValue.test.js
@@ -411,4 +411,50 @@ describe('CachedValue', () => {
expect(value.array).toEqual([1, 2, 3]);
});
});
+
+ // -----------------------------------------------------------------------
+ // Null-payload semantics
+ //
+ // Returning `null` from compute means "no value available." This is an
+ // intentional design contract: null is the sentinel that _isValid() checks,
+ // so a null result is never cached. Every subsequent get() recomputes, and
+ // the cache reports itself as empty. This prevents stale "absence" from
+ // being treated as a valid cached answer.
+ // -----------------------------------------------------------------------
+ describe('null-payload semantics', () => {
+ it('null return triggers recomputation on every get()', async () => {
+ const clock = createMockClock();
+ const compute = vi.fn().mockResolvedValue(null);
+ const cache = new CachedValue({ clock, ttlMs: 5000, compute });
+
+ const first = await cache.get();
+ const second = await cache.get();
+
+ expect(first).toBeNull();
+ expect(second).toBeNull();
+ expect(compute).toHaveBeenCalledTimes(2);
+ });
+
+ it('getWithMetadata returns fromCache=false for null', async () => {
+ const clock = createMockClock();
+ const compute = vi.fn().mockResolvedValue(null);
+ const cache = new CachedValue({ clock, ttlMs: 5000, compute });
+
+ await cache.get();
+ const result = await cache.getWithMetadata();
+
+ expect(result.value).toBeNull();
+ expect(result.fromCache).toBe(false);
+ });
+
+ it('hasValue returns false when compute returned null', async () => {
+ const clock = createMockClock();
+ const compute = vi.fn().mockResolvedValue(null);
+ const cache = new CachedValue({ clock, ttlMs: 5000, compute });
+
+ await cache.get();
+
+ expect(cache.hasValue).toBe(false);
+ });
+ });
});