feat(execution): add Execution Time Travel engine #570
nikolasdehor wants to merge 2 commits into SynkraAI:main from
Conversation
@nikolasdehor is attempting to deploy a commit to Pedro Valério Lopez's projects Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough
Adds a new TimeTravelEngine implementing timeline-based execution histories with checkpointing, forking, rewinding, replay-plan construction, on-disk JSON persistence, event emissions, status enums, and a comprehensive test suite.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Engine as TimeTravelEngine
    participant Memory
    participant Disk
    Client->>Engine: checkpoint(timelineId, state, label)
    activate Engine
    Engine->>Memory: deepClone(state) & append checkpoint
    Engine->>Disk: _persist(timeline) [if autoPersist]
    Engine-->>Client: emit checkpoint:created
    deactivate Engine
    Client->>Engine: restoreCheckpoint(timelineId, checkpointId)
    activate Engine
    Engine->>Memory: _getCheckpoint(...) & deepClone(state)
    Engine->>Memory: increment restores
    Engine-->>Client: emit checkpoint:restored + state
    deactivate Engine
```

```mermaid
sequenceDiagram
    participant Client
    participant Engine as TimeTravelEngine
    participant Memory
    participant Disk
    Client->>Engine: fork(timelineId, checkpointId, metadata)
    activate Engine
    Engine->>Memory: _getTimeline(...) and _getCheckpoint(...)
    Engine->>Memory: create fork timeline, clone checkpoints up to fork
    Engine->>Memory: link parent <-> fork
    Engine->>Disk: _persist(parent) & _persist(fork) [if autoPersist]
    Engine-->>Client: emit timeline:forked
    deactivate Engine
```

```mermaid
sequenceDiagram
    participant Client
    participant Engine as TimeTravelEngine
    participant Memory
    participant Disk
    Client->>Engine: rewind(timelineId, checkpointId)
    activate Engine
    Engine->>Memory: mark subsequent checkpoints REWOUND
    Engine->>Memory: increment rewinds
    Engine->>Disk: _persist(timeline) [if autoPersist]
    Engine-->>Client: emit timeline:rewound
    deactivate Engine
    Client->>Engine: getReplayPlan(timelineId, fromId, toId)
    activate Engine
    Engine->>Memory: collect intermediate checkpoints, validate order
    Engine-->>Client: return replay plan
    deactivate Engine
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 6
🧹 Nitpick comments (1)
tests/core/execution/time-travel.test.js (1)
6-10: Please switch this test to the project’s absolute import form. The deep relative path couples the test to directory depth and bypasses the repo-wide import convention.
As per coding guidelines, `**/*.{js,jsx,ts,tsx}`: Use absolute imports instead of relative imports in all code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/time-travel.test.js` around lines 6 - 10, Replace the deep relative require for TimeTravelEngine/TimelineStatus/CheckpointStatus with the project's absolute import form: change require('../../../.aiox-core/core/execution/time-travel') to the repo's absolute path (for example require('@aiox-core/core/execution/time-travel') or the configured alias used across the project) so the test imports TimeTravelEngine, TimelineStatus, and CheckpointStatus via the project-wide absolute import convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.aiox-core/core/execution/time-travel.js:
- Around line 502-506: Public timeline operations (restoreCheckpoint, fork,
rewind, deleteTimeline, getReplayPlan, getTimelineTree) only work if
listTimelines() was called because disk hydration is done there; fix by
centralizing the load check into a shared helper (e.g., ensureTimelinesLoaded)
that calls this._loadFromDisk() when this.timelines.size === 0, and call that
helper at the start of each public method (restoreCheckpoint, fork, rewind,
deleteTimeline, getReplayPlan, getTimelineTree) or alternatively expose an
explicit init method that callers can invoke; update those methods to call
ensureTimelinesLoaded() (or require init) before accessing this.timelines so
persisted timelines are available after restart.
- Around line 398-417: The current loop in time-travel.js incorrectly detects
shared ancestors by comparing JSON.stringify(state); change it to compare
checkpoint identity/lineage instead — e.g., use a stable unique field like
checkpoints[i].id or checkpoints[i].parentId (or walk parent lineage) to
determine shared checkpoints in compareTimelines(), rather than serializing
state. Update the sharedCheckpoints push to rely on the checkpoint
identity/lineage (still preserve label and deepClone(state) if needed) and set
commonAncestorIndex based on matched checkpoint IDs/parent links so fork
analysis uses explicit lineage, not state equality.
- Around line 54-56: The deepClone function currently uses
JSON.parse(JSON.stringify(value)) which is lossy for Dates, Maps, Sets, BigInt,
undefined, etc.; update the deepClone implementation in time-travel.js to use
structuredClone(value) when available (Node 17+) and fall back to a validation
path that explicitly detects and rejects unsupported types (Date, BigInt, Map,
Set, Function, Symbol, undefined in objects) with a clear error, or
alternatively implement a robust clone for those supported types; ensure you
modify the deepClone function to first check for globalThis.structuredClone and
otherwise run the validation/reject routine so the replay engine either
round-trips state or fails fast with a descriptive message.
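The shape the prompt describes can be sketched as follows. This is a minimal illustration, not the module's actual code: `assertJsonSafe` and its rejection list are assumptions standing in for whatever validation the fix lands on.

```javascript
// Hypothetical sketch: prefer structuredClone when available, otherwise
// fail fast on values that a JSON round-trip would silently corrupt.
function assertJsonSafe(value, trail = '$') {
  if (value === undefined || typeof value === 'function' || typeof value === 'bigint') {
    throw new Error(`deepClone: unsupported value at ${trail}`);
  }
  if (value instanceof Date || value instanceof Map || value instanceof Set) {
    throw new Error(`deepClone: unsupported type at ${trail}`);
  }
  if (value && typeof value === 'object') {
    for (const [key, child] of Object.entries(value)) {
      assertJsonSafe(child, `${trail}.${key}`);
    }
  }
}

function deepClone(value) {
  if (value === null || value === undefined) return value;
  if (typeof globalThis.structuredClone === 'function') {
    // Node 17+: lossless for Dates, Maps, Sets, BigInt, etc.
    return globalThis.structuredClone(value);
  }
  assertJsonSafe(value); // reject lossy inputs instead of corrupting them
  return JSON.parse(JSON.stringify(value));
}
```

Either branch gives the replay engine the guarantee the review asks for: state round-trips, or the call fails with a descriptive error instead of returning subtly different data.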
- Around line 155-163: The check only counts ACTIVE checkpoints so rewinds + new
checkpoints let timeline.checkpoints grow unbounded; change the cap check to
consider the total number of checkpoints instead of only ACTIVE ones (i.e., use
timeline.checkpoints.length or count all statuses) when comparing against
this.maxCheckpointsPerTimeline, updating the code that references
CheckpointStatus.ACTIVE and activeCount to use the total checkpoint count (still
throwing the same Error when the total meets/exceeds
this.maxCheckpointsPerTimeline).
- Around line 647-652: The emit('error') call in the _persist() catch block can
throw if no 'error' listeners exist; wrap the emit with a guard using
this.listenerCount('error') > 0 before calling this.emit(...) so
createTimeline() and checkpoint() (which await _persist()) won't be rejected by
an unhandled error event; leave the payload (operation, timelineId, error:
error.message) unchanged and follow the same pattern used in
dashboard-integration.js.
In `@tests/core/execution/time-travel.test.js`:
- Line 725: The forEach callback currently uses an expression-bodied arrow
(list.forEach((tl) => expect(tl.sessionId).toBe('s1'))) which returns the
assertion value and triggers useIterableCallbackReturn; change the callback to a
block-bodied arrow so it does not return a value (i.e., use list.forEach((tl) =>
{ expect(tl.sessionId).toBe('s1'); }) ), keeping the same symbols (list,
forEach, tl, sessionId) and the same assertion.
---
Nitpick comments:
In `@tests/core/execution/time-travel.test.js`:
- Around line 6-10: Replace the deep relative require for
TimeTravelEngine/TimelineStatus/CheckpointStatus with the project's absolute
import form: change require('../../../.aiox-core/core/execution/time-travel') to
the repo's absolute path (for example
require('@aiox-core/core/execution/time-travel') or the configured alias used
across the project) so the test imports TimeTravelEngine, TimelineStatus, and
CheckpointStatus via the project-wide absolute import convention.
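If the `@aiox-core` alias is not already configured, it would typically be wired through Jest's `moduleNameMapper`. The alias name and mapping below are assumptions for illustration, not the repository's actual config:

```javascript
// Hypothetical jest.config.js fragment mapping the absolute-import alias
// onto the hidden .aiox-core directory.
module.exports = {
  moduleNameMapper: {
    '^@aiox-core/(.*)$': '<rootDir>/.aiox-core/$1',
  },
};
```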
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bef2205d-ff12-48c5-8b43-a9c09d1f60b7
📒 Files selected for processing (4)
- .aios-core/core/execution/time-travel.js
- .aiox-core/core/execution/time-travel.js
- .aiox-core/install-manifest.yaml
- tests/core/execution/time-travel.test.js
```js
  async listTimelines(filter = {}) {
    // If no timelines loaded, try to load from disk
    if (this.timelines.size === 0) {
      await this._loadFromDisk();
    }
```
Persisted timelines are only hydrated through listTimelines().
After a restart, restoreCheckpoint(), fork(), rewind(), deleteTimeline(), getReplayPlan(), and getTimelineTree() still fail with "Timeline not found" unless the caller happened to call listTimelines() first. The load path needs to be shared across the rest of the public API or exposed as an explicit initialization step.
As per coding guidelines, .aiox-core/core/**: Ensure backwards compatibility — core modules are consumed by all agents.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/time-travel.js around lines 502 - 506, Public
timeline operations (restoreCheckpoint, fork, rewind, deleteTimeline,
getReplayPlan, getTimelineTree) only work if listTimelines() was called because
disk hydration is done there; fix by centralizing the load check into a shared
helper (e.g., ensureTimelinesLoaded) that calls this._loadFromDisk() when
this.timelines.size === 0, and call that helper at the start of each public
method (restoreCheckpoint, fork, rewind, deleteTimeline, getReplayPlan,
getTimelineTree) or alternatively expose an explicit init method that callers
can invoke; update those methods to call ensureTimelinesLoaded() (or require
init) before accessing this.timelines so persisted timelines are available after
restart.
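The centralized guard that prompt describes could look roughly like this. It is a skeleton under assumed names (`ensureTimelinesLoaded` comes from the review; `_loadFromDisk` is stubbed here), not the module's actual implementation:

```javascript
// Skeleton of the shared hydration guard suggested by the review.
class TimeTravelEngine {
  constructor() {
    this.timelines = new Map();
    this._loaded = false;
  }

  async _loadFromDisk() {
    // Illustrative stand-in for the real JSON hydration path.
    this._loaded = true;
  }

  async ensureTimelinesLoaded() {
    if (!this._loaded) {
      await this._loadFromDisk();
    }
  }

  async restoreCheckpoint(timelineId, checkpointId) {
    await this.ensureTimelinesLoaded(); // same guard at the top of each public method
    const timeline = this.timelines.get(timelineId);
    if (!timeline) throw new Error(`Timeline not found: ${timelineId}`);
    // ... rest of the restore logic
  }
}
```

Keying the guard off a `_loaded` flag rather than `this.timelines.size === 0` also covers the case where the map is legitimately empty after loading.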
Actionable comments posted: 4
♻️ Duplicate comments (1)
.aiox-core/core/execution/time-travel.js (1)
526-530: ⚠️ Potential issue | 🟠 Major
Persisted timelines still aren’t loaded consistently across the public API.
`listTimelines()` only hydrates when `this.timelines.size === 0`, and `_getTimeline()` never hydrates at all. That means an engine created with `autoPersist: false` against an existing storage dir still fails in `restoreCheckpoint()`, `fork()`, `rewind()`, `getReplayPlan()`, `compareTimelines()`, and `getTimelineTree()` until `listTimelines()` happens to run first. It also means `listTimelines()` can skip disk entirely once a single in-memory timeline exists.
A small fix is to key loading off `!this._loaded` and centralize it on the lookup path. Minimal fix:

```diff
 async listTimelines(filter = {}) {
-  if (this.timelines.size === 0) {
+  if (!this._loaded) {
     await this._loadFromDisk();
   }

 _getTimeline(timelineId) {
+  if (!this._loaded) {
+    this._loadFromDiskSync();
+  }
   const timeline = this.timelines.get(timelineId);
   if (!timeline) {
     throw new Error(`Timeline not found: ${timelineId}`);
   }
```

As per coding guidelines,
`.aiox-core/core/**`: Ensure backwards compatibility — core modules are consumed by all agents.
Also applies to: 636-642
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.aiox-core/core/execution/time-travel.js around lines 526 - 530, The current hydration logic only loads persisted timelines when this.timelines.size === 0 inside listTimelines and never in _getTimeline, causing many public API methods to operate on stale memory; change the condition to use a unified loaded flag: ensure _loadFromDisk() is invoked when !this._loaded (set true after successful load) and move that check into the central lookup path used by _getTimeline (and called by listTimelines, restoreCheckpoint, fork, rewind, getReplayPlan, compareTimelines, getTimelineTree) so any method that needs timelines first calls the common ensureLoaded() behavior that checks this._loaded and calls _loadFromDisk() if necessary.
🧹 Nitpick comments (1)
tests/core/execution/time-travel.test.js (1)
6-10: Use the repository’s absolute import convention here. This relative path traversal is the only way this suite can find the module right now, but it still violates the project import rule and makes the test more brittle than the public entrypoint/absolute path would be.
As per coding guidelines, `**/*.{js,jsx,ts,tsx}`: Use absolute imports instead of relative imports in all code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/time-travel.test.js` around lines 6 - 10, The test imports TimeTravelEngine, TimelineStatus, and CheckpointStatus via a relative deep path; change the require to use the repository's absolute/public entrypoint import convention (importing the same symbols TimeTravelEngine, TimelineStatus, CheckpointStatus from the package's absolute path used across the codebase) so the test relies on the public module resolution rather than '../../../.aiox-core/core/execution/time-travel' and follows the project's absolute-import rule.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.aios-core/core/execution/time-travel.js:
- Around line 1-742: This file contains an orphaned duplicate of the
TimeTravelEngine module (symbols: TimeTravelEngine, TimelineStatus,
CheckpointStatus, generateId, deepClone,
_persist/_loadFromDisk/_loadFromDiskSync) and should be removed to avoid
divergence from the canonical implementation; delete this duplicate file, ensure
only the canonical module remains in the repo, and scan/update any
imports/usages to reference the canonical module path so there are no remaining
references to the removed copy.
In @.aiox-core/core/execution/time-travel.js:
- Around line 407-441: The current compareTimelines() only checks a single-level
parent/child and misses sibling or multi-level fork ancestry; update the logic
that inspects tl1/tl2 (and their parentId chains) to walk each timeline’s
parentId chain upward until you find a common ancestor timeline id, then use the
matching timeline’s parentCheckpointId at the fork edge to locate the fork
checkpoint index in the ancestor’s checkpoints and populate sharedCheckpoints
(and set commonAncestorIndex and forkPointIndex) from that ancestor’s checkpoint
prefix (using deepClone for state as now); ensure you reference and update the
same symbols (compareTimelines, tl1, tl2, parentId, parentCheckpointId,
sharedCheckpoints, commonAncestorIndex, forkPointIndex) and add regression tests
exercising sibling forks and grandchild (fork-of-fork) scenarios so the new path
is covered.
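The parent-chain walk that prompt asks for can be sketched as a standalone helper. Field names (`parentId`, `parentCheckpointId`) follow the module; the function itself and its return shape are hypothetical:

```javascript
// Sketch: walk each timeline's parentId chain to find the nearest shared
// ancestor, instead of comparing serialized state. `timelines` is a Map
// keyed by timeline id.
function findCommonAncestor(timelines, id1, id2) {
  // Record every ancestor of id1, plus the checkpoint at which
  // id1's lineage branched off inside that ancestor.
  const ancestors = new Map();
  for (let id = id1, viaCp = null; id; ) {
    ancestors.set(id, viaCp);
    const tl = timelines.get(id);
    viaCp = tl.parentCheckpointId;
    id = tl.parentId;
  }
  // Walk up from id2 until we hit an ancestor id1 also has.
  for (let id = id2, viaCp = null; id; ) {
    if (ancestors.has(id)) {
      return {
        ancestorId: id,
        forkCheckpointId1: ancestors.get(id), // fork edge on id1's side (null if id is id1)
        forkCheckpointId2: viaCp,             // fork edge on id2's side (null if id is id2)
      };
    }
    const tl = timelines.get(id);
    viaCp = tl.parentCheckpointId;
    id = tl.parentId;
  }
  return null; // unrelated timelines
}
```

This handles sibling forks and fork-of-fork chains uniformly; `compareTimelines()` could then slice the ancestor's checkpoint prefix up to the earlier of the two fork checkpoints to populate `sharedCheckpoints`.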
In @.aiox-core/install-manifest.yaml:
- Around line 835-838: The manifest includes entries for
cognitive-load-balancer.js and swarm-intelligence.js that do not exist in the
repo; either add the missing modules (create the corresponding
cognitive-load-balancer.js and swarm-intelligence.js source files and update the
manifest with their correct sha256 hashes and sizes) or remove/update the two
manifest entries so they no longer reference those non-existent files; ensure
the manifest's hash/size fields match the added files if you choose to add them,
or delete the entries entirely if those modules are not part of this release.
In `@tests/core/execution/time-travel.test.js`:
- Around line 29-35: The mock implementation for readdir (the jest.fn() mock
that iterates store.keys() and splits each key with '/' to extract the filename)
is POSIX-only and fails on Windows; replace the manual split and
parts[parts.length - 1] logic with path.basename(...) to correctly get the
filename across platforms, and apply the same change to the other similar mock
implementation used later in the file that also splits with '/'.
---
Duplicate comments:
In @.aiox-core/core/execution/time-travel.js:
- Around line 526-530: The current hydration logic only loads persisted
timelines when this.timelines.size === 0 inside listTimelines and never in
_getTimeline, causing many public API methods to operate on stale memory; change
the condition to use a unified loaded flag: ensure _loadFromDisk() is invoked
when !this._loaded (set true after successful load) and move that check into the
central lookup path used by _getTimeline (and called by listTimelines,
restoreCheckpoint, fork, rewind, getReplayPlan, compareTimelines,
getTimelineTree) so any method that needs timelines first calls the common
ensureLoaded() behavior that checks this._loaded and calls _loadFromDisk() if
necessary.
---
Nitpick comments:
In `@tests/core/execution/time-travel.test.js`:
- Around line 6-10: The test imports TimeTravelEngine, TimelineStatus, and
CheckpointStatus via a relative deep path; change the require to use the
repository's absolute/public entrypoint import convention (importing the same
symbols TimeTravelEngine, TimelineStatus, CheckpointStatus from the package's
absolute path used across the codebase) so the test relies on the public module
resolution rather than '../../../.aiox-core/core/execution/time-travel' and
follows the project's absolute-import rule.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 17bd7afa-aa32-4a6d-86dc-cd6d8b84064c
📒 Files selected for processing (4)
- .aios-core/core/execution/time-travel.js
- .aiox-core/core/execution/time-travel.js
- .aiox-core/install-manifest.yaml
- tests/core/execution/time-travel.test.js
| /** | ||
| * Execution Time Travel | ||
| * Story EXE-4 - Checkpoint, replay, fork and rewind agent executions | ||
| * | ||
| * Provides "git for agent sessions" — timeline-based checkpoint, replay, | ||
| * fork and rewind capabilities for agent execution state. | ||
| * | ||
| * @module aiox-core/execution/time-travel | ||
| * @version 1.0.0 | ||
| * @story EXE-4 - Execution Time Travel | ||
| */ | ||
|
|
||
| const EventEmitter = require('events'); | ||
| const fs = require('fs'); | ||
| const path = require('path'); | ||
|
|
||
| /** | ||
| * Timeline statuses | ||
| * @enum {string} | ||
| */ | ||
| const TimelineStatus = { | ||
| ACTIVE: 'active', | ||
| ARCHIVED: 'archived', | ||
| }; | ||
|
|
||
| /** | ||
| * Checkpoint statuses | ||
| * @enum {string} | ||
| */ | ||
| const CheckpointStatus = { | ||
| ACTIVE: 'active', | ||
| REWOUND: 'rewound', | ||
| }; | ||
|
|
||
| let idCounter = 0; | ||
|
|
||
| /** | ||
| * Generate a unique ID with prefix | ||
| * @param {string} prefix - ID prefix | ||
| * @returns {string} Unique ID | ||
| */ | ||
| function generateId(prefix) { | ||
| idCounter += 1; | ||
| const timestamp = Date.now().toString(36); | ||
| const random = Math.random().toString(36).substring(2, 8); | ||
| return `${prefix}_${timestamp}${random}${idCounter}`; | ||
| } | ||
|
|
||
| /** | ||
| * Deep clone a value using structured clone or JSON fallback | ||
| * @param {*} value - Value to clone | ||
| * @returns {*} Deep cloned value | ||
| */ | ||
| function deepClone(value) { | ||
| if (value === null || value === undefined) return value; | ||
| if (typeof structuredClone === 'function') return structuredClone(value); | ||
| return JSON.parse(JSON.stringify(value)); | ||
| } | ||
|
|
||
| /** | ||
| * Execution Time Travel Engine | ||
| * | ||
| * Manages timeline-based execution history with checkpoint, replay, | ||
| * fork and rewind capabilities. Each agent session can have multiple | ||
| * timelines, and timelines can be forked at any checkpoint. | ||
| * | ||
| * @class TimeTravelEngine | ||
| * @extends EventEmitter | ||
| */ | ||
| class TimeTravelEngine extends EventEmitter { | ||
| /** | ||
| * Create a new TimeTravelEngine | ||
| * @param {Object} [config={}] - Configuration options | ||
| * @param {string} [config.storageDir='.aiox/timelines'] - Directory for timeline persistence | ||
| * @param {number} [config.maxCheckpointsPerTimeline=500] - Max checkpoints per timeline | ||
| * @param {boolean} [config.autoPersist=true] - Auto-persist timelines to disk | ||
| */ | ||
| constructor(config = {}) { | ||
| super(); | ||
|
|
||
| this.storageDir = config.storageDir ?? '.aiox/timelines'; | ||
| this.maxCheckpointsPerTimeline = config.maxCheckpointsPerTimeline ?? 500; | ||
| this.autoPersist = config.autoPersist ?? true; | ||
|
|
||
| /** @type {Map<string, Object>} */ | ||
| this.timelines = new Map(); | ||
|
|
||
| this._loaded = false; | ||
|
|
||
| this._stats = { | ||
| timelinesCreated: 0, | ||
| checkpointsCreated: 0, | ||
| forksCreated: 0, | ||
| rewindsPerformed: 0, | ||
| restoresPerformed: 0, | ||
| }; | ||
|
|
||
| // Sync load persisted timelines on startup | ||
| if (this.autoPersist) { | ||
| this._loadFromDiskSync(); | ||
| } | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // createTimeline | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Create a new timeline for a session | ||
| * @param {string} sessionId - Session identifier | ||
| * @param {Object} [metadata={}] - Additional metadata | ||
| * @returns {Promise<Object>} Created timeline | ||
| */ | ||
| async createTimeline(sessionId, metadata = {}) { | ||
| if (!sessionId) { | ||
| throw new Error('sessionId is required'); | ||
| } | ||
|
|
||
| const id = generateId('tl'); | ||
| const now = new Date().toISOString(); | ||
|
|
||
| const timeline = { | ||
| id, | ||
| sessionId, | ||
| parentId: null, | ||
| parentCheckpointId: null, | ||
| metadata: deepClone(metadata), | ||
| checkpoints: [], | ||
| forks: [], | ||
| createdAt: now, | ||
| updatedAt: now, | ||
| status: TimelineStatus.ACTIVE, | ||
| }; | ||
|
|
||
| this.timelines.set(id, timeline); | ||
| this._stats.timelinesCreated += 1; | ||
|
|
||
| this.emit('timeline:created', { | ||
| timelineId: id, | ||
| sessionId, | ||
| metadata: deepClone(metadata), | ||
| }); | ||
|
|
||
| await this._persist(timeline); | ||
|
|
||
| return deepClone(timeline); | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // checkpoint | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Save a checkpoint on a timeline | ||
| * @param {string} timelineId - Timeline ID | ||
| * @param {*} state - State to checkpoint (will be deep-cloned) | ||
| * @param {string} [label=''] - Human-readable label | ||
| * @returns {Promise<Object>} Created checkpoint | ||
| */ | ||
| async checkpoint(timelineId, state, label = '') { | ||
| const timeline = this._getTimeline(timelineId); | ||
|
|
||
| if (timeline.checkpoints.length >= this.maxCheckpointsPerTimeline) { | ||
| throw new Error( | ||
| `Timeline has reached maximum of ${this.maxCheckpointsPerTimeline} checkpoints` | ||
| ); | ||
| } | ||
|
|
||
| const id = generateId('cp'); | ||
| const checkpoint = { | ||
| id, | ||
| timelineId, | ||
| state: deepClone(state), | ||
| label: label ?? '', | ||
| timestamp: new Date().toISOString(), | ||
| index: timeline.checkpoints.length, | ||
| status: CheckpointStatus.ACTIVE, | ||
| }; | ||
|
|
||
| timeline.checkpoints.push(checkpoint); | ||
| timeline.updatedAt = new Date().toISOString(); | ||
| this._stats.checkpointsCreated += 1; | ||
|
|
||
| this.emit('checkpoint:created', { | ||
| timelineId, | ||
| checkpointId: id, | ||
| label: checkpoint.label, | ||
| index: checkpoint.index, | ||
| }); | ||
|
|
||
| await this._persist(timeline); | ||
|
|
||
| return deepClone(checkpoint); | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // restoreCheckpoint | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Restore state from a checkpoint | ||
| * @param {string} timelineId - Timeline ID | ||
| * @param {string} checkpointId - Checkpoint ID | ||
| * @returns {Promise<Object>} Restored checkpoint data | ||
| */ | ||
| async restoreCheckpoint(timelineId, checkpointId) { | ||
| const timeline = this._getTimeline(timelineId); | ||
| const checkpoint = this._getCheckpoint(timeline, checkpointId); | ||
|
|
||
| this._stats.restoresPerformed += 1; | ||
|
|
||
| this.emit('checkpoint:restored', { | ||
| timelineId, | ||
| checkpointId, | ||
| label: checkpoint.label, | ||
| index: checkpoint.index, | ||
| }); | ||
|
|
||
| return { | ||
| checkpointId: checkpoint.id, | ||
| state: deepClone(checkpoint.state), | ||
| label: checkpoint.label, | ||
| index: checkpoint.index, | ||
| timestamp: checkpoint.timestamp, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // fork | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Fork a timeline from a specific checkpoint | ||
| * @param {string} timelineId - Source timeline ID | ||
| * @param {string} checkpointId - Checkpoint to fork from | ||
| * @param {Object} [metadata={}] - Fork metadata | ||
| * @returns {Promise<Object>} New forked timeline | ||
| */ | ||
| async fork(timelineId, checkpointId, metadata = {}) { | ||
| const sourceTimeline = this._getTimeline(timelineId); | ||
| const checkpoint = this._getCheckpoint(sourceTimeline, checkpointId); | ||
|
|
||
| const cpIndex = sourceTimeline.checkpoints.indexOf(checkpoint); | ||
| const copiedCheckpoints = sourceTimeline.checkpoints | ||
| .slice(0, cpIndex + 1) | ||
| .map((cp) => deepClone(cp)); | ||
|
|
||
| const forkId = generateId('tl'); | ||
| const now = new Date().toISOString(); | ||
|
|
||
| // Update copied checkpoints to reference the new timeline | ||
| copiedCheckpoints.forEach((cp) => { | ||
| cp.timelineId = forkId; | ||
| }); | ||
|
|
||
| const forkTimeline = { | ||
| id: forkId, | ||
| sessionId: sourceTimeline.sessionId, | ||
| parentId: timelineId, | ||
| parentCheckpointId: checkpointId, | ||
| metadata: deepClone(metadata), | ||
| checkpoints: copiedCheckpoints, | ||
| forks: [], | ||
| createdAt: now, | ||
| updatedAt: now, | ||
| status: TimelineStatus.ACTIVE, | ||
| }; | ||
|
|
||
| this.timelines.set(forkId, forkTimeline); | ||
|
|
||
| // Register fork on source timeline | ||
| sourceTimeline.forks.push({ | ||
| timelineId: forkId, | ||
| checkpointId, | ||
| createdAt: now, | ||
| }); | ||
| sourceTimeline.updatedAt = now; | ||
|
|
||
| this._stats.timelinesCreated += 1; | ||
| this._stats.forksCreated += 1; | ||
|
|
||
| this.emit('timeline:forked', { | ||
| sourceTimelineId: timelineId, | ||
| forkTimelineId: forkId, | ||
| checkpointId, | ||
| metadata: deepClone(metadata), | ||
| }); | ||
|
|
||
| await this._persist(sourceTimeline); | ||
| await this._persist(forkTimeline); | ||
|
|
||
| return deepClone(forkTimeline); | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // rewind | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Rewind a timeline to a previous checkpoint | ||
| * Marks all checkpoints after the target as 'rewound' | ||
| * @param {string} timelineId - Timeline ID | ||
| * @param {string} checkpointId - Target checkpoint ID | ||
| * @returns {Promise<Object>} Rewind result | ||
| */ | ||
| async rewind(timelineId, checkpointId) { | ||
| const timeline = this._getTimeline(timelineId); | ||
| const checkpoint = this._getCheckpoint(timeline, checkpointId); | ||
|
|
||
| const cpIndex = timeline.checkpoints.indexOf(checkpoint); | ||
| const rewoundCheckpoints = []; | ||
|
|
||
| for (let i = cpIndex + 1; i < timeline.checkpoints.length; i++) { | ||
| const cp = timeline.checkpoints[i]; | ||
| if (cp.status === CheckpointStatus.ACTIVE) { | ||
| cp.status = CheckpointStatus.REWOUND; | ||
| rewoundCheckpoints.push(cp.id); | ||
| } | ||
| } | ||
|
|
||
| timeline.updatedAt = new Date().toISOString(); | ||
| this._stats.rewindsPerformed += 1; | ||
|
|
||
| this.emit('timeline:rewound', { | ||
| timelineId, | ||
| checkpointId, | ||
| rewoundCheckpoints, | ||
| }); | ||
|
|
||
| await this._persist(timeline); | ||
|
|
||
| return { | ||
| timelineId, | ||
| checkpointId, | ||
| state: deepClone(checkpoint.state), | ||
| rewoundCheckpoints, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // getReplayPlan | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Get a plan to replay between two checkpoints | ||
| * @param {string} timelineId - Timeline ID | ||
| * @param {string} fromCheckpointId - Starting checkpoint ID | ||
| * @param {string} toCheckpointId - Ending checkpoint ID | ||
| * @returns {Object} Replay plan with steps | ||
| */ | ||
| getReplayPlan(timelineId, fromCheckpointId, toCheckpointId) { | ||
| const timeline = this._getTimeline(timelineId); | ||
| const fromCp = this._getCheckpoint(timeline, fromCheckpointId); | ||
| const toCp = this._getCheckpoint(timeline, toCheckpointId); | ||
|
|
||
| const fromIndex = timeline.checkpoints.indexOf(fromCp); | ||
| const toIndex = timeline.checkpoints.indexOf(toCp); | ||
|
|
||
| if (fromIndex >= toIndex) { | ||
| throw new Error('fromCheckpoint must precede toCheckpoint in the timeline'); | ||
| } | ||
|
|
||
| const steps = timeline.checkpoints.slice(fromIndex + 1, toIndex + 1).map((cp) => ({ | ||
| checkpointId: cp.id, | ||
| label: cp.label, | ||
| index: cp.index, | ||
| status: cp.status, | ||
| timestamp: cp.timestamp, | ||
| })); | ||
|
|
||
| return { | ||
| timelineId, | ||
| from: { | ||
| checkpointId: fromCp.id, | ||
| label: fromCp.label, | ||
| index: fromCp.index, | ||
| }, | ||
| to: { | ||
| checkpointId: toCp.id, | ||
| label: toCp.label, | ||
| index: toCp.index, | ||
| }, | ||
| totalSteps: steps.length, | ||
| steps, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // compareTimelines | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Compare two timelines (useful for forks) | ||
| * @param {string} timelineId1 - First timeline ID | ||
| * @param {string} timelineId2 - Second timeline ID | ||
| * @returns {Object} Comparison result | ||
| */ | ||
| compareTimelines(timelineId1, timelineId2) { | ||
| const tl1 = this._getTimeline(timelineId1); | ||
| const tl2 = this._getTimeline(timelineId2); | ||
|
|
||
| // Find shared checkpoints using lineage (parent relationship) | ||
| const sharedCheckpoints = []; | ||
| let commonAncestorIndex = -1; | ||
| let forkPointIndex = 0; | ||
|
|
||
| // Check if tl2 is a fork of tl1 (or vice versa) | ||
| if (tl2.parentId === timelineId1) { | ||
| // tl2 was forked from tl1 — use parentCheckpointId to find fork point | ||
| const forkCpId = tl2.parentCheckpointId; | ||
| const forkCpIdx = tl1.checkpoints.findIndex((cp) => cp.id === forkCpId); | ||
|
|
||
| if (forkCpIdx >= 0) { | ||
| for (let i = 0; i <= forkCpIdx; i++) { | ||
| sharedCheckpoints.push({ | ||
| index: i, | ||
| label: tl1.checkpoints[i].label, | ||
| state: deepClone(tl1.checkpoints[i].state), | ||
| }); | ||
| commonAncestorIndex = i; | ||
| } | ||
| forkPointIndex = forkCpIdx + 1; | ||
| } | ||
| } else if (tl1.parentId === timelineId2) { | ||
| // tl1 was forked from tl2 — use parentCheckpointId to find fork point | ||
| const forkCpId = tl1.parentCheckpointId; | ||
| const forkCpIdx = tl2.checkpoints.findIndex((cp) => cp.id === forkCpId); | ||
|
|
||
| if (forkCpIdx >= 0) { | ||
| for (let i = 0; i <= forkCpIdx; i++) { | ||
| sharedCheckpoints.push({ | ||
| index: i, | ||
| label: tl2.checkpoints[i].label, | ||
| state: deepClone(tl2.checkpoints[i].state), | ||
| }); | ||
| commonAncestorIndex = i; | ||
| } | ||
| forkPointIndex = forkCpIdx + 1; | ||
| } | ||
| } | ||
| // For unrelated timelines, sharedCheckpoints stays empty | ||
|
|
||
| // Divergent checkpoints | ||
| const onlyInTimeline1 = tl1.checkpoints.slice(forkPointIndex).map((cp) => ({ | ||
| checkpointId: cp.id, | ||
| label: cp.label, | ||
| index: cp.index, | ||
| })); | ||
|
|
||
| const onlyInTimeline2 = tl2.checkpoints.slice(forkPointIndex).map((cp) => ({ | ||
| checkpointId: cp.id, | ||
| label: cp.label, | ||
| index: cp.index, | ||
| })); | ||
|
|
||
| return { | ||
| timeline1: { | ||
| id: tl1.id, | ||
| sessionId: tl1.sessionId, | ||
| totalCheckpoints: tl1.checkpoints.length, | ||
| }, | ||
| timeline2: { | ||
| id: tl2.id, | ||
| sessionId: tl2.sessionId, | ||
| totalCheckpoints: tl2.checkpoints.length, | ||
| }, | ||
| sharedCheckpoints, | ||
| commonAncestorIndex, | ||
| divergentCheckpoints: { | ||
| onlyInTimeline1, | ||
| onlyInTimeline2, | ||
| }, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // getTimelineTree | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Get the full tree of a timeline and its forks | ||
| * @param {string} timelineId - Root timeline ID | ||
| * @returns {Object} Tree structure | ||
| */ | ||
| getTimelineTree(timelineId) { | ||
| const timeline = this._getTimeline(timelineId); | ||
|
|
||
| const activeCheckpointCount = timeline.checkpoints.filter( | ||
| (cp) => cp.status === CheckpointStatus.ACTIVE | ||
| ).length; | ||
|
|
||
| const children = timeline.forks | ||
| .map((fork) => { | ||
| if (this.timelines.has(fork.timelineId)) { | ||
| return this.getTimelineTree(fork.timelineId); | ||
| } | ||
| return null; | ||
| }) | ||
| .filter(Boolean); | ||
|
|
||
| return { | ||
| id: timeline.id, | ||
| sessionId: timeline.sessionId, | ||
| parentId: timeline.parentId, | ||
| status: timeline.status, | ||
| checkpointCount: timeline.checkpoints.length, | ||
| activeCheckpointCount, | ||
| forkCount: timeline.forks.length, | ||
| children, | ||
| createdAt: timeline.createdAt, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // listTimelines | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * List all timelines with optional filtering | ||
| * @param {Object} [filter={}] - Filter options | ||
| * @param {string} [filter.sessionId] - Filter by session ID | ||
| * @param {string} [filter.status] - Filter by status | ||
| * @param {string} [filter.parentId] - Filter by parent ID | ||
| * @returns {Promise<Object[]>} Filtered timeline list | ||
| */ | ||
| async listTimelines(filter = {}) { | ||
| // If no timelines loaded, try to load from disk | ||
| if (this.timelines.size === 0) { | ||
| await this._loadFromDisk(); | ||
| } | ||
|
|
||
| let timelines = Array.from(this.timelines.values()); | ||
|
|
||
| if (filter.sessionId) { | ||
| timelines = timelines.filter((tl) => tl.sessionId === filter.sessionId); | ||
| } | ||
|
|
||
| if (filter.status) { | ||
| timelines = timelines.filter((tl) => tl.status === filter.status); | ||
| } | ||
|
|
||
| if (filter.parentId) { | ||
| timelines = timelines.filter((tl) => tl.parentId === filter.parentId); | ||
| } | ||
|
|
||
| return timelines.map((tl) => deepClone(tl)); | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // deleteTimeline | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Delete a timeline | ||
| * @param {string} timelineId - Timeline ID to delete | ||
| * @param {Object} [options={}] - Delete options | ||
| * @param {boolean} [options.deleteForks=false] - Also delete all forks | ||
| * @returns {Promise<Object>} Deletion result | ||
| */ | ||
| async deleteTimeline(timelineId, options = {}) { | ||
| const timeline = this._getTimeline(timelineId); | ||
| const deleted = []; | ||
|
|
||
| // Recursively delete forks if requested | ||
| if (options.deleteForks) { | ||
| for (const fork of timeline.forks) { | ||
| if (this.timelines.has(fork.timelineId)) { | ||
| const result = await this.deleteTimeline(fork.timelineId, { deleteForks: true }); | ||
| deleted.push(...result.deleted); | ||
| } | ||
| } | ||
| } | ||
|
|
||
| // Remove fork reference from parent | ||
| if (timeline.parentId) { | ||
| const parent = this.timelines.get(timeline.parentId); | ||
| if (parent) { | ||
| parent.forks = parent.forks.filter((f) => f.timelineId !== timelineId); | ||
| parent.updatedAt = new Date().toISOString(); | ||
| await this._persist(parent); | ||
| } | ||
| } | ||
|
|
||
| // Remove from memory | ||
| this.timelines.delete(timelineId); | ||
| deleted.push(timelineId); | ||
|
|
||
| // Remove from disk | ||
| try { | ||
| const filePath = path.join(this.storageDir, `${timelineId}.json`); | ||
| await fs.promises.unlink(filePath); | ||
| } catch (_err) { | ||
| // File may not exist, that's fine | ||
| } | ||
|
|
||
| return { deleted, count: deleted.length }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // getStats | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Get engine statistics | ||
| * @returns {Object} Statistics | ||
| */ | ||
| getStats() { | ||
| let totalCheckpoints = 0; | ||
| let activeTimelines = 0; | ||
|
|
||
| for (const tl of this.timelines.values()) { | ||
| totalCheckpoints += tl.checkpoints.length; | ||
| if (tl.status === TimelineStatus.ACTIVE) { | ||
| activeTimelines += 1; | ||
| } | ||
| } | ||
|
|
||
| return { | ||
| ...this._stats, | ||
| totalTimelines: this.timelines.size, | ||
| activeTimelines, | ||
| totalCheckpoints, | ||
| }; | ||
| } | ||
|
|
||
| // --------------------------------------------------------------------------- | ||
| // Private helpers | ||
| // --------------------------------------------------------------------------- | ||
|
|
||
| /** | ||
| * Get a timeline by ID or throw | ||
| * @private | ||
| * @param {string} timelineId - Timeline ID | ||
| * @returns {Object} Timeline object | ||
| */ | ||
| _getTimeline(timelineId) { | ||
| const timeline = this.timelines.get(timelineId); | ||
| if (!timeline) { | ||
| throw new Error(`Timeline not found: ${timelineId}`); | ||
| } | ||
| return timeline; | ||
| } | ||
|
|
||
| /** | ||
| * Get a checkpoint from a timeline or throw | ||
| * @private | ||
| * @param {Object} timeline - Timeline object | ||
| * @param {string} checkpointId - Checkpoint ID | ||
| * @returns {Object} Checkpoint object | ||
| */ | ||
| _getCheckpoint(timeline, checkpointId) { | ||
| const checkpoint = timeline.checkpoints.find((cp) => cp.id === checkpointId); | ||
| if (!checkpoint) { | ||
| throw new Error(`Checkpoint not found: ${checkpointId}`); | ||
| } | ||
| return checkpoint; | ||
| } | ||
|
|
||
| /** | ||
| * Persist a timeline to disk | ||
| * @private | ||
| * @param {Object} timeline - Timeline to persist | ||
| */ | ||
| async _persist(timeline) { | ||
| if (!this.autoPersist) return; | ||
|
|
||
| try { | ||
| await fs.promises.mkdir(this.storageDir, { recursive: true }); | ||
| const filePath = path.join(this.storageDir, `${timeline.id}.json`); | ||
| await fs.promises.writeFile(filePath, JSON.stringify(timeline, null, 2), 'utf-8'); | ||
| } catch (error) { | ||
| if (this.listenerCount('error') > 0) { | ||
| this.emit('error', { | ||
| operation: 'persist', | ||
| timelineId: timeline.id, | ||
| error: error.message, | ||
| }); | ||
| } | ||
| } | ||
| } | ||
|
|
||
| /** | ||
| * Load all timelines from disk (sync, used by constructor) | ||
| * @private | ||
| */ | ||
| _loadFromDiskSync() { | ||
| if (this._loaded) return; | ||
| try { | ||
| const files = fs.readdirSync(this.storageDir); | ||
| const jsonFiles = files.filter((f) => f.endsWith('.json')); | ||
|
|
||
| for (const file of jsonFiles) { | ||
| try { | ||
| const filePath = path.join(this.storageDir, file); | ||
| const data = fs.readFileSync(filePath, 'utf-8'); | ||
| const timeline = JSON.parse(data); | ||
| if (timeline.id) { | ||
| this.timelines.set(timeline.id, timeline); | ||
| } | ||
| } catch (_err) { | ||
| // Skip corrupt files | ||
| } | ||
| } | ||
| } catch (_err) { | ||
| // Directory may not exist yet | ||
| } | ||
| this._loaded = true; | ||
| } | ||
|
|
||
| /** | ||
| * Load all timelines from disk | ||
| * @private | ||
| */ | ||
| async _loadFromDisk() { | ||
| if (this._loaded) return; | ||
| try { | ||
| const files = await fs.promises.readdir(this.storageDir); | ||
| const jsonFiles = files.filter((f) => f.endsWith('.json')); | ||
|
|
||
| for (const file of jsonFiles) { | ||
| try { | ||
| const filePath = path.join(this.storageDir, file); | ||
| const data = await fs.promises.readFile(filePath, 'utf-8'); | ||
| const timeline = JSON.parse(data); | ||
| if (timeline.id) { | ||
| this.timelines.set(timeline.id, timeline); | ||
| } | ||
| } catch (_err) { | ||
| // Skip corrupt files | ||
| } | ||
| } | ||
| } catch (_err) { | ||
| // Directory may not exist yet | ||
| } | ||
| this._loaded = true; | ||
| } | ||
| } | ||
|
|
||
| module.exports = TimeTravelEngine; | ||
| module.exports.TimeTravelEngine = TimeTravelEngine; | ||
| module.exports.TimelineStatus = TimelineStatus; | ||
| module.exports.CheckpointStatus = CheckpointStatus; |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== time-travel files =="
fd '^time-travel\.js$' . --hidden
echo
echo "== references to .aios-core =="
rg -n '\.aios-core|aios-core/core/execution/time-travel' . --hidden
echo
echo "== manifest references =="
rg -n 'core/execution/time-travel\.js|aios-core' .aiox-core/install-manifest.yaml tests --hidden
Repository: SynkraAI/aiox-core
Length of output: 203
Remove the duplicate module under .aios-core/ — it's untracked and will drift from the canonical version.
The codebase has two copies of time-travel.js: one at .aios-core/core/execution/time-travel.js (this file) and one at .aiox-core/core/execution/time-travel.js (the real module). The .aios-core directory has no references in the codebase, tests, or manifest, making it an orphaned duplicate that can diverge immediately. Remove .aios-core/ or clarify its purpose if it's intentional.
🧰 Tools
🪛 GitHub Check: ESLint
[warning] 489-489:
Missing trailing comma
[warning] 165-165:
Missing trailing comma
🪛 GitHub Check: Performance Metrics
[warning] 489-489:
Missing trailing comma
[warning] 165-165:
Missing trailing comma
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aios-core/core/execution/time-travel.js around lines 1 - 742, This file
contains an orphaned duplicate of the TimeTravelEngine module (symbols:
TimeTravelEngine, TimelineStatus, CheckpointStatus, generateId, deepClone,
_persist/_loadFromDisk/_loadFromDiskSync) and should be removed to avoid
divergence from the canonical implementation; delete this duplicate file, ensure
only the canonical module remains in the repo, and scan/update any
imports/usages to reference the canonical module path so there are no remaining
references to the removed copy.
| // Check if tl2 is a fork of tl1 (or vice versa) | ||
| if (tl2.parentId === timelineId1) { | ||
| // tl2 was forked from tl1 — use parentCheckpointId to find fork point | ||
| const forkCpId = tl2.parentCheckpointId; | ||
| const forkCpIdx = tl1.checkpoints.findIndex((cp) => cp.id === forkCpId); | ||
|
|
||
| if (forkCpIdx >= 0) { | ||
| for (let i = 0; i <= forkCpIdx; i++) { | ||
| sharedCheckpoints.push({ | ||
| index: i, | ||
| label: tl1.checkpoints[i].label, | ||
| state: deepClone(tl1.checkpoints[i].state), | ||
| }); | ||
| commonAncestorIndex = i; | ||
| } | ||
| forkPointIndex = forkCpIdx + 1; | ||
| } | ||
| } else if (tl1.parentId === timelineId2) { | ||
| // tl1 was forked from tl2 — use parentCheckpointId to find fork point | ||
| const forkCpId = tl1.parentCheckpointId; | ||
| const forkCpIdx = tl2.checkpoints.findIndex((cp) => cp.id === forkCpId); | ||
|
|
||
| if (forkCpIdx >= 0) { | ||
| for (let i = 0; i <= forkCpIdx; i++) { | ||
| sharedCheckpoints.push({ | ||
| index: i, | ||
| label: tl2.checkpoints[i].label, | ||
| state: deepClone(tl2.checkpoints[i].state), | ||
| }); | ||
| commonAncestorIndex = i; | ||
| } | ||
| forkPointIndex = forkCpIdx + 1; | ||
| } | ||
| } | ||
| // For unrelated timelines, sharedCheckpoints stays empty |
Handle sibling and multi-level fork ancestry in compareTimelines().
Line 408 only recognizes a direct parent/child pair. Comparing sibling forks or a fork-of-a-fork against its root currently falls through to “no shared checkpoints”, even though they clearly share lineage. That breaks the advertised nested-fork use case and makes compareTimelines() unreliable outside the simplest case.
Please walk each timeline’s parentId chain to a common ancestor, then derive the shared checkpoint prefix from the relevant parentCheckpointId. I’d also add regression coverage for sibling forks and grandchild forks in the same change.
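The suggested ancestor walk can be sketched independently of the engine. A minimal illustration, assuming timelines are plain `{ id, parentId }` objects kept in a Map; the helper names here are hypothetical, not part of the actual module:

```javascript
// Sketch of the suggested fix: walk parentId links upward and find the first
// timeline that appears in both chains. Helper names are illustrative.
function ancestorChain(timelines, id) {
  const chain = [];
  let current = timelines.get(id);
  while (current) {
    chain.push(current.id);
    current = current.parentId ? timelines.get(current.parentId) : null;
  }
  return chain; // [self, parent, grandparent, ...]
}

function findCommonAncestor(timelines, id1, id2) {
  const seen = new Set(ancestorChain(timelines, id1));
  // The first ancestor of id2 that also appears in id1's chain is shared.
  for (const id of ancestorChain(timelines, id2)) {
    if (seen.has(id)) return id;
  }
  return null; // unrelated timelines share nothing
}
```

With this, two sibling forks of the same root resolve to that root, and a grandchild fork compared against the root resolves through the intermediate fork, covering the cases the current one-hop check misses.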
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/time-travel.js around lines 407 - 441, The current
compareTimelines() only checks a single-level parent/child and misses sibling or
multi-level fork ancestry; update the logic that inspects tl1/tl2 (and their
parentId chains) to walk each timeline’s parentId chain upward until you find a
common ancestor timeline id, then use the matching timeline’s parentCheckpointId
at the fork edge to locate the fork checkpoint index in the ancestor’s
checkpoints and populate sharedCheckpoints (and set commonAncestorIndex and
forkPointIndex) from that ancestor’s checkpoint prefix (using deepClone for
state as now); ensure you reference and update the same symbols
(compareTimelines, tl1, tl2, parentId, parentCheckpointId, sharedCheckpoints,
commonAncestorIndex, forkPointIndex) and add regression tests exercising sibling
forks and grandchild (fork-of-fork) scenarios so the new path is covered.
.aiox-core/install-manifest.yaml
Outdated
| - path: core/orchestration/cognitive-load-balancer.js | ||
| hash: sha256:30ca230b9998a4d855bdb7095d6b51cdfb1797c94073500a7d867bf7c73586dd | ||
| type: core | ||
| size: 35104 |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
for name in cognitive-load-balancer.js swarm-intelligence.js; do
echo "== $name =="
fd "^${name//./\\.}$" .aiox-core --type f
echo
done
Repository: SynkraAI/aiox-core
Length of output: 124
Manifest entries reference non-existent files.
core/orchestration/cognitive-load-balancer.js and core/orchestration/swarm-intelligence.js do not exist in the repository. These manifest entries (lines 835-838 and 943-946) will fail validation and advertise non-existent modules if the files are not added in this PR.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/install-manifest.yaml around lines 835 - 838, The manifest
includes entries for cognitive-load-balancer.js and swarm-intelligence.js that
do not exist in the repo; either add the missing modules (create the
corresponding cognitive-load-balancer.js and swarm-intelligence.js source files
and update the manifest with their correct sha256 hashes and sizes) or
remove/update the two manifest entries so they no longer reference those
non-existent files; ensure the manifest's hash/size fields match the added files
if you choose to add them, or delete the entries entirely if those modules are
not part of this release.
| readdir: jest.fn().mockImplementation(() => { | ||
| const files = []; | ||
| for (const key of store.keys()) { | ||
| const parts = key.split('/'); | ||
| files.push(parts[parts.length - 1]); | ||
| } | ||
| return Promise.resolve(files); |
The fs mock is POSIX-only.
Both mock directory readers split paths with '/'. On Windows, path.join() will produce backslashes, so these tests can stop finding persisted files even though the implementation is correct. Please use path.basename() instead of manual splitting.
Minimal fix
jest.mock('fs', () => {
const actualFs = jest.requireActual('fs');
+ const actualPath = jest.requireActual('path');
const store = new Map();
@@
readdir: jest.fn().mockImplementation(() => {
const files = [];
for (const key of store.keys()) {
- const parts = key.split('/');
- files.push(parts[parts.length - 1]);
+ files.push(actualPath.basename(key));
}
return Promise.resolve(files);
}),
@@
const readdirSync = jest.fn().mockImplementation((dir) => {
const files = [];
for (const key of store.keys()) {
- const parts = key.split('/');
- files.push(parts[parts.length - 1]);
+ files.push(actualPath.basename(key));
}
return files;
});
Also applies to: 48-55
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/core/execution/time-travel.test.js` around lines 29 - 35, The mock
implementation for readdir (the jest.fn() mock that iterates store.keys() and
splits each key with '/' to extract the filename) is POSIX-only and fails on
Windows; replace the manual split and parts[parts.length - 1] logic with
path.basename(...) to correctly get the filename across platforms, and apply the
same change to the other similar mock implementation used later in the file that
also splits with '/'.
…ay/fork/rewind
Implements the Time Travel engine for agent executions (Story EXE-4). Allows creating timelines, checkpoints, forks and rewinds, like git for sessions.
- createTimeline/checkpoint/fork/rewind/restoreCheckpoint
- getReplayPlan for replaying between checkpoints
- compareTimelines for comparing forks
- getTimelineTree for visualizing the fork tree
- Persistence in .aiox/timelines/ (one JSON file per timeline)
- 85 tests covering all methods and edge cases
- Guards emit('error') with listenerCount to avoid throwing when no listener is attached
- Replaces JSON.parse/stringify with structuredClone, with a JSON fallback
- Adds synchronous timeline loading in the constructor (_loadFromDiskSync)
- Compares timelines by lineage (parentId) instead of serialized state
- Uses the total checkpoint count instead of the active count for the limit
- Fixes a forEach callback for Biome lint compatibility
- Adds a test for persistence failure without an error listener
c042bb5 to cf215e9 (Compare)
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Actionable comments posted: 1
♻️ Duplicate comments (4)
tests/core/execution/time-travel.test.js (1)
29-35: ⚠️ Potential issue | 🟡 Minor: Make the fs mock filename extraction platform-neutral.
Both directory readers peel filenames off with
key.split('/'), so these tests can stop seeing persisted files on Windows. Use path.basename(key) in both mocks instead.
Minimal fix
jest.mock('fs', () => { const actualFs = jest.requireActual('fs'); + const actualPath = jest.requireActual('path'); const store = new Map(); @@ readdir: jest.fn().mockImplementation(() => { const files = []; for (const key of store.keys()) { - const parts = key.split('/'); - files.push(parts[parts.length - 1]); + files.push(actualPath.basename(key)); } return Promise.resolve(files); }), @@ const readdirSync = jest.fn().mockImplementation((dir) => { const files = []; for (const key of store.keys()) { - const parts = key.split('/'); - files.push(parts[parts.length - 1]); + files.push(actualPath.basename(key)); } return files; });Also applies to: 48-55
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/time-travel.test.js` around lines 29 - 35, The tests use a platform-dependent filename extraction in the fs mocks (readdir: jest.fn().mockImplementation(() => { ... key.split('/') ... })) which breaks on Windows; change both mocks to use path.basename(key) instead of key.split('/'), and ensure the test file imports Node's path module (e.g., const path = require('path')) so the mocks in time-travel.test.js correctly produce filenames on all platforms (apply the same change to the second mock block around the 48-55 region).
.aiox-core/core/execution/time-travel.js (2)
407-441: ⚠️ Potential issue | 🟠 Major: Handle more than one fork hop in compareTimelines().
This still only recognizes a direct parent/child pair. Comparing sibling forks, or a fork-of-a-fork against its root, will report no shared checkpoints even though the copied checkpoint lineage is shared. Please derive the common prefix from preserved checkpoint identity / ancestor chain instead of a one-hop parentId check, and add regression coverage for sibling and multi-level fork comparisons.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.aiox-core/core/execution/time-travel.js around lines 407 - 441, compareTimelines currently only checks a one-hop parent/child relationship (using tl1.parentId / tl2.parentId and parentCheckpointId) so sibling forks or multi-level forks are missed; update compareTimelines to walk each timeline's ancestor chain using the parentId and parentCheckpointId fields to build the full lineage of checkpoint IDs (for tl1 and tl2), find the longest common prefix of checkpoint identity between those lineages, populate sharedCheckpoints by iterating that common prefix (using checkpoints and deepClone) and set commonAncestorIndex and forkPointIndex accordingly instead of the single-hop logic, and add regression tests that cover sibling forks and fork-of-a-fork comparisons to ensure shared checkpoints are detected.
98-101: ⚠️ Potential issue | 🟠 Major: Don't couple timeline hydration to write mode or current map size.
When autoPersist is false, the constructor skips disk loading, and listTimelines() only hydrates if this.timelines.size === 0. Creating one in-memory timeline first, or calling restoreCheckpoint(), fork(), rewind(), compareTimelines(), etc. directly, can leave persisted timelines permanently inaccessible in that engine instance.
Minimal fix
- if (this.autoPersist) {
-   this._loadFromDiskSync();
- }
+ this._loadFromDiskSync();
@@
- if (this.timelines.size === 0) {
+ if (!this._loaded) {
    await this._loadFromDisk();
  }
As per coding guidelines, .aiox-core/core/**: Ensure backwards compatibility — core modules are consumed by all agents.
Also applies to: 526-530
.aiox-core/core/**: Ensure backwards compatibility — core modules are consumed by all agents.Also applies to: 526-530
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.aiox-core/core/execution/time-travel.js around lines 98 - 101, The constructor currently conditions disk hydration on this.autoPersist and listTimelines depends on this.timelines.size, which can leave persisted timelines inaccessible; update time-travel.js so disk loading is decoupled from write-mode and map size: call this._loadFromDiskSync() (or its async counterpart) unconditionally during initialization (or set a one-time flag like this._hydratedFromDisk and ensure _loadFromDiskSync runs once), and change listTimelines() to not gate hydration on this.timelines.size but instead check the one-time hydration flag and trigger disk hydration if not yet done; ensure methods mentioned (restoreCheckpoint, fork, rewind, compareTimelines) rely on the hydrated state/flag rather than map size so persisted timelines are always discoverable..aios-core/core/execution/time-travel.js (1)
1-742:⚠️ Potential issue | 🟠 MajorRemove the stray
.aios-corecopy oftime-travel.js.This file duplicates the canonical implementation under
.aiox-core/core/execution/time-travel.js, while the manifest and tests added in this PR target only the.aiox-corepath. Keeping both copies will drift immediately and makes future fixes easy to miss.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.aios-core/core/execution/time-travel.js around lines 1 - 742, Delete the stray duplicate file that defines and exports TimeTravelEngine (and the TimelineStatus and CheckpointStatus enums — e.g. the file that ends with "module.exports = TimeTravelEngine" / "module.exports.TimelineStatus" / "module.exports.CheckpointStatus"); remove the file from the repo and from any manifest or test entries so only the canonical implementation remains, search for and update any require/import references to point to the canonical module, and run tests to confirm nothing still references the deleted duplicate.
🧹 Nitpick comments (1)
tests/core/execution/time-travel.test.js (1)
6-10: Use the project's absolute import convention here.
This require('../../../...') hard-codes directory depth and makes the test brittle to file moves. Please switch it to the repo's absolute module path form instead of walking up the tree.
As per coding guidelines, **/*.{js,jsx,ts,tsx}: Use absolute imports instead of relative imports in all code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/core/execution/time-travel.test.js` around lines 6 - 10, Replace the brittle relative require that walks up the tree and instead import the TimeTravelEngine, TimelineStatus, and CheckpointStatus using the project's absolute module path convention (i.e., require the module via the repo's absolute path rather than '../../../.aiox-core/core/execution/time-travel'); update the top of the test to require the same three symbols (TimeTravelEngine, TimelineStatus, CheckpointStatus) from the absolute module path used elsewhere in the project.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.aiox-core/core/execution/time-travel.js:
- Around line 54-57: The persistence layer currently uses JSON.stringify/parse
while deepClone was expanded to accept structured-cloneable values, causing
non-JSON types to be lost or crash; update the implementation so persistence
matches deepClone: either (A) validate and enforce JSON-safe timeline/checkpoint
inputs at public boundaries (reject or convert Date/Map/Set/BigInt/undefined
before storing) or (B) replace JSON.stringify/JSON.parse in _persist() and the
corresponding loader/rehydration code with a serializer that preserves
Date/Map/Set/BigInt (e.g., Node's v8.serialize/v8.deserialize or a library such
as superjson/serialize-javascript/devalue), and ensure deepClone(), _persist(),
and all timeline load/restore routines use the same serialization strategy so
round-trips are preserved.
---
Duplicate comments:
In @.aios-core/core/execution/time-travel.js:
- Around line 1-742: Delete the stray duplicate file that defines and exports
TimeTravelEngine (and the TimelineStatus and CheckpointStatus enums — e.g. the
file that ends with "module.exports = TimeTravelEngine" /
"module.exports.TimelineStatus" / "module.exports.CheckpointStatus"); remove the
file from the repo and from any manifest or test entries so only the canonical
implementation remains, search for and update any require/import references to
point to the canonical module, and run tests to confirm nothing still references
the deleted duplicate.
In @.aiox-core/core/execution/time-travel.js:
- Around line 407-441: compareTimelines currently only checks a one-hop
parent/child relationship (using tl1.parentId / tl2.parentId and
parentCheckpointId) so sibling forks or multi-level forks are missed; update
compareTimelines to walk each timeline's ancestor chain using the parentId and
parentCheckpointId fields to build the full lineage of checkpoint IDs (for tl1
and tl2), find the longest common prefix of checkpoint identity between those
lineages, populate sharedCheckpoints by iterating that common prefix (using
checkpoints and deepClone) and set commonAncestorIndex and forkPointIndex
accordingly instead of the single-hop logic, and add regression tests that cover
sibling forks and fork-of-a-fork comparisons to ensure shared checkpoints are
detected.
- Around line 98-101: The constructor currently conditions disk hydration on
this.autoPersist and listTimelines depends on this.timelines.size, which can
leave persisted timelines inaccessible; update time-travel.js so disk loading is
decoupled from write-mode and map size: call this._loadFromDiskSync() (or its
async counterpart) unconditionally during initialization (or set a one-time flag
like this._hydratedFromDisk and ensure _loadFromDiskSync runs once), and change
listTimelines() to not gate hydration on this.timelines.size but instead check
the one-time hydration flag and trigger disk hydration if not yet done; ensure
methods mentioned (restoreCheckpoint, fork, rewind, compareTimelines) rely on
the hydrated state/flag rather than map size so persisted timelines are always
discoverable.
In `@tests/core/execution/time-travel.test.js`:
- Around line 29-35: The tests use a platform-dependent filename extraction in
the fs mocks (readdir: jest.fn().mockImplementation(() => { ... key.split('/')
... })) which breaks on Windows; change both mocks to use path.basename(key)
instead of key.split('/'), and ensure the test file imports Node's path module
(e.g., const path = require('path')) so the mocks in time-travel.test.js
correctly produce filenames on all platforms (apply the same change to the
second mock block around the 48-55 region).
---
Nitpick comments:
In `@tests/core/execution/time-travel.test.js`:
- Around line 6-10: Replace the brittle relative require that walks up the tree
and instead import the TimeTravelEngine, TimelineStatus, and CheckpointStatus
using the project's absolute module path convention (i.e., require the module
via the repo's absolute path rather than
'../../../.aiox-core/core/execution/time-travel'); update the top of the test to
require the same three symbols (TimeTravelEngine, TimelineStatus,
CheckpointStatus) from the absolute module path used elsewhere in the project.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2780e060-564b-4f8c-a147-a9071575a82e
📒 Files selected for processing (4)
.aios-core/core/execution/time-travel.js
.aiox-core/core/execution/time-travel.js
.aiox-core/install-manifest.yaml
tests/core/execution/time-travel.test.js
| function deepClone(value) { | ||
| if (value === null || value === undefined) return value; | ||
| if (typeof structuredClone === 'function') return structuredClone(value); | ||
| return JSON.parse(JSON.stringify(value)); |
JSON persistence still breaks non-JSON checkpoint state.
deepClone() now accepts structured-cloneable values, but _persist() flattens the entire timeline through JSON.stringify() and both loaders rehydrate with JSON.parse(). Date, Map, Set, nested undefined, etc. will not round-trip across restart, and BigInt will fail persistence entirely. Either validate public inputs as JSON-safe up front or switch persistence to a serializer that preserves the types you now accept.
Also applies to: 669-670, 694-696, 722-724
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.aiox-core/core/execution/time-travel.js around lines 54 - 57, The
persistence layer currently uses JSON.stringify/parse while deepClone was
expanded to accept structured-cloneable values, causing non-JSON types to be
lost or crash; update the implementation so persistence matches deepClone:
either (A) validate and enforce JSON-safe timeline/checkpoint inputs at public
boundaries (reject or convert Date/Map/Set/BigInt/undefined before storing) or
(B) replace JSON.stringify/JSON.parse in _persist() and the corresponding
loader/rehydration code with a serializer that preserves Date/Map/Set/BigInt
(e.g., Node's v8.serialize/v8.deserialize or a library such as
superjson/serialize-javascript/devalue), and ensure deepClone(), _persist(), and
all timeline load/restore routines use the same serialization strategy so
round-trips are preserved.
Summary
Implements the Execution Time Travel Engine (Story EXE-4): a checkpoint/replay/fork/rewind system for agent executions, working like "git for sessions".
Main features:
Technical details:
.aiox/timelines/ (one JSON file per timeline)
Test plan
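The timeline/checkpoint/fork model described above can be illustrated with a self-contained toy version; the names are illustrative and this is not the engine's actual API:

```javascript
// Toy model of the core idea: checkpoints store deep-cloned state, and a
// fork copies the shared history up to and including the fork checkpoint.
function makeTimeline(id, parentId = null) {
  return { id, parentId, checkpoints: [] };
}

function addCheckpoint(timeline, label, state) {
  timeline.checkpoints.push({
    index: timeline.checkpoints.length,
    label,
    state: structuredClone(state), // isolate stored state from later mutation
  });
}

function forkAt(timeline, checkpointIndex, forkId) {
  const fork = makeTimeline(forkId, timeline.id);
  fork.checkpoints = structuredClone(
    timeline.checkpoints.slice(0, checkpointIndex + 1)
  );
  return fork;
}
```

Rewind and replay then reduce to slicing or re-walking the checkpoint array, which is what makes the "git for sessions" analogy work.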
Summary by CodeRabbit
New Features
Tests