Conversation
Zero-knowledge code extraction, critical assessment, roadmap reconciliation, and prescriptive blueprint for @git-stunts/git-cas. Covers all 31 source files, 61 test files, and 12 CLI files.
Add milestone M16 (Capstone, v5.3.0) to ROADMAP.md with 13 task cards addressing all negative findings from the CODE-EVAL.md forensic audit:

P0 (Critical):
- 16.1 Crypto adapter behavioral normalization (C8)
- 16.2 Memory restore guard (C1)
- 16.11 Passphrase input security (C5 + V6)

P1 (Important):
- 16.3 Web Crypto encryption buffer guard (C4)
- 16.5 Encrypt-then-chunk dedup warning (C10)
- 16.6 Chunk size upper bound (C3)
- 16.10 Orphaned blob tracking (C2)

P2 (Housekeeping):
- 16.4 FixedChunker pre-allocated buffer (C9)
- 16.7 Lifecycle method naming (deprecate deleteAsset)
- 16.8 CasError portability guard
- 16.9 Pre-commit hook + hooks directory
- 16.12 KDF brute-force awareness (C6)
- 16.13 GCM nonce collision documentation (C7)

Also registers new CasError codes RESTORE_TOO_LARGE and ENCRYPTION_BUFFER_EXCEEDED, adds concerns C8–C10 to the concerns section, and cross-references all concerns to their task cards in the summary table.
CasError now checks for Error.captureStackTrace before calling it. This prevents a TypeError on runtimes where captureStackTrace is not defined (e.g. Firefox, older Deno builds). Task: 16.8
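The guard is a one-line runtime check. A minimal sketch of the pattern (class shape assumed for illustration, not the project's exact source): `Error.captureStackTrace` is a V8 extension, so it exists on Node and Chromium but may be missing elsewhere.

```javascript
// Portability guard: only call Error.captureStackTrace when the runtime
// provides it (V8 does; Firefox/SpiderMonkey and some Deno builds may not).
class CasError extends Error {
  constructor(message, code, meta = {}) {
    super(message);
    this.name = 'CasError';
    this.code = code;
    this.meta = meta;
    if (typeof Error.captureStackTrace === 'function') {
      Error.captureStackTrace(this, CasError);
    }
  }
}

const err = new CasError('boom', 'RESTORE_TOO_LARGE', { size: 2, limit: 1 });
```

On runtimes without `captureStackTrace`, the constructor simply skips the call and the default stack from `super(message)` is kept.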
- git mv scripts/git-hooks/pre-push → scripts/hooks/pre-push
- New scripts/hooks/pre-commit runs `pnpm run lint`
- install-hooks.sh updated to point at scripts/hooks/

Task: 16.9
- CasError.js: add curly braces around captureStackTrace guard
- CasError.test.js: remove unused vi and afterEach imports
- NodeCryptoAdapter.encryptBuffer() now async (returns Promise)
- decryptBuffer() validates key on Node and Web adapters
- NodeCryptoAdapter.createEncryptionStream finalize() guards with STREAM_NOT_CONSUMED before stream consumption
- New CryptoAdapter.conformance.test.js asserts identical contracts

Task: 16.1
CasService._restoreBuffered now checks total chunk size against
maxRestoreBufferSize (default 512 MiB) before buffering encrypted or
compressed data. Throws RESTORE_TOO_LARGE with {size, limit} meta
when exceeded. Unencrypted streaming restore is unaffected.
Task: 16.2
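The guard can be expressed as a pre-buffering size check. A minimal sketch under assumed names (`checkRestoreSize` is illustrative; the real logic lives inside CasService._restoreBuffered):

```javascript
// Sum the declared chunk sizes and fail fast before any buffering happens,
// attaching { size, limit } meta as described in the summary above.
const DEFAULT_MAX_RESTORE_BUFFER = 512 * 1024 * 1024; // 512 MiB

function checkRestoreSize(chunks, limit = DEFAULT_MAX_RESTORE_BUFFER) {
  const size = chunks.reduce((sum, c) => sum + c.size, 0);
  if (size > limit) {
    const err = new Error(`Restore is ${size} bytes (limit: ${limit})`);
    err.code = 'RESTORE_TOO_LARGE';
    err.meta = { size, limit };
    throw err;
  }
  return size;
}
```

Because the check runs on manifest metadata rather than read bytes, no memory is allocated for an oversized restore before the error is raised.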
- New bin/ui/passphrase-prompt.js with promptPassphrase() and readPassphraseFile()
- --vault-passphrase-file <path> option on store, restore, vault init
- resolvePassphrase() now async with priority chain: file → flag → env → TTY prompt → undefined
- vault init uses confirm: true for passphrase confirmation

Task: 16.11
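A minimal sketch of that priority chain, with a hypothetical `io` seam standing in for the real TTY and file helpers in bin/ui/passphrase-prompt.js (the actual function reads stdin and the filesystem directly):

```javascript
// Priority chain: file → flag → env → TTY prompt → undefined.
// `io` is an illustrative injection point, not the project's real signature.
async function resolvePassphrase(opts, io) {
  if (opts.vaultPassphraseFile) {
    return io.readPassphraseFile(opts.vaultPassphraseFile);
  }
  if (opts.vaultPassphrase) {
    return opts.vaultPassphrase;
  }
  if (process.env.GIT_CAS_PASSPHRASE) {
    return process.env.GIT_CAS_PASSPHRASE;
  }
  if (io.isTTY()) {
    return io.promptPassphrase({ confirm: Boolean(opts.confirm) });
  }
  return undefined;
}
```

Resolving to `undefined` (rather than throwing) in non-interactive runs lets the caller decide whether an unencrypted operation can proceed.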
CasService, FixedChunker, and CdcChunker constructors now throw when chunk size exceeds 100 MiB. CasService warns when > 10 MiB. Prevents accidental creation of excessively large blobs. Task: 16.6
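The bounds check can be sketched as a small validator shared by the constructors (function name and warn hook are illustrative; the constants match the summary above: hard fail above 100 MiB, warn above 10 MiB):

```javascript
const MAX_CHUNK_SIZE = 100 * 1024 * 1024; // 100 MiB hard upper bound
const WARN_CHUNK_SIZE = 10 * 1024 * 1024; // 10 MiB soft warning threshold

function validateChunkSize(chunkSize, warn = () => {}) {
  if (!Number.isInteger(chunkSize) || chunkSize < 1) {
    throw new RangeError(`chunkSize must be a positive integer, got ${chunkSize}`);
  }
  if (chunkSize > MAX_CHUNK_SIZE) {
    throw new RangeError(`chunkSize ${chunkSize} exceeds ${MAX_CHUNK_SIZE}`);
  }
  if (chunkSize > WARN_CHUNK_SIZE) {
    warn(`chunkSize ${chunkSize} is unusually large`);
  }
  return chunkSize;
}
```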
WebCryptoAdapter now accepts maxEncryptionBufferSize (default 512 MiB). Streaming encryption throws ENCRYPTION_BUFFER_EXCEEDED when the accumulated plaintext exceeds the limit, since Web Crypto AES-GCM is a one-shot API that buffers all data. Refactored encrypt generator into a static private method. Task: 16.3
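The accumulation guard can be isolated into a tiny sketch (class name assumed; in the adapter this state lives inside the encryption stream): each incoming chunk grows a running total, and the cap is enforced before Web Crypto's one-shot `encrypt()` would ever see the data.

```javascript
// Guard for one-shot AES-GCM: Web Crypto cannot stream, so all plaintext is
// buffered, and the accumulated total must be capped up front.
class EncryptionBufferGuard {
  constructor(limit = 512 * 1024 * 1024) { // 512 MiB default, per the summary
    this.limit = limit;
    this.total = 0;
  }
  add(chunk) {
    this.total += chunk.length;
    if (this.total > this.limit) {
      const err = new Error(`Buffered ${this.total} bytes (limit: ${this.limit})`);
      err.code = 'ENCRYPTION_BUFFER_EXCEEDED';
      throw err;
    }
  }
}
```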
CDC deduplication is ineffective with encryption since ciphertext is pseudorandom — content-defined boundaries provide no dedup benefit. CasService.store() now emits an observability warning for this case. Task: 16.5
STREAM_ERROR now includes meta.orphanedBlobs — an array of OIDs for blobs that were successfully persisted before the stream failed. The error metric also reports the orphanedBlobs count for observability. Resolves task 16.10.
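A minimal sketch of the bookkeeping (function and `persist` callback are hypothetical stand-ins for the service's blob-write path): every successfully persisted OID is recorded, so a mid-stream failure can attach the orphan list to the error.

```javascript
// Track persisted blob OIDs; on failure, surface them as meta.orphanedBlobs
// so callers and metrics can account for partially written data.
async function storeChunks(chunks, persist) {
  const orphanedBlobs = [];
  try {
    for (const chunk of chunks) {
      orphanedBlobs.push(await persist(chunk));
    }
    return orphanedBlobs; // on success these are referenced, not orphaned
  } catch (cause) {
    const err = new Error(`Stream failed after ${orphanedBlobs.length} blobs`);
    err.code = 'STREAM_ERROR';
    err.meta = { orphanedBlobs };
    throw err;
  }
}
```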
FixedChunker.chunk() now uses a pre-allocated Buffer.allocUnsafe(chunkSize) working buffer with a copy+offset pattern, matching CdcChunker's approach. Eliminates O(n²/chunkSize) total copies when the source yields many small buffers. Resolves task 16.4.
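The copy+offset pattern can be sketched with a synchronous generator (the real FixedChunker consumes an async source; this simplification keeps the buffer handling visible): each incoming piece is copied into one pre-allocated working buffer instead of concatenating buffers repeatedly.

```javascript
// One working buffer per emitted chunk; incoming pieces are copied at the
// current offset, so each byte is copied once regardless of piece size.
function* fixedChunks(pieces, chunkSize) {
  let work = Buffer.allocUnsafe(chunkSize);
  let offset = 0;
  for (let piece of pieces) {
    while (piece.length > 0) {
      const n = Math.min(chunkSize - offset, piece.length);
      piece.copy(work, offset, 0, n);
      offset += n;
      piece = piece.subarray(n);
      if (offset === chunkSize) {
        yield work;
        work = Buffer.allocUnsafe(chunkSize);
        offset = 0;
      }
    }
  }
  if (offset > 0) yield work.subarray(0, offset); // trailing partial chunk
}
```

With repeated `Buffer.concat` the bytes already accumulated are re-copied on every small piece, which is where the quadratic cost in the summary comes from; here each byte moves exactly once.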
Add inspectAsset() and collectReferencedChunks() as canonical names for deleteAsset() and findOrphanedChunks() respectively. The old names were misleading — neither method performs destructive operations. Old names are preserved as deprecated aliases that emit observability warnings. Updated on CasService, facade, and all .d.ts files. Resolves task 16.7.
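The deprecated-alias pattern can be sketched as follows (class and hook names are illustrative, not the project's exact source): the old name delegates to the canonical method and emits a warning through an injected observability hook.

```javascript
// Deprecated alias pattern: old name delegates to the canonical method and
// warns once per call via an injected observability hook.
class AssetInspector {
  constructor(observability = { warn: () => {} }) {
    this.observability = observability;
  }
  inspectAsset(slug) {
    return { slug, chunks: [] }; // canonical, non-destructive
  }
  /** @deprecated use inspectAsset() */
  deleteAsset(slug) {
    this.observability.warn('deleteAsset() is deprecated; use inspectAsset()');
    return this.inspectAsset(slug);
  }
}
```

Keeping the alias as pure delegation means behavior cannot drift between the two names during the deprecation window.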
CasService emits a decryption_failed metric with slug context on every INTEGRITY_ERROR during encrypted restore, providing an audit trail for monitoring failed passphrase attempts. The CLI layer adds a 1-second delay after INTEGRITY_ERROR to slow brute-force attacks. The library API itself imposes no rate-limiting — callers manage their own policy. Resolves task 16.12.
Move docs/SECURITY.md to project root with new sections covering GCM nonce bounds (2^32 NIST limit), recommended key rotation frequency, KDF parameter guidance (PBKDF2/scrypt), and passphrase entropy. VaultService now tracks encryptionCount in vault metadata, incremented on each addToVault when the vault has encryption configured. An observability warning fires when the count exceeds 2^31, providing a safety margin before the NIST 2^32 limit. VaultService accepts an optional observability port (no-op default for backward compat). Resolves task 16.13.
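The counter bookkeeping reduces to a small pure function. A sketch under assumed names (`recordEncryption` is illustrative; VaultService stores the count in vault metadata): the warning fires past 2^31, half the NIST 2^32 invocation limit for a fixed AES-GCM key.

```javascript
const ENCRYPTION_WARN_THRESHOLD = 2 ** 31; // safety margin below NIST's 2^32

function recordEncryption(metadata, warn = () => {}) {
  const next = {
    ...metadata,
    encryptionCount: (metadata.encryptionCount ?? 0) + 1,
  };
  if (next.encryptionCount > ENCRYPTION_WARN_THRESHOLD) {
    warn(`encryptionCount ${next.encryptionCount} exceeds 2^31; rotate the vault key`);
  }
  return next;
}
```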
📝 Walkthrough

This PR introduces the M16 Capstone remediation milestone, implementing 13 hardening tasks (16.1–16.13) derived from a forensic architectural audit. Changes include crypto adapter normalization, buffer and memory guards, chunk size bounds enforcement, lifecycle method renaming with deprecations, encryption usage tracking, asynchronous passphrase input security improvements, and comprehensive test coverage across all areas.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as git-cas CLI
    participant PassphraseResolver
    participant FileSystem
    participant ProcessEnv as Process/TTY
    participant Vault as VaultService
    User->>CLI: Run vault command with options
    activate CLI
    CLI->>PassphraseResolver: resolvePassphrase(opts, extra)
    activate PassphraseResolver
    alt --vault-passphrase-file provided
        PassphraseResolver->>FileSystem: readPassphraseFile(filePath)
        FileSystem-->>PassphraseResolver: passphrase content
    else --vault-passphrase flag provided
        PassphraseResolver-->>PassphraseResolver: use flag value
    else GIT_CAS_PASSPHRASE env set
        PassphraseResolver->>ProcessEnv: read environment variable
        ProcessEnv-->>PassphraseResolver: env passphrase
    else stdin is TTY
        PassphraseResolver->>ProcessEnv: promptPassphrase({ confirm })
        ProcessEnv-->>User: Display prompt on stderr
        User->>ProcessEnv: Enter passphrase (hidden)
        ProcessEnv-->>PassphraseResolver: return passphrase
    else
        PassphraseResolver-->>PassphraseResolver: resolve to undefined
    end
    PassphraseResolver-->>CLI: passphrase (Promise resolved)
    deactivate PassphraseResolver
    CLI->>Vault: Encrypt/decrypt with passphrase
    activate Vault
    Vault-->>CLI: Result
    deactivate Vault
    deactivate CLI
    CLI-->>User: Operation complete
```
```mermaid
sequenceDiagram
    participant Client as Client Code
    participant Adapter as CryptoAdapter
    participant WebCrypto as Web Crypto API
    participant State as Shared State
    Client->>Adapter: createEncryptionStream(key)
    activate Adapter
    Adapter->>Adapter: Validate key via _validateKey()
    Adapter->>Adapter: Generate nonce, init shared state
    Adapter-->>Client: { encrypt, finalize }
    deactivate Adapter
    loop For each data chunk
        Client->>Adapter: encrypt(chunk)
        activate Adapter
        alt Buffer exceeded?
            Adapter-->>Adapter: Throw ENCRYPTION_BUFFER_EXCEEDED
            Adapter-->>Client: Error
        else Buffer within limit
            Adapter->>WebCrypto: encryptChunk(chunk)
            WebCrypto-->>Adapter: encrypted data + tag
            Adapter->>State: Store computed tag
            Adapter-->>Client: Yield encrypted chunk
        end
        deactivate Adapter
    end
    Client->>Adapter: finalize()
    activate Adapter
    alt Stream not consumed?
        Adapter-->>Client: Throw STREAM_NOT_CONSUMED
    else Stream consumed
        Adapter->>State: Retrieve tag
        Adapter->>Adapter: Construct meta { nonce, tag }
        Adapter-->>Client: Return meta with base64 encoding
    end
    deactivate Adapter
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review please 🥕
✅ Actions performed: review triggered.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
bin/git-cas.js (1)
112-125: ⚠️ Potential issue | 🟠 Major

Avoid passphrase prompting before confirming vault encryption.

`resolvePassphrase()` is invoked before metadata lookup, so interactive runs can prompt even when the vault is not encrypted.

Proposed fix
```diff
 async function resolveEncryptionKey(cas, opts) {
   if (opts.keyFile) {
     return readKeyFile(opts.keyFile);
   }
-  const passphrase = await resolvePassphrase(opts);
-  if (!passphrase) {
-    return undefined;
-  }
   const metadata = await cas.getVaultMetadata();
-  if (metadata?.encryption) {
-    return deriveVaultKey(cas, metadata, passphrase);
+  const hasExplicitPassphraseSource = Boolean(
+    opts.vaultPassphraseFile || opts.vaultPassphrase || process.env.GIT_CAS_PASSPHRASE,
+  );
+  if (!metadata?.encryption) {
+    if (hasExplicitPassphraseSource) {
+      process.stderr.write('warning: passphrase ignored (vault is not encrypted)\n');
+    }
+    return undefined;
   }
-  process.stderr.write('warning: passphrase ignored (vault is not encrypted)\n');
-  return undefined;
+  const passphrase = await resolvePassphrase(opts);
+  if (!passphrase) {
+    return undefined;
+  }
+  return deriveVaultKey(cas, metadata, passphrase);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/git-cas.js` around lines 112-125: The function resolveEncryptionKey currently calls resolvePassphrase before checking vault metadata, which can prompt the user unnecessarily. Change the flow so it first returns readKeyFile(opts.keyFile) if provided, then calls cas.getVaultMetadata(), and only if metadata?.encryption is truthy calls resolvePassphrase and then deriveVaultKey(cas, metadata, passphrase); if metadata shows no encryption, write the warning and return undefined without prompting. Ensure you still handle the case where passphrase resolution returns falsy by returning undefined.

src/domain/services/CasService.js (1)
500-525: ⚠️ Potential issue | 🟠 Major

Compressed restores can exceed the configured memory cap.
At Line 500, the limit check uses pre-decompression chunk bytes. At Line 524, decompression can inflate far beyond that, bypassing the intended cap for compressed assets.
Immediate mitigation

```diff
 if (manifest.compression) {
   buffer = await this._decompress(buffer);
+  if (buffer.length > this.maxRestoreBufferSize) {
+    throw new CasError(
+      `Decompressed restore is ${buffer.length} bytes (limit: ${this.maxRestoreBufferSize})`,
+      'RESTORE_TOO_LARGE',
+      { size: buffer.length, limit: this.maxRestoreBufferSize },
+    );
+  }
 }
```

A stronger long-term fix is bounded streaming decompression so over-limit payloads are rejected before full allocation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/services/CasService.js` around lines 500 - 525, The current size check uses pre-decompression chunk sizes (totalSize) but decompression in _decompress can inflate the payload beyond this limit; update the restore flow to enforce the maxRestoreBufferSize after decompression (and/or during streaming decompression): after Buffer.concat(await this._readAndVerifyChunks(manifest.chunks)) and after calling this._decompress(buffer), verify buffer.length <= this.maxRestoreBufferSize and throw a CasError ('RESTORE_TOO_LARGE', include size and limit) if it exceeds; for a long-term fix replace _decompress with a bounded streaming decompressor that rejects/aborts when output exceeds this.maxRestoreBufferSize to avoid unbounded memory allocation.
🧹 Nitpick comments (4)
src/infrastructure/adapters/NodeCryptoAdapter.js (1)
28-44: JSDoc return type is inconsistent with async implementation.

The JSDoc on line 32 documents a synchronous return type `{ buf: Buffer, meta: ... }`, but the method is now `async` (line 34), which means it returns `Promise<{ buf: Buffer, meta: ... }>`. This mismatch can cause confusion for consumers relying on type hints.

📝 Proposed fix to update JSDoc

```diff
 /**
  * @override
  * @param {Buffer|Uint8Array} buffer - Plaintext to encrypt.
  * @param {Buffer|Uint8Array} key - 32-byte encryption key.
- * @returns {{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }}
+ * @returns {Promise<{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>}
  */
 async encryptBuffer(buffer, key) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infrastructure/adapters/NodeCryptoAdapter.js` around lines 28-44: The JSDoc for encryptBuffer documents a synchronous return but the function is async; update the `@returns` annotation to reflect that it returns a Promise of the object (e.g., Promise<{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>) so the types match the async implementation and consumers get correct hints.

test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js (2)
35-60: Consider consolidating error assertions.

The pattern of using both `expect(...).rejects.toThrow(CasError)` followed by a separate try/catch to verify `err.code` is verbose. Vitest's `.rejects` can be chained with property matchers.

♻️ Proposed refactor for cleaner assertions

```diff
 it('decryptBuffer rejects INVALID_KEY_TYPE for string key', async () => {
   const { buf, meta } = await adapter.encryptBuffer(Buffer.from('test'), key);
-  await expect(
-    Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta)),
-  ).rejects.toThrow(CasError);
-
-  try {
-    await Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta));
-  } catch (err) {
-    expect(err.code).toBe('INVALID_KEY_TYPE');
-  }
+  await expect(
+    Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta)),
+  ).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' });
 });

 it('decryptBuffer rejects INVALID_KEY_LENGTH for 16-byte key', async () => {
   const shortKey = Buffer.alloc(16, 0xcc);
   const { buf, meta } = await adapter.encryptBuffer(Buffer.from('test'), key);
-  await expect(
-    Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta)),
-  ).rejects.toThrow(CasError);
-
-  try {
-    await Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta));
-  } catch (err) {
-    expect(err.code).toBe('INVALID_KEY_LENGTH');
-  }
+  await expect(
+    Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta)),
+  ).rejects.toMatchObject({ code: 'INVALID_KEY_LENGTH' });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js` around lines 35 - 60, Replace the two-step rejection checks in the 'decryptBuffer rejects INVALID_KEY_TYPE for string key' and 'decryptBuffer rejects INVALID_KEY_LENGTH for 16-byte key' tests with a single consolidated assertion: call adapter.decryptBuffer via Promise.resolve().then(...) and use await expect(...).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' }) (and { code: 'INVALID_KEY_LENGTH' } for the other test) instead of the separate expect(...).rejects.toThrow(CasError) + try/catch; keep references to adapter.decryptBuffer, CasError, and the key/shortKey variables so the tests still validate the correct error code.
62-71: Same consolidation opportunity for synchronous throw assertion.

The try/catch after `expect(() => finalize()).toThrow(CasError)` is also redundant.

♻️ Proposed refactor

```diff
 it('createEncryptionStream.finalize() throws STREAM_NOT_CONSUMED before consumption', () => {
   const { finalize } = adapter.createEncryptionStream(key);
-  expect(() => finalize()).toThrow(CasError);
-
-  try {
-    finalize();
-  } catch (err) {
-    expect(err.code).toBe('STREAM_NOT_CONSUMED');
-  }
+  expect(() => finalize()).toThrow(
+    expect.objectContaining({ code: 'STREAM_NOT_CONSUMED' }),
+  );
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js` around lines 62-71: The test redundantly calls finalize() twice and wraps the second call in a try/catch after already asserting expect(() => finalize()).toThrow(CasError). Instead capture the thrown error once and assert both its type and code in one flow (e.g., const err = (() => { try { finalize(); } catch (e) { return e; } })()), then assert err is an instance of CasError and err.code === 'STREAM_NOT_CONSUMED', and remove the duplicate expect/try-catch.

index.js (1)
56-75: Document `maxRestoreBufferSize` in constructor JSDoc and `#config` type.

Line 68 adds `maxRestoreBufferSize`, but the constructor JSDoc and line 74 type annotation don't include it.

Proposed fix

```diff
 /**
  * @param {Object} options
@@
  * @param {import('./src/ports/ChunkingPort.js').default} [options.chunker] - Pre-built ChunkingPort instance (advanced).
+ * @param {number} [options.maxRestoreBufferSize] - Max buffered restore size in bytes for encrypted/compressed restores.
  */
@@
-/** @type {{ plumbing: *, chunkSize?: number, codec?: *, policy?: *, crypto?: *, observability?: *, merkleThreshold?: number, concurrency?: number, chunking?: *, chunker?: * }} */
+/** @type {{ plumbing: *, chunkSize?: number, codec?: *, policy?: *, crypto?: *, observability?: *, merkleThreshold?: number, concurrency?: number, chunking?: *, chunker?: *, maxRestoreBufferSize?: number }} */
 #config;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.js` around lines 56 - 75, The constructor now accepts maxRestoreBufferSize but the JSDoc param list and the inline `#config` type comment are missing it; update the constructor JSDoc to add a `@param` entry for maxRestoreBufferSize (e.g., number and optional, with a short description) and extend the `#config` type annotation comment to include maxRestoreBufferSize?: number so the documented type matches the actual constructor arguments (refer to the constructor signature and the `#config` comment to locate where to add these).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/git-cas.js`:
- Around line 209-211: The conflict check inside the action handler currently
calls resolvePassphrase() which may prompt/consume stdin; change the validation
in the runAction(...) handler so it only inspects opts flags (e.g.,
opts.recipient, opts.keyFile, opts.vaultPassphrase flag) without awaiting
resolvePassphrase(), and defer calling await resolvePassphrase(opts) until after
validation and when you actually need the passphrase for encryption setup;
update the error message logic in the same anonymous action function so it
throws when both recipient and a key/passphrase flag are present, but does not
invoke resolvePassphrase() during that check.
In `@bin/ui/passphrase-prompt.js`:
- Around line 40-43: Normalize Windows CRLF by replacing the current regex that
only strips '\n' with one that also removes a preceding '\r' — update both
places where you call .replace(/\n$/, '') (the
Buffer.concat(chunks).toString('utf8').replace(...) and the content.replace(...)
after readFile(filePath, 'utf8')) to use .replace(/\r?\n$/, '') (or equivalent
trim of trailing CR/LF) so a trailing '\r' from CRLF is not left in the
passphrase.
In `@ROADMAP.md`:
- Line 281: The heading "### 16.1 — Crypto Adapter Behavioral Normalization
*(P0)* — C8" jumps from a top-level "#" section and should use the next
incremental level; change the leading "###" to "##" (or match the appropriate
level consistent with the surrounding M16 section headers) so the heading levels
are properly incremental and MD001 is resolved.
In `@src/domain/services/CasService.js`:
- Around line 38-75: The constructor currently accepts maxRestoreBufferSize
without validation; add validation to ensure it's an integer within safe bounds
(e.g., >= 1024 bytes and <= a sensible cap such as 2 * 1024 * 1024 * 1024) and
throw a clear Error for invalid values. Implement this by extending the existing
static `#validateConstructorArgs` to accept maxRestoreBufferSize (or add a new
private static method like `#validateRestoreBufferSize`) and check
Number.isInteger(maxRestoreBufferSize) && maxRestoreBufferSize >= 1024 &&
maxRestoreBufferSize <= MAX_RESTORE_BUFFER_SIZE (define MAX_RESTORE_BUFFER_SIZE
constant), then update the constructor to call the validator before assigning
this.maxRestoreBufferSize and surface a descriptive error message on failure.
In `@src/infrastructure/adapters/WebCryptoAdapter.js`:
- Around line 19-22: The constructor of WebCryptoAdapter currently assigns the
maxEncryptionBufferSize directly to the private field (`#maxEncryptionBufferSize`)
without validation; add input validation in the constructor to ensure the
provided maxEncryptionBufferSize is a finite, positive integer (e.g.,
Number.isFinite and Number.isInteger and > 0) and not Infinity/NaN/0/negative;
if the value is invalid, either throw a TypeError or fall back to the safe
default (512 * 1024 * 1024) before assigning to this.#maxEncryptionBufferSize so
the guard cannot be bypassed.
In `@src/infrastructure/chunkers/FixedChunker.js`:
- Around line 20-25: The constructor of FixedChunker must validate that the
provided chunkSize is a positive finite integer and not zero; update the
FixedChunker constructor (where `#chunkSize` is set) to throw a RangeError if
chunkSize is not a number, is NaN, is not finite, is less than 1, or is not an
integer (in addition to the existing upper-bound check), so the class never
accepts 0 or invalid numeric values that would produce empty chunks.
In `@test/unit/cli/actions.test.js`:
- Around line 126-143: The timing assertions in the tests ("delays ~1s on
INTEGRITY_ERROR before writing output" and "no delay for non-INTEGRITY_ERROR
codes") are flaky because they use Date.now() to measure wall-clock time; change
the tests to use Vitest's fake timers (vi.useFakeTimers(),
vi.advanceTimersByTimeAsync()) to deterministically advance timers around the
runAction(...) invocation and assert elapsed behavior without real time;
specifically, replace Date.now() measurements in those two it blocks and drive
the setTimeout-based delay via vi.advanceTimersByTimeAsync(1000) (or appropriate
values) and restore timers after each test, but first validate that
vi.useFakeTimers()/advanceTimersByTimeAsync work reliably on Node.js 22.x, Bun,
and Deno before committing this refactor.
In `@test/unit/domain/services/CasService.kdfBruteForce.test.js`:
- Around line 94-102: The test currently only measures elapsed time when calling
service.restore({ manifest, encryptionKey: wrongKey }) but doesn't assert that
decryption actually failed; modify the test to capture the thrown error (e.g.,
assign the caught error in the catch block or use await
expect(service.restore(...)).rejects.toEqual(INTEGRITY_ERROR)) and assert it
equals INTEGRITY_ERROR (reference the INTEGRITY_ERROR constant), ensuring the
test fails if no error is thrown, while keeping the existing timing assertion
that elapsed is less than 500ms.
In `@test/unit/domain/services/CasService.orphanedBlobs.test.js`:
- Around line 55-81: The three tests invoking service.store with failingSource
currently use try/catch but lack a guarantee the call throws, so add an explicit
failure if service.store resolves; either convert each to use Jest's rejection
matchers (e.g., await expect(service.store({ source: failingSource(n), slug:
'fail', filename: 'f.bin' })).rejects.toHaveProperty('code', 'STREAM_ERROR') and
then assert on err.meta.orphanedBlobs) or keep the try/catch and insert a fail()
(or throw new Error('Expected error')) immediately after the await
service.store(...) call so the test fails when no error is thrown; reference the
service.store call and failingSource helper when applying the change.
In `@test/unit/domain/services/CasService.restoreGuard.test.js`:
- Around line 98-109: The test "error meta includes size and limit" currently
only asserts inside the catch block so it can pass silently if no error is
thrown; after invoking service.restoreStream(...).next() (the call to
restoreStream on the service in this test), add an explicit failure assertion
immediately after the awaited call (e.g., fail('expected RESTORE_TOO_LARGE
error') or throw new Error) so the test fails when no error is thrown, keeping
the existing catch block assertions for err.code and err.meta unchanged.
---
Outside diff comments:
In `@bin/git-cas.js`:
- Around line 112-125: The function resolveEncryptionKey currently calls
resolvePassphrase before checking vault metadata which can prompt the user
unnecessarily; change the flow in resolveEncryptionKey so it first returns
readKeyFile(opts.keyFile) if provided, then calls cas.getVaultMetadata() and
only if metadata?.encryption is truthy call resolvePassphrase and then
deriveVaultKey(cas, metadata, passphrase); if metadata shows no encryption,
write the warning and return undefined without prompting. Ensure you still
handle the case where passphrase resolution returns falsy by returning
undefined.
In `@src/domain/services/CasService.js`:
- Around line 500-525: The current size check uses pre-decompression chunk sizes
(totalSize) but decompression in _decompress can inflate the payload beyond this
limit; update the restore flow to enforce the maxRestoreBufferSize after
decompression (and/or during streaming decompression): after Buffer.concat(await
this._readAndVerifyChunks(manifest.chunks)) and after calling
this._decompress(buffer), verify buffer.length <= this.maxRestoreBufferSize and
throw a CasError ('RESTORE_TOO_LARGE', include size and limit) if it exceeds;
for a long-term fix replace _decompress with a bounded streaming decompressor
that rejects/aborts when output exceeds this.maxRestoreBufferSize to avoid
unbounded memory allocation.
---
Nitpick comments:
In `@index.js`:
- Around line 56-75: The constructor now accepts maxRestoreBufferSize but the
JSDoc param list and the inline `#config` type comment are missing it; update the
constructor JSDoc to add a `@param` entry for maxRestoreBufferSize (e.g., number
and optional, with a short description) and extend the `#config` type annotation
comment to include maxRestoreBufferSize?: number so the documented type matches
the actual constructor arguments (refer to the constructor signature and the
`#config` comment to locate where to add these).
In `@src/infrastructure/adapters/NodeCryptoAdapter.js`:
- Around line 28-44: The JSDoc for encryptBuffer documents a synchronous return
but the function is async; update the `@returns` annotation for the encryptBuffer
method to reflect it returns a Promise of the object (e.g., Promise<{ buf:
Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>) so the
types match the async implementation in encryptBuffer and consumers get correct
hints.
In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js`:
- Around line 35-60: Replace the two-step rejection checks in the 'decryptBuffer
rejects INVALID_KEY_TYPE for string key' and 'decryptBuffer rejects
INVALID_KEY_LENGTH for 16-byte key' tests with a single consolidated assertion:
call adapter.decryptBuffer via Promise.resolve().then(...) and use await
expect(...).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' }) (and { code:
'INVALID_KEY_LENGTH' } for the other test) instead of the separate
expect(...).rejects.toThrow(CasError) + try/catch; keep references to
adapter.decryptBuffer, CasError, and the key/shortKey variables so the tests
still validate the correct error code.
- Around line 62-71: The test redundantly calls finalize() twice and wraps the
second call in a try/catch after already asserting expect(() =>
finalize()).toThrow(CasError); instead capture the thrown error once and assert
both its type and code in one flow: call
adapter.createEncryptionStream(key).finalize via a sync wrapper that returns the
caught error (e.g., const err = (() => { try { finalize(); } catch (e) { return
e } })()), then assert err is instance of CasError and err.code ===
'STREAM_NOT_CONSUMED'; reference createEncryptionStream and finalize to locate
the test and remove the duplicate expect/try-catch.
ℹ️ Review info
Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 315149ad-ba54-4dd7-9a41-511654d0c827
📒 Files selected for processing (35)
- CHANGELOG.md
- CODE-EVAL.md
- README.md
- ROADMAP.md
- SECURITY.md
- bin/actions.js
- bin/git-cas.js
- bin/ui/passphrase-prompt.js
- index.d.ts
- index.js
- scripts/hooks/pre-commit
- scripts/hooks/pre-push
- scripts/install-hooks.sh
- src/domain/errors/CasError.js
- src/domain/services/CasService.d.ts
- src/domain/services/CasService.js
- src/domain/services/VaultService.js
- src/infrastructure/adapters/NodeCryptoAdapter.js
- src/infrastructure/adapters/WebCryptoAdapter.js
- src/infrastructure/chunkers/CdcChunker.js
- src/infrastructure/chunkers/FixedChunker.js
- test/unit/cli/actions.test.js
- test/unit/cli/passphrase-prompt.test.js
- test/unit/domain/errors/CasError.test.js
- test/unit/domain/services/CasService.chunkSizeBound.test.js
- test/unit/domain/services/CasService.dedupWarning.test.js
- test/unit/domain/services/CasService.kdfBruteForce.test.js
- test/unit/domain/services/CasService.lifecycle.test.js
- test/unit/domain/services/CasService.orphanedBlobs.test.js
- test/unit/domain/services/CasService.restoreGuard.test.js
- test/unit/domain/services/VaultService.encryptionCount.test.js
- test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js
- test/unit/infrastructure/adapters/WebCryptoAdapter.bufferGuard.test.js
- test/unit/infrastructure/chunkers/ChunkerBounds.test.js
- test/unit/infrastructure/chunkers/FixedChunker.test.js
```js
it('delays ~1s on INTEGRITY_ERROR before writing output', async () => {
  const err = Object.assign(new Error('bad key'), { code: 'INTEGRITY_ERROR' });
  const action = runAction(async () => { throw err; }, () => false);
  const start = Date.now();
  await action();
  const elapsed = Date.now() - start;
  expect(elapsed).toBeGreaterThanOrEqual(900);
  expect(process.exitCode).toBe(1);
});

it('no delay for non-INTEGRITY_ERROR codes', async () => {
  const err = Object.assign(new Error('gone'), { code: 'MISSING_KEY' });
  const action = runAction(async () => { throw err; }, () => false);
  const start = Date.now();
  await action();
  const elapsed = Date.now() - start;
  expect(elapsed).toBeLessThan(200);
});
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Find and read the test file
find . -name "actions.test.js" -path "*/test/unit/*"
```

Repository: git-stunts/git-cas

Length of output: 93
🏁 Script executed:
```shell
# Check package.json for Vitest version
cd . && cat package.json | grep -A 5 '"vitest"'
```

Repository: git-stunts/git-cas

Length of output: 90
🌐 Web query:
Vitest 2.1.8 useFakeTimers advanceTimersByTimeAsync Node Bun Deno runtime compatibility
💡 Result:
What `vi.useFakeTimers()` / `vi.advanceTimersByTimeAsync()` depend on (Vitest 2.1.8)

- Vitest's fake timers are implemented by passing options through to `@sinonjs/fake-timers`. [1]
- `@sinonjs/fake-timers` provides different shims depending on the runtime surface:
  - Node-specific: `process.nextTick` + `process.hrtime` shims (called out as Node-only). [2]
  - Browser-like: `performance.now()` shim (browser-only). [2]
- So `vi.advanceTimersByTimeAsync()` is as compatible as the underlying runtime is with the timer/clock APIs that Sinon expects.

Runtime compatibility (Node vs Bun vs Deno)

Node.js

- This is Vitest's primary/assumed runtime. The Node-only pieces that Sinon can hook (`nextTick`, `hrtime`) exist here, so `useFakeTimers` + `advanceTimersByTimeAsync` are the most reliable on Node. [1][2]

Bun

- If you run `bun test`, you are not running Vitest; Bun injects its own Jest-like runner, so Vitest-specific APIs/features can be missing or mismatched. Use `vitest run` instead (e.g. via `bun run <script>`), not `bun test`. [4]
- Bun's own testing docs note that (in Bun's runner) timers aren't impacted yet (it focuses on `Date`/system time), which is a separate concern from Vitest but often the source of confusion when mixing tools. [3]
- Even when running Vitest under Bun's Node-compat layer, Vitest can hit `worker_threads`/pooling gaps (seen as warnings/instability in real setups), which can indirectly affect timer-heavy async tests. [6]

Deno

- Running Vitest under Deno depends on Deno's Node/npm compatibility (especially `node:vm` and `node:worker_threads`, which test runners use heavily). Deno has specifically shipped fixes for issues that broke Vitest under Node-compat. [5]
- Practically: it may work on some Deno versions/configs, but it's not as "drop-in guaranteed" as Node for `useFakeTimers` + async timer advancement.

Practical guidance

- For the highest confidence that `vi.advanceTimersByTimeAsync()` behaves correctly (especially around microtasks + Node-only shims), run Vitest on Node.js. [1][2]
- On Bun: don't use `bun test` for Vitest suites; run the Vitest CLI instead. [4]
- On Deno: validate against your exact Deno version; Vitest viability is tied to Deno's evolving Node-compat implementation. [5]
Sources:
[1] Vitest fakeTimers config (uses @sinonjs/fake-timers) (vitest.dev)
[2] @sinonjs/fake-timers runtime notes (nextTick/hrtime Node-only, performance browser-only) (github.com)
[3] Bun testing docs: timers not impacted yet (Bun runner) (bun.com)
[4] StackOverflow: bun test runs Bun’s framework, not Vitest (stackoverflow.com)
[5] Deno v1.43 blog: Node-compat fixes that affected vitest (deno.com)
[6] Example of Vitest-on-Bun friction around worker_threads (gitlab.com)
Citations:
- 1: https://vitest.dev/config/faketimers?utm_source=openai
- 2: https://github.com/sinonjs/fake-timers?utm_source=openai
- 3: https://bun.com/docs/test/dates-times?utm_source=openai
- 4: https://stackoverflow.com/questions/79389157/typescript-complains-that-vitests-ontestfailed-function-is-undefined-despite-of?utm_source=openai
- 5: https://deno.com/blog/v1.43?utm_source=openai
- 6: https://gitlab.com/ss-open/websites/personal/-/issues/9?utm_source=openai
🏁 Script executed:

```shell
# Read the actual test file to see the implementation
cat -n test/unit/cli/actions.test.js | sed -n '110,150p'
```

Length of output: 1713

🏁 Script executed:

```shell
# Find runAction implementation
rg -n "function runAction|const runAction|export.*runAction" test/unit/cli/actions.test.js -B 2 -A 10
```

Length of output: 44

🏁 Script executed:

```shell
# Also look for the main actions.js file that likely contains runAction
find . -path "*/cli/actions.js" -not -path "*/test/*" -not -path "*/node_modules/*"
```

Length of output: 44

🏁 Script executed:

```shell
# Look for imports at the top of the test file
head -20 test/unit/cli/actions.test.js
```

Length of output: 698

🏁 Script executed:

```shell
# Search for runAction definition across the codebase
rg "runAction" --type js --type ts -g "!node_modules" -g "!test"
```

Length of output: 1750

🏁 Script executed:

```shell
# Read the runAction implementation
cat -n bin/actions.js | head -100
```

Length of output: 3583

🏁 Script executed:

```shell
# Look specifically for the INTEGRITY_ERROR handling
rg -n "INTEGRITY_ERROR" bin/actions.js -B 5 -A 5
```

Length of output: 1200
Wall-clock timing assertions are flaky and will fail nondeterministically under CI load, but the suggested fake timers refactor must be validated across all supported runtimes before implementation.
Lines 132 (>=900) and 142 (<200) depend on real elapsed time via Date.now(). Under load—especially on Bun and Deno—these assertions fail even when behavior is correct.
The suggested refactor using `vi.useFakeTimers()` and `vi.advanceTimersByTimeAsync()` is theoretically sound (the implementation does use `setTimeout`), but Vitest's fake timers rely on `@sinonjs/fake-timers`, which has uneven support across runtimes:

- Node.js: works reliably (has `process.nextTick` and `process.hrtime` shims).
- Bun: has documented worker-threads and pooling gaps that can destabilize async timer advancement.
- Deno: Node-compat support is evolving and not guaranteed.

Per coding guidelines, tests must pass on Node.js 22.x, Bun, and Deno. Before implementing the suggested refactor, validate it against all three runtimes to confirm fake timers don't introduce new failures.
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In `@test/unit/cli/actions.test.js` around lines 126 - 143, the timing
assertions in the tests ("delays ~1s on INTEGRITY_ERROR before writing
output" and "no delay for non-INTEGRITY_ERROR codes") are flaky because
they use Date.now() to measure wall-clock time. Change the tests to use
Vitest's fake timers (vi.useFakeTimers(), vi.advanceTimersByTimeAsync())
to deterministically advance timers around the runAction(...) invocation
and assert elapsed behavior without real time. Specifically, replace the
Date.now() measurements in those two it blocks, drive the setTimeout-based
delay via vi.advanceTimersByTimeAsync(1000) (or appropriate values), and
restore timers after each test; but first validate that
vi.useFakeTimers()/advanceTimersByTimeAsync work reliably on Node.js 22.x,
Bun, and Deno before committing this refactor.
```
VaultService.test.js now passes a mockObservability() port (vi.fn() mocks) to all 46 tests instead of relying on the silent no-op default. rotateVaultPassphrase.test.js now passes SilentObserver explicitly. If observability wiring breaks, the test suite will catch it.
- `resolveEncryptionKey` now checks vault metadata before calling `resolvePassphrase`, avoiding unnecessary TTY prompts for unencrypted vaults.
- Store action recipient-conflict check uses `hasPassphraseSource()` (flag/env inspection) instead of awaiting `resolvePassphrase()`, which could consume stdin as a side effect.
- `readPassphraseFile` strips trailing CRLF (`\r\n`) in addition to LF, preventing mismatched passphrases from Windows-edited files.
- CasService: validate `maxRestoreBufferSize` (integer >= 1024). `#validateConstructorArgs` now accepts an object to stay within the max-params lint rule.
- WebCryptoAdapter: validate `maxEncryptionBufferSize` (finite, positive).
- FixedChunker: validate `chunkSize` lower bound (positive integer). Prevents infinite loops from `chunkSize=0` or `NaN`.
_restoreBuffered already checked pre-decompression chunk sizes against maxRestoreBufferSize, but decompression can inflate far beyond that bound. A second check after _decompress now throws RESTORE_TOO_LARGE when the decompressed buffer exceeds the configured limit.
- orphanedBlobs: add `expect.unreachable` after `store()` in 3 error tests.
- restoreGuard: add `expect.unreachable` in size/limit meta test.
- kdfBruteForce: assert `INTEGRITY_ERROR` code, not just timing.
- conformance: consolidate double try/catch into single `toMatchObject`/`objectContaining` assertions; remove unused `CasError` import.
…ding level
- NodeCryptoAdapter.encryptBuffer `@returns` wrapped in `Promise<>`.
- index.js constructor JSDoc and `#config` type include `maxRestoreBufferSize`.
- ROADMAP.md: add `## Task Cards` heading to fix the MD001 heading-level jump.
PR Feedback — Round 1 Fixes

All 16 CodeRabbit findings addressed in 6 commits:

Major fixes
Nit fixes
Not changed:
@coderabbitai review please 🥕
✅ Actions performed

Review triggered.
Summary
Implements all 13 task cards from Milestone 16 (Capstone) — a full audit remediation pass addressing 9 architectural flaws and 10 concerns identified in CODE-EVAL.md.
Safety Nets (Phase 1)
- `CasError` portability guard (`Error.captureStackTrace`)
- `scripts/hooks/` rename
- Crypto adapter normalization (`encryptBuffer`, `STREAM_NOT_CONSUMED` guards)
- Memory restore guard (`maxRestoreBufferSize`, `RESTORE_TOO_LARGE`)
- Orphaned blob tracking on `STREAM_ERROR` (`meta.orphanedBlobs`)
- Passphrase input security (`--vault-passphrase-file`, TTY prompt)
- Web Crypto encryption buffer guard (`maxEncryptionBufferSize`)
- Lifecycle method naming (`inspectAsset`, `collectReferencedChunks` + deprecated aliases)
- KDF brute-force awareness (`decryption_failed` metric, 1s CLI delay)
- `encryptionCount` in vault metadata
- `encryptionCount` auto-rotation
Test plan
- `npx eslint .` — 0 errors
- `npm test` — 895 tests pass (Node)
- `docker compose run --build --rm test-bun bunx vitest run test/unit` — Bun unit tests
- `docker compose run --build --rm test-deno deno run -A npm:vitest run test/unit` — Deno unit tests
- `npm pack --dry-run` — clean
- `npx jsr publish --dry-run --allow-dirty` — clean

Summary by CodeRabbit
Release Notes
New Features
- Configurable restore memory limit via `maxRestoreBufferSize`.

Bug Fixes
Deprecations
- `deleteAsset()` and `findOrphanedChunks()`; use `inspectAsset()` and `collectReferencedChunks()` instead.