feat(cli): add hyperframes lambda deploy/render/progress/destroy (#910)
Conversation
miguel-heygen
left a comment
Review: feat(cli): add hyperframes lambda deploy/render/progress/destroy
Approve — well-structured CLI surface for the Lambda deployment flow.
What I verified
- Command structure: follows existing CLI patterns (citty `defineCommand`, positional args, `_examples.ts` exports). Subcommands are properly routed via switch.
- Security: all subprocess calls use `spawnSync`/`execFileSync` with array args — no shell injection risk. SAM deploy params are built as an array, not string interpolation. AWS profile passed via the `--profile` flag, not env manipulation.
- SAM integration: `assertSamAvailable()` gate before deploy. `locateSamTemplate()` resolves from repo root. `fetchStackOutputs()` parses CFN outputs correctly.
- State management: local `.hyperframes-lambda.json` state file for stack name/region persistence between commands. `state.test.ts` covers read/write/default paths.
- Render flow: proper polling loop with configurable interval, timeout handling, and a `--wait` mode that blocks until complete.
- Sites subcommand: clean S3 upload with content-type detection and zip/compile/upload pipeline.
- Error handling: Each subcommand has specific error messages with actionable hints.
CI note
6 perf test failures (drift, fps, load, parity, scrub, player-perf) — these look like base-branch version bump conflicts, not issues with the CLI code itself. The CLI code doesn't touch engine/player paths.
Non-blocking observations
- `repoRoot.ts` walks up from `__dirname` looking for a `package.json` with `workspaces` — this only works when running from the monorepo checkout. Worth a comment noting this won't work from a globally-installed `hyperframes` CLI.
- The `--chunk-count` default of 4 is hardcoded in `render.ts` args — consider making this configurable via the state file or environment.
vanceingalls
left a comment
One-line summary: solid CLI scaffold and a real UX win, but several flags are silently dropped between dispatcher and SDK call, and --memory doesn't actually parameterize the deploy — that's a correctness bug, not a polish item.
Audited: packages/cli/src/commands/lambda.ts, lambda/{deploy,destroy,render,progress,sites,sam,repoRoot,state,state.test}.ts, packages/aws-lambda/src/sdk/index.ts, docs/packages/cli.mdx, template.yaml Parameters + Outputs.
Trusting: the SDK internals (renderToLambda, deploySite, getRenderProgress) — read integration points only.
Strengths
- `state.ts:35-46` round-trips through disk with a thin enough abstraction that the test covers it well. Malformed JSON returns `null` instead of throwing (`state.ts:55`) — that's the right call for a local cache file.
- Lazy `import("./lambda/<verb>.js")` in the dispatcher (`lambda.ts:128, 141, 161, 196, 208`) keeps the CLI's hot-path startup cost off the Lambda subcommands. Good instinct.
- `requireStack` (`state.ts:74-90`) is exactly the right shared error surface — single hint message, lists known stacks, exits 1.
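The malformed-JSON-returns-`null` pattern praised above is worth spelling out. A minimal sketch, with illustrative names (`readStackState`, `StackState` are not the PR's exact API): a local cache file reader that treats a missing or corrupt file the same as "no state" rather than crashing the CLI.

```typescript
import { readFileSync, writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Illustrative shape; the real state file carries more fields.
interface StackState {
  stackName: string;
  region: string;
}

function readStackState(path: string): StackState | null {
  let raw: string;
  try {
    raw = readFileSync(path, "utf8");
  } catch {
    return null; // missing file: same as no recorded state
  }
  try {
    return JSON.parse(raw) as StackState;
  } catch {
    return null; // a corrupt local cache is not worth crashing over
  }
}
```

The caller then treats `null` uniformly, whether the file never existed or was hand-edited into invalid JSON.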
Blockers
- `--memory` is silently dropped on `deploy`. `lambda.ts:147` parses `--memory` and `deploy.ts:43, 75` records it in the state file as `lambdaMemoryMb`. But `samDeploy` (`sam.ts:78-94`) only forwards `ChromeSource` + `ReservedConcurrency` — `LambdaMemoryMb` is never in `--parameter-overrides`. The SAM template (`template.yaml:21-28`) accepts `LambdaMemoryMb`, but it'll always resolve to the template default (10240). Worse: the cost math in `progress.ts:32` (`defaultMemorySizeMb: stack.lambdaMemoryMb`) reads the recorded value, not the actual deployed value, so `--memory 5120` produces wrong cost numbers downstream. Either forward the parameter to SAM, or remove the flag from `--help` until it works.
- `--profile` is silently dropped on `render` and `progress`. `lambda.ts:122` accepts `profile`. `runDeploy` and `runDestroy` consume it. But `runRender` (`lambda.ts:154-191`) and `runProgress` (`lambda.ts:199-204`) don't take or forward an `awsProfile`. The SDK calls under them will fall back to the default credentials chain — a user with `--profile prod` will silently render against their default account. This is the most dangerous flavor of dropped flag (wrong-account billing). Either thread `awsProfile` into the SDK calls or strip `--profile` from the help text and document it as deploy/destroy-only.
Important
- `render --wait --json` emits two concatenated JSON blobs. `render.ts:74-82` prints the handle as JSON, then `waitForCompletion` prints another full JSON snapshot on terminal state (`render.ts:114`). The result is not a valid single JSON document — `jq` will read the first and treat the rest as trailing garbage on strict parsers. Either NDJSON it explicitly (and document the format) or emit only the final progress snapshot in `--json --wait` mode.
- `destroy` doesn't check for in-flight renders before `sam delete`. `destroy.ts:21` straight-shells to `sam delete --no-prompts`. A Step Functions execution in RUNNING state isn't a stack-deletion blocker — CloudFormation will tear down the state machine while executions are still marching, aborting them mid-render. At minimum: list executions with `--status-filter RUNNING` via the SDK and either warn (and require `--force`) or wait. The PR description's spec for this verb actually calls this out as a failure mode.
- `runDestroy.awsProfile` ignores `AWS_PROFILE`. `deploy.ts:42` falls back to `process.env.AWS_PROFILE`, but `destroy.ts:25` only takes the explicit flag (`lambda.ts:208` passes `args.profile` without env fallback). The same `AWS_PROFILE=foo bash -c "hyperframes lambda deploy && hyperframes lambda destroy"` will use the env profile for deploy and the default chain for destroy. Mirror the env fallback.
- `width`/`height` accept negative integers. `lambda.ts:155-164`: `parseIntFlag(args.width)` returns `-100` for `--width=-100`, the `!width` truthiness check accepts it (since `-100` is truthy), and the value flows into the SDK config. Validation should be `width > 0 && Number.isInteger(width)`. Same for `chunk-size`, `max-parallel-chunks`, `memory`, `concurrency`. `parseIntFlag` is the right place to bound these.
- Faked `SiteHandle` when `--site-id` is passed is contract-fragile. `render.ts:43-50` constructs `{siteId, projectS3Uri, bytes: 0, uploadedAt: "", uploaded: false}` and hands it to `renderToLambda`. This is only correct if the SDK treats `siteHandle` as a pure by-id lookup and ignores the bag's other fields when `uploaded: false`. If the SDK ever reads `bytes` or `uploadedAt` (logging, telemetry, validation, idempotency), this silently corrupts those fields. Two cleaner options: (a) `renderToLambda` should accept `{siteId: string}` as a discriminated variant and resolve the rest via `HeadObject`; (b) the CLI should call `deploySite({siteId})` for the resolve-only path. The current "lie convincingly to the SDK" shape isn't durable.
- `DEFAULT_STACK_NAME` is referenced twice with different resolved values. `state.ts:24` defines `DEFAULT_STACK_NAME = "default"`. `deploy.ts:38` builds the default as `` `hyperframes-${DEFAULT_STACK_NAME}` `` → `"hyperframes-default"`. `lambda.ts:117` hardcodes `"hyperframes-default"` again. `requireStack` (`state.ts:81`) compares against the literal `"default"` for hint formatting, so the hint always emits `--stack-name=hyperframes-default` even when the user is in the default state. Pick one default string and centralize. The duplication will rot.
- Installed-package users get a confusing failure. `repoRoot.ts:14-26` walks up looking for `packages/aws-lambda/package.json`; `locateSamTemplate` (`sam.ts:35-43`) looks for `examples/aws-lambda/template.yaml`. Neither exists outside a hyperframes checkout. The PR docs (`cli.mdx:99-100`) only list SAM CLI + bun + AWS creds as prerequisites — there's no mention that you also need a hyperframes source checkout. Either ship the template inside the package (and update `locateSamTemplate` to resolve from `__dirname`), or update the docs and `--help` to say "run from a hyperframes checkout."
- Unit-test coverage is thin. `state.test.ts` covers only the state-file round-trip. No coverage for: dispatcher subcommand routing, the enum parsers (`parseFormat`/`parseCodec`/`parseQuality`/`parseChromeSource`), `parseIntFlag`, `executionArnFromName`, or the `--json` modes. The PR description punts integration tests to PR 6.6, which is fine, but the pure-function surface here is testable today and the bugs above (silent flag drops, negative width, `--memory` not plumbed, double-JSON) would all be caught by unit tests on the dispatcher.
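The negative-integer finding above suggests bounding the flag parser itself. A minimal sketch of that validation, assuming a `parseIntFlag`-style helper (the name `parsePositiveIntFlag` and the error wording are illustrative, not the PR's code):

```typescript
// Reject non-integers and values < 1 at parse time, so bad input fails
// loudly in the CLI instead of surfacing as an opaque AWS validation
// error mid-render.
function parsePositiveIntFlag(value: string | undefined, flag: string): number | undefined {
  if (value === undefined) return undefined; // flag not passed: defaults apply
  const n = Number(value);
  if (!Number.isInteger(n) || n < 1) {
    throw new Error(`${flag} expects a positive integer, got "${value}"`);
  }
  return n;
}
```

One helper then covers `--width`, `--height`, `--chunk-size`, `--max-parallel-chunks`, `--memory`, and `--concurrency` uniformly.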
Nits
- `sam.ts:118-122`: `functionName.split(":").pop()` extracts the last ARN segment. Works for `function:name` ARNs, breaks for qualified-version ARNs (`function:name:1`). SAM doesn't qualify by default so it's not a today-bug, but a regex match against `:function:([^:]+)(:|$)` would be more honest.
- `deploy.ts:103-113`: no `assertBunAvailable()` parallel to `assertSamAvailable`/`assertAwsCliAvailable`. A missing `bun` errors with `spawn bun ENOENT` instead of the nice hint the other tools get.
- `progress.ts:55`: `progress.outputFile.bytes ?? "?"` — the human format mixes `?` and numbers in a bytes field. A small `humanBytes()` helper would read better.
- `lambda.ts:233-237`: the `parseEnum<T>` helper throws on bad input; the dispatcher's other validation paths use `console.error + process.exit(1)`. Either flow is fine, but the inconsistency is jarring — a thrown error will print a stack trace, the `exit(1)` path won't.
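Two of those nits are small enough to sketch. Names are illustrative; the regex is the one suggested above, made tolerant of a qualified-version suffix, and `humanBytes` is one plausible shape for the suggested helper:

```typescript
// Extract the function name from a Lambda ARN. Unlike split(":").pop(),
// this survives qualified ARNs like "...:function:render:1".
function functionNameFromArn(arn: string): string | null {
  const m = arn.match(/:function:([^:]+)(?::|$)/);
  return m ? m[1] : null;
}

// Render a byte count for human output instead of mixing "?" and raw numbers.
function humanBytes(bytes: number | undefined): string {
  if (bytes === undefined) return "unknown";
  const units = ["B", "KiB", "MiB", "GiB"];
  let v = bytes;
  let i = 0;
  while (v >= 1024 && i < units.length - 1) {
    v /= 1024;
    i += 1;
  }
  return `${v.toFixed(i === 0 ? 0 : 1)} ${units[i]}`;
}
```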
Notes
- CI: `player-perf` and `Perf: drift` are failing on this PR (https://github.com/heygen-com/hyperframes/actions/runs/25976583026/job/76357695595). These are optional checks (`mergeable_state: UNSTABLE`, not `BLOCKED`); not gating this verdict. Worth a quick rerun to confirm they're the usual flakes and not a regression from this PR.
- The `feat-lambda-sdk-cdk-construct` base is the stack target, not `main`. Confirmed via `baseRefName` on the PR; verdict is on the diff vs. that base.
Verdict: REQUEST CHANGES
Reasoning: Two flags (--memory, --profile) silently drop between dispatcher and AWS calls — --memory makes cost accounting wrong, --profile is a wrong-account-billing footgun. Both fixes are small. The --json --wait double-blob, the destroy-while-rendering hazard, and the faked-SiteHandle shape are next on the list. Architecture and abstractions are right; landing zone is the contract between dispatcher flags and SDK calls.
Review by Vai
Force-pushed d49753e to 30deb7b
miguel-heygen
left a comment
Both blockers from the previous review are addressed:
1. --memory flag silently dropped — Fixed. The memory arg is parsed via parsePositiveInt(args.memory, "--memory") in the lambda.ts command dispatcher (line 385) and forwarded to runDeploy as lambdaMemoryMb. In deploy.ts, it flows through resolved.lambdaMemoryMb into samDeploy(). In sam.ts, samDeploy() explicitly pushes LambdaMemoryMb=${opts.lambdaMemoryMb} into paramOverrides (line 1077) which are spread into --parameter-overrides. The full chain is intact: CLI flag -> DeployArgs -> samDeploy -> SAM parameter-overrides.
2. --profile silently dropped on render + progress — Fixed. The lambda.ts dispatcher sets process.env.AWS_PROFILE = profileFlag globally before the subcommand switch (line 372), so all downstream AWS SDK clients and CLI calls (render, progress, sites, destroy) inherit the profile via the environment. Additionally, samDeploy, samDelete, and fetchStackOutputs all explicitly pass --profile when awsProfile is set.
vanceingalls
left a comment
Re-review of 30deb7b. Both consumer-drop blockers resolved end-to-end; bundling fix verified by inspecting the built dist/cli.js.
Verified fixes
- Blocker 1 — `--memory` threaded into SAM. `sam.ts:86-88` now pushes `LambdaMemoryMb=${opts.lambdaMemoryMb}` into `--parameter-overrides`, `deploy.ts:71` forwards from the resolved args, `lambda.ts:151` parses via `parsePositiveInt`. End-to-end: `--memory 5120` now reaches CloudFormation AND matches the value stored in state for `progress.ts`'s cost math.
- Blocker 2 — `--profile` reaches render/progress. Solved at the dispatcher (`lambda.ts:137-140`) by setting `process.env.AWS_PROFILE` before any subverb runs. AWS SDK v3's default provider chain reads the env, so render/progress/sites all benefit without per-subverb plumbing. Cleaner than threading an `awsProfile` arg through every SDK call. Same treatment for `--region` via `AWS_REGION`. (Note: SDK clients are still constructed with `{ region: stack.region }`, which correctly wins over the env, so the state file remains the region source of truth on render.)
- `--wait --json` single JSON document. `render.ts:80-92`: with `wait` set, only `waitForCompletion`'s terminal snapshot is emitted (line 133). Without `wait`, only the handle. The double-blob path is gone — `jq -r` will parse cleanly now.
- Negative integer rejection. `parsePositiveInt` (`lambda.ts:254-261`) throws on `n < 1` or non-integer. Applied to `--width`, `--height`, `--chunk-size`, `--max-parallel-chunks`, `--memory`, `--concurrency`, `--wait-interval-ms`. `--fps` keeps the explicit `24|30|60` allow-list at `lambda.ts:194-197`.
- `destroy` `AWS_PROFILE` env fallback. `destroy.ts:32` now matches `deploy.ts:40`: `args.awsProfile ?? process.env.AWS_PROFILE`. `AWS_PROFILE=prod hyperframes lambda destroy` now hits the same account `deploy` did.
- `DEFAULT_STACK_NAME` centralised. Single literal `"hyperframes-default"` at `state.ts:35`; the three `` `hyperframes-${DEFAULT_STACK_NAME}` `` template concatenations are gone (grep confirms — only the literal is referenced now).
- Bundling fix verified. `tsup.config.ts` lists `@hyperframes/aws-lambda` + `@hyperframes/aws-lambda/sdk` as `external`, with the SDK subpath alias still in place so esbuild's subpath-as-file misfire doesn't bite. Moved to `dependencies` so installed-package users resolve at runtime. Inspected the built `dist/cli.js` (4.69 MB) — `@hyperframes/aws-lambda/sdk` survives as an import specifier; none of `@aws-sdk/client-sfn`, `@aws-sdk/client-s3`, or the `splitStream` symbol that was tripping esbuild is inlined.
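The dispatcher-level profile fix described above amounts to a few lines. A minimal sketch, with an illustrative function name: export the flags into the environment once, before any subverb runs, and every downstream AWS SDK v3 client picks them up through its default provider chain.

```typescript
// Illustrative dispatcher helper: set once, inherited by all subverbs.
// Absent flags leave any pre-existing env values untouched.
function applyGlobalAwsFlags(flags: { profile?: string; region?: string }): void {
  if (flags.profile) process.env.AWS_PROFILE = flags.profile;
  if (flags.region) process.env.AWS_REGION = flags.region;
}
```

Per the note above, clients constructed with an explicit `{ region: ... }` still win over the env, so the state file stays authoritative for region on render.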
Open follow-ups (next PR, not blocking)
- `destroy` doesn't check for in-flight SFN executions. Still straight-shells `sam delete --no-prompts` at `destroy.ts:28`. A `RUNNING` execution will get aborted mid-render when CloudFormation tears down the state machine. Same shape as the original finding; the commit message correctly scoped this out. Worth a `ListExecutions --status-filter RUNNING` + warn-or-`--force` gate in a follow-up. (important)
- Installed-package users still hit the repo-checkout assumption. `sam.ts:68` resolves `examples/aws-lambda/template.yaml` from `repoRoot()`, which doesn't exist outside a hyperframes checkout. Either ship the template inside `@hyperframes/aws-lambda` and resolve from `__dirname`, or update `cli.mdx:99-100` + `--help` to require a source checkout. (important)
- Unit-test coverage still thin. `state.test.ts` is the only test file. `parsePositiveInt`, `parseEnum`, `executionArnFromName`, and the `--wait --json` single-document mode would all benefit from one-screen unit tests today — they'd have caught all four importants above. (important)
- Faked `SiteHandle` placeholder fields. `render.ts:58-67` still hands `{bytes: 0, uploadedAt: ""}` to the SDK; the contract-fragility concern is unchanged, but it's mitigated by the SDK only reading `projectS3Uri` when `uploaded: false`. A cleaner long-term shape would be a `{siteId}`-only discriminated variant on `renderToLambda`. (nit now that `bucketName` is wired)
Verdict: APPROVE
Reasoning: Both consumer-drop blockers are fixed end-to-end; the dispatcher-level process.env.AWS_PROFILE approach is a cleaner fix than threading the flag per-subverb. The bundle external is verified in the built dist/cli.js. Remaining importants are scope-deferred and the commit message says so honestly. Land it; follow-up issues for the three opens above.
Review by Vai (re-review)
The base branch was changed.
Wraps the @hyperframes/aws-lambda SDK + the Phase 6a SAM template behind
a single CLI surface so an end-to-end render is three commands instead
of the ~8 manual bun+sam+aws steps the smoke script does today:
hyperframes lambda deploy
hyperframes lambda render ./my-project --width 1920 --height 1080 --wait
hyperframes lambda destroy
Subcommands:
- deploy: build handler.zip + sam-deploy + persist stack outputs
to <cwd>/.hyperframes/lambda-stack-<name>.json
- sites create: pre-upload a project to S3 with a stable content hash
so re-renders skip the tar+PUT pass
- render: start a Step Functions execution; --wait blocks and
streams per-chunk progress + accrued cost
- progress: one-shot snapshot — status, frames, cost breakdown,
errors. Accepts renderId or executionArn
- destroy: sam-delete + drop the local state file (S3 bucket
is Retain'd by the template; documented in --help
and in docs/packages/cli.mdx)
To keep @sparticuz/chromium out of the CLI's transitive deps, this also
adds a dedicated ./sdk subpath export to @hyperframes/aws-lambda; the
CLI imports from @hyperframes/aws-lambda/sdk exclusively. The existing
. barrel still re-exports both handler + SDK for adopters who want one
entry point.
Defaults are deliberately cost-conservative for first-time users:
--concurrency=8 (low enough to never surprise) and --memory=10240 (the
common case; documented for adopters who want to tune down).
Tests: 5 unit tests on the state-file round-trip. CLI integration
against sam local invoke is part of the upcoming PR 6.6 (lambda-local
regression harness).
Two small cleanups on top of the lambda CLI:
- Replace parseFormat / parseCodec / parseQuality / parseChromeSource
(four near-identical helpers) with a single generic parseEnum() +
typed const-tuple lookups. The four callers now read as one-line
arrow functions that lift the allowed values out of the function
body so they're easy to extend.
- DEFAULT_STACK_NAME was const-declared then re-exported at the
bottom of state.ts; just mark the const export inline.
No behavior changes. All CLI tests still pass.
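The `parseEnum()` consolidation described above can be sketched as follows. The allowed values and helper names here are illustrative (the PR's actual tuples may differ); the point is one generic parser plus const-tuple lookups replacing four near-identical functions.

```typescript
// Generic enum-flag parser: returns a typed one-liner per flag.
function parseEnum<T extends readonly string[]>(
  allowed: T,
  flag: string,
): (value: string | undefined) => T[number] | undefined {
  return (value) => {
    if (value === undefined) return undefined; // flag not passed
    if (!(allowed as readonly string[]).includes(value)) {
      throw new Error(`${flag} must be one of ${allowed.join(", ")}; got "${value}"`);
    }
    return value as T[number];
  };
}

// Callers lift the allowed values out of the function body
// (illustrative value sets, not the PR's exact lists):
const parseFormat = parseEnum(["mp4", "webm", "gif"] as const, "--format");
const parseCodec = parseEnum(["h264", "vp9"] as const, "--codec");
```

Extending a flag's domain is then a one-token edit to the tuple, and the return type narrows automatically.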
esbuild can't bundle @hyperframes/aws-lambda's transitive AWS SDK
deps (@aws-sdk/* + @smithy/*) cleanly into a node binary — the
SDK's .browser.js conditional re-exports break the resolver:
ESM Build failed
No matching export in "splitStream.browser.js" for import
"splitStream" (and ~10 similar errors)
Mark aws-lambda as `external` so esbuild doesn't follow it, and
move it from devDependencies to dependencies so the published CLI
can resolve it from node_modules at runtime. The lambda subverb
files dynamic-import only on `hyperframes lambda *` invocation, so
the CLI cold-start cost is unchanged.
The install-size hit (AWS SDK + @sparticuz/chromium ≈ 200 MiB) is
documented as a v1 tradeoff; a future split into a lambda-sdk-only
subpackage can pare this back.
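The `external` change described above is a small tsup config edit. An illustrative fragment (entry point and options beyond the `external` list are assumptions, not the PR's exact config):

```typescript
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/cli.ts"], // illustrative entry
  format: ["esm"],
  // Don't let esbuild follow @hyperframes/aws-lambda into the AWS SDK's
  // .browser.js conditional re-exports; resolve it from node_modules
  // at runtime instead.
  external: ["@hyperframes/aws-lambda", "@hyperframes/aws-lambda/sdk"],
});
```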
Two blockers + four important items from Vai's review:
- `--memory` was parsed and recorded in the local state file but
never forwarded to `sam deploy` as a parameter override. Worse,
`progress.ts` then read the *recorded* value for cost math, so
`--memory 5120` produced wrong cost numbers downstream. Thread
`LambdaMemoryMb` through samDeploy's --parameter-overrides.
- `--profile` was only consumed by deploy / destroy. render and
progress fell back to the default credentials chain — a user
with `--profile prod` would silently render against their
default account (wrong-account billing footgun). Set
`process.env.AWS_PROFILE` (and `AWS_REGION`) in the dispatcher
before any subverb runs; the AWS SDK reads them natively, so
render / progress / sites all benefit without each subverb
threading the flag through the SDK call.
- `--profile` + destroy now also reads `process.env.AWS_PROFILE`
as a fallback (matching deploy's existing env fallback).
- `--wait --json` printed both the start handle AND the final
progress snapshot, producing two concatenated JSON blobs that
`jq` rejected. Now emits a single document: handle (without
--wait) OR final progress (with --wait).
- Negative integers on `--width` / `--height` / `--chunk-size` /
`--max-parallel-chunks` / `--memory` / `--concurrency` now fail
loudly via a new `parsePositiveInt` wrapper instead of flowing
into the SDK and producing opaque AWS validation errors mid-
render.
- `DEFAULT_STACK_NAME` is now centralized to the literal
`"hyperframes-default"` and consumed from one place. Previously
the value was assembled as `hyperframes-${"default"}` in three
sites and hardcoded as `"hyperframes-default"` in a fourth.
`requireStack`'s hint now matches the dispatcher's default.
The faked `SiteHandle` for `--site-id` keeps the documented
placeholder fields but also surfaces `bucketName` (from PR 909's
extended SiteHandle interface), matching the SDK contract.
All CLI unit tests + the full bundler build still pass.
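The `--wait --json` fix above reduces to a single-document rule: one JSON value per invocation, chosen by mode. A minimal sketch with illustrative types and names (the PR's actual shapes differ):

```typescript
interface RenderHandle { renderId: string; executionArn: string; }
interface ProgressSnapshot { renderId: string; done: boolean; framesRendered: number; }

// In --json mode, emit exactly one parseable document: the start handle
// without --wait, or the terminal progress snapshot with --wait.
function jsonOutput(
  handle: RenderHandle,
  wait: boolean,
  finalSnapshot: ProgressSnapshot | null,
): string {
  const doc = wait ? finalSnapshot : handle;
  return JSON.stringify(doc, null, 2); // single blob, jq-safe
}
```

Either branch produces output that `jq` and strict parsers accept without trailing garbage.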
Force-pushed 30deb7b to 8ff0fc2
miguel-heygen
left a comment
Re-approve after rebase onto main. Diff verified unchanged — --memory and --profile flags still forwarded end-to-end through SDK calls.
vanceingalls
left a comment
Re-approve after rebase onto main. Force-push dismissed my prior --approve (require_last_push_approval: true) — content unchanged, same commits replayed on the new base. All findings from the prior review's resolution still apply.
Re-review by Vai (post-rebase re-stamp)
The "Smoke: global install" CI step packs the CLI via `npm pack` and
installs it globally via `npm install -g <tgz>`. npm doesn't understand
the workspace: protocol, so a runtime `dependencies` entry of
`@hyperframes/aws-lambda: workspace:*` blows up with:
npm error code EUNSUPPORTEDPROTOCOL
npm error Unsupported URL Type "workspace:": workspace:*
(pnpm rewrites workspace:* on publish; npm pack doesn't.)
Three changes to unblock the smoke + keep the published CLI install
small for users who don't deploy to Lambda:
- Move `@hyperframes/aws-lambda` from CLI's `dependencies` back to
`devDependencies`. It's already external in tsup.config.ts; the
bundle references it via runtime resolution only.
- Convert the static `import { … } from "@hyperframes/aws-lambda/sdk"`
in sites.ts / render.ts / progress.ts to `await import()` inside
each function. tsup with `splitting: false` was inlining those
static imports at the top of the bundle, which made Node eagerly
resolve them at CLI startup (MODULE_NOT_FOUND before any lambda
subcommand even runs). Dynamic imports stay dynamic in the bundle.
- Add a friendly missing-module check in the lambda dispatcher.
When a user runs `hyperframes lambda deploy / render / sites /
progress / destroy` without aws-lambda installed, they now see:
@hyperframes/aws-lambda is not installed.
The `hyperframes lambda deploy` command needs it at runtime.
Install it alongside the CLI:
npm install -g @hyperframes/aws-lambda
Verified locally: pack + global install + `hyperframes init --example
blank` now succeeds end-to-end (was the same scenario the CI smoke job
runs).
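The friendly missing-module check described above hinges on distinguishing "the optional package isn't installed" from any other import failure. A hedged sketch (function names and the exact message text are illustrative, not the PR's code):

```typescript
// Classify a dynamic-import failure: Node reports a missing module as
// ERR_MODULE_NOT_FOUND (ESM) or MODULE_NOT_FOUND (CJS-transpiled).
function isMissingModuleError(err: unknown): boolean {
  const code = (err as { code?: string } | undefined)?.code;
  return code === "ERR_MODULE_NOT_FOUND" || code === "MODULE_NOT_FOUND";
}

// Dispatcher-side guard: return the module, or null after printing an
// actionable install hint; rethrow anything that isn't a missing module.
async function importLambdaSdk<T>(specifier: string): Promise<T | null> {
  try {
    return (await import(specifier)) as T;
  } catch (err) {
    if (isMissingModuleError(err)) {
      console.error(
        `${specifier} is not installed.\n` +
          `The \`hyperframes lambda\` commands need it at runtime.\n` +
          `Install it alongside the CLI:\n  npm install -g ${specifier}`,
      );
      return null;
    }
    throw err;
  }
}
```

Rethrowing non-missing-module errors matters: a syntax error inside the installed package should surface loudly, not masquerade as "not installed."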
miguel-heygen
left a comment
Re-approve on 51556b5. Smoke fix (aws-lambda devDep + dynamic imports) reviewed earlier — clean approach.
vanceingalls
left a comment
Re-approve after rebase + #910 smoke fix.
#910 adds the CLI smoke fix on top: @hyperframes/aws-lambda moved to devDependencies, dispatcher dynamic-imports @hyperframes/aws-lambda/sdk (lambda.ts:150) with a friendly ERR_MODULE_NOT_FOUND → npm install handler at :152-158. npm pack / npm install now works because there's no workspace:* protocol in published dependencies. Clean fix.
#912/#913/#914/#915 are pure rebases on top — same commits replayed on the new base, content unchanged vs. the last approved round. Findings from the prior review's resolution still apply.
Re-review by Vai (post-smoke-fix re-stamp)

What
Adds `hyperframes lambda` to the CLI so an end-to-end Lambda render is three commands instead of the ~8 manual bun + sam + aws s3 + aws stepfunctions steps that `examples/aws-lambda/scripts/smoke.sh` does today.
Subcommands:
- `deploy` — builds `packages/aws-lambda/dist/handler.zip` and runs `sam deploy` against the Phase 6a SAM template. Persists `{ bucketName, stateMachineArn, functionName, region, lambdaMemoryMb }` to `<cwd>/.hyperframes/lambda-stack-<name>.json` so subsequent verbs don't need to re-derive them.
- `sites create <projectDir>` — pre-uploads a project with a content-addressed siteId (wraps `deploySite`). Multiple renders of the same tree share one upload.
- `render <projectDir>` — wraps `renderToLambda`. Returns a renderId immediately, or `--wait` blocks and streams per-chunk progress + accrued cost.
- `progress <renderId | executionArn>` — wraps `getRenderProgress`. Accepts a bare renderId (resolved against the stack's state-machine ARN) or a full ARN.
- `destroy` — `sam delete --no-prompts` + drops the local state file. The render bucket is `Retain`'d by the template; documented in `--help` + cli.mdx.
Why
PR #909 (the SDK + CDK) made the Lambda surface programmatic; this PR makes it usable from a terminal. The two surfaces are kept in lockstep by having the CLI consume the SDK directly — no duplicate logic. Per `DISTRIBUTED-RENDERING-PLAN.md` § 11 Phase 6b, this is the "headline UX win" of Phase 6b.
How
- `packages/cli/src/commands/lambda.ts` dispatcher + `packages/cli/src/commands/lambda/{deploy,sites,render,progress,destroy,sam,state,repoRoot}.ts`.
- Wired into `packages/cli/src/cli.ts` `subCommands` and added to `packages/cli/src/help.ts` GROUPS under a new "Deploy" section.
- `docs/packages/cli.mdx` extended with a full `hyperframes lambda` section covering prerequisites, all five subcommands, and state-file semantics.
- To keep `@sparticuz/chromium` out of the CLI's transitive dep graph, this PR also adds a dedicated `./sdk` subpath export to `@hyperframes/aws-lambda` and wires the CLI to import from `@hyperframes/aws-lambda/sdk` exclusively. The `.` barrel is unchanged (still re-exports both handler + SDK for adopters who want one entry point).
Notable design calls:
- `deploy` shells out to `sam deploy` rather than driving the CloudFormation API programmatically. SAM handles rollback-on-failure semantics correctly, and re-implementing them in TypeScript would duplicate a non-trivial chunk of the SAM CLI. CDK adopters use `HyperframesRenderStack` directly from their own CDK app.
- `--concurrency` defaults to 8 (low) so first-time users don't get surprise-billed by a runaway Map state. Adopters tune up explicitly.
- State lives at `<cwd>/.hyperframes/lambda-stack-<name>.json` — project-local on purpose. A developer running `deploy` in two different worktrees gets two distinct state files.
Test plan
- `bun run --cwd packages/cli typecheck`
- `bunx oxlint` / `bunx oxfmt --check`
- `hyperframes lambda` (subcommand listing) and `hyperframes --help` (Deploy group visible)
- `docs/packages/cli.mdx`
- CLI integration against `sam local invoke` comes with PR 6.6 (lambda-local regression harness, stacked next)
Depends on #909.
🤖 Generated with Claude Code