
feat(cli): add hyperframes lambda deploy/render/progress/destroy (#910)

Merged
jrusso1020 merged 5 commits into main from feat-lambda-cli on May 17, 2026

Conversation


@jrusso1020 jrusso1020 commented May 16, 2026

What

Adds hyperframes lambda to the CLI so an end-to-end Lambda render is three commands instead of the ~8 manual bun + sam + aws s3 + aws stepfunctions steps that examples/aws-lambda/scripts/smoke.sh does today.

hyperframes lambda deploy
hyperframes lambda render ./my-project --width 1920 --height 1080 --wait
hyperframes lambda destroy

Subcommands:

  • deploy — builds packages/aws-lambda/dist/handler.zip and runs sam deploy against the Phase 6a SAM template. Persists { bucketName, stateMachineArn, functionName, region, lambdaMemoryMb } to <cwd>/.hyperframes/lambda-stack-<name>.json so subsequent verbs don't need to re-derive them.
  • sites create <projectDir> — pre-uploads a project with a content-addressed siteId (wraps deploySite). Multiple renders of the same tree share one upload.
  • render <projectDir> — wraps renderToLambda. Returns a renderId immediately, or --wait blocks and streams per-chunk progress + accrued cost.
  • progress <renderId | executionArn> — wraps getRenderProgress. Accepts a bare renderId (resolved against the stack's state-machine ARN) or a full ARN.
  • destroy — runs sam delete --no-prompts and drops the local state file. The render bucket is Retain'd by the template; documented in --help + cli.mdx.
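The persisted state file is what lets the later verbs skip re-deriving stack outputs. A minimal sketch of its shape and a tolerant reader, assuming the field names from the description above (the helper name is hypothetical):

```typescript
import { readFileSync } from "node:fs";
import { join } from "node:path";

// Shape persisted by `deploy`, per the PR description.
interface LambdaStackState {
  bucketName: string;
  stateMachineArn: string;
  functionName: string;
  region: string;
  lambdaMemoryMb: number;
}

// Hypothetical reader: malformed or missing JSON returns null rather
// than throwing, which is the right behavior for a local cache file.
function readStackState(name: string, cwd = process.cwd()): LambdaStackState | null {
  const file = join(cwd, ".hyperframes", `lambda-stack-${name}.json`);
  try {
    return JSON.parse(readFileSync(file, "utf8")) as LambdaStackState;
  } catch {
    return null;
  }
}
```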

Why

PR #909 (the SDK + CDK) made the Lambda surface programmatic; this PR makes it usable from a terminal. The two surfaces stay in lockstep because the CLI consumes the SDK directly, with no duplicated logic. Per DISTRIBUTED-RENDERING-PLAN.md § 11, this is the "headline UX win" of Phase 6b.

How

  • New packages/cli/src/commands/lambda.ts dispatcher + packages/cli/src/commands/lambda/{deploy,sites,render,progress,destroy,sam,state,repoRoot}.ts.
  • Registered in packages/cli/src/cli.ts subCommands and added to packages/cli/src/help.ts GROUPS under a new "Deploy" section.
  • docs/packages/cli.mdx extended with a full hyperframes lambda section covering prerequisites, all five subcommands, and state-file semantics.
  • To keep @sparticuz/chromium out of the CLI's transitive dep graph, this PR also adds a dedicated ./sdk subpath export to @hyperframes/aws-lambda and wires the CLI to import from @hyperframes/aws-lambda/sdk exclusively. The . barrel is unchanged (still re-exports both handler + SDK for adopters who want one entry point).

Notable design calls:

  • deploy shells out to sam deploy rather than driving the CloudFormation API programmatically. SAM handles rollback-on-failure semantics correctly and re-implementing them in TypeScript would duplicate a non-trivial chunk of the SAM CLI. CDK adopters use HyperframesRenderStack directly from their own CDK app.
  • --concurrency defaults to 8 (low) so first-time users don't get surprise-billed by a runaway Map state. Adopters tune up explicitly.
  • State files live at <cwd>/.hyperframes/lambda-stack-<name>.json — project-local on purpose. A developer running deploy in two different worktrees gets two distinct state files.

Test plan

  • Unit tests added — 5 tests on the state-file round-trip
  • Typecheck clean: bun run --cwd packages/cli typecheck
  • Lint + format clean: bunx oxlint / bunx oxfmt --check
  • CLI help renders correctly: hyperframes lambda (subcommand listing) and hyperframes --help (Deploy group visible)
  • Documentation updated — docs/packages/cli.mdx
  • CLI integration against sam local invoke — comes with PR 6.6 (lambda-local regression harness, stacked next)
  • Real-AWS end-to-end smoke against the CLI — maintainer-run after the stack merges

Depends on #909.

🤖 Generated with Claude Code


@miguel-heygen miguel-heygen left a comment


Review: feat(cli): add hyperframes lambda deploy/render/progress/destroy

Approve — well-structured CLI surface for the Lambda deployment flow.

What I verified

  • Command structure: Follows existing CLI patterns (citty defineCommand, positional args, _examples.ts exports). Subcommands are properly routed via switch.
  • Security: All subprocess calls use spawnSync/execFileSync with array args — no shell injection risk. SAM deploy params built as array, not string interpolation. AWS profile passed via --profile flag, not env manipulation.
  • SAM integration: assertSamAvailable() gate before deploy. locateSamTemplate() resolves from repo root. fetchStackOutputs() parses CFN outputs correctly.
  • State management: Local .hyperframes-lambda.json state file for stack name/region persistence between commands. state.test.ts covers read/write/default paths.
  • Render flow: Proper polling loop with configurable interval, timeout handling, and --wait mode that blocks until complete.
  • Sites subcommand: Clean S3 upload with content-type detection and zip/compile/upload pipeline.
  • Error handling: Each subcommand has specific error messages with actionable hints.
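The array-args subprocess pattern the review verifies can be sketched like this; the builder function name and flag set are illustrative, not the PR's actual code:

```typescript
import { spawnSync } from "node:child_process";

// Array args go straight to the exec syscall: no shell parses them, so a
// hostile stack name like `x; rm -rf /` stays one literal argument.
function samDeployArgs(stackName: string, profile?: string): string[] {
  const args = ["deploy", "--stack-name", stackName, "--no-confirm-changeset"];
  if (profile) args.push("--profile", profile); // flag, not env manipulation
  return args;
}

// In the real CLI this would be spawnSync("sam", samDeployArgs(...), ...);
// `echo` stands in here to show the args survive verbatim.
const result = spawnSync("echo", samDeployArgs("x; rm -rf /"), { encoding: "utf8" });
console.log(result.stdout.trim());
```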

CI note

6 perf test failures (drift, fps, load, parity, scrub, player-perf) — these look like base-branch version bump conflicts, not issues with the CLI code itself. The CLI code doesn't touch engine/player paths.

Non-blocking observations

  1. repoRoot.ts walks up from __dirname looking for package.json with workspaces — this only works when running from the monorepo checkout. Worth a comment noting this won't work from a globally-installed hyperframes CLI.
  2. The --chunk-count default of 4 is hardcoded in render.ts args — consider making this configurable via the state file or environment.


@vanceingalls vanceingalls left a comment


One-line summary: solid CLI scaffold and a real UX win, but several flags are silently dropped between dispatcher and SDK call, and --memory doesn't actually parameterize the deploy — that's a correctness bug, not a polish item.

Audited: packages/cli/src/commands/lambda.ts, lambda/{deploy,destroy,render,progress,sites,sam,repoRoot,state,state.test}.ts, packages/aws-lambda/src/sdk/index.ts, docs/packages/cli.mdx, template.yaml Parameters + Outputs.
Trusting: the SDK internals (renderToLambda, deploySite, getRenderProgress) — read integration points only.

Strengths

  • state.ts:35-46 round-trips through disk with a thin enough abstraction that the test covers it well. Malformed-JSON returns null instead of throwing (state.ts:55) — that's the right call for a local cache file.
  • Lazy import("./lambda/<verb>.js") in the dispatcher (lambda.ts:128, 141, 161, 196, 208) keeps the CLI's hot-path startup cost off the Lambda subcommands. Good instinct.
  • requireStack (state.ts:74-90) is exactly the right shared error surface — single hint message, lists known stacks, exits 1.

Blockers

  • --memory is silently dropped on deploy. lambda.ts:147 parses --memory and deploy.ts:43, 75 records it in the state file as lambdaMemoryMb. But samDeploy (sam.ts:78-94) only forwards ChromeSource + ReservedConcurrency; LambdaMemoryMb is never in --parameter-overrides. The SAM template (template.yaml:21-28) accepts LambdaMemoryMb, but it'll always resolve to the template default (10240). Worse: cost math in progress.ts:32 (defaultMemorySizeMb: stack.lambdaMemoryMb) reads the recorded value, not the actual deployed value, so --memory 5120 produces wrong cost numbers downstream. Either forward the parameter to SAM, or remove the flag from --help until it works.

  • --profile is silently dropped on render and progress. lambda.ts:122 accepts profile. runDeploy and runDestroy consume it. But runRender (lambda.ts:154-191) and runProgress (lambda.ts:199-204) don't take or forward an awsProfile. The SDK calls under them will fall back to the default credentials chain — a user with --profile prod will silently render against their default account. This is the most dangerous flavor of dropped flag (wrong-account billing). Either thread awsProfile into the SDK calls or strip --profile from the help text and document it as deploy/destroy-only.

Important

  • render --wait --json emits two concatenated JSON blobs. render.ts:74-82 prints the handle as JSON, then waitForCompletion prints another full JSON snapshot on terminal state (render.ts:114). Result is not a valid single JSON document — jq will read the first and treat the rest as trailing garbage on strict parsers. Either NDJSON it explicitly (and document the format) or emit only the final progress snapshot in --json --wait mode.

  • destroy doesn't check for in-flight renders before sam delete. destroy.ts:21 straight-shells to sam delete --no-prompts. A Step Functions execution in RUNNING state isn't a stack-deletion blocker — CloudFormation will tear down the state machine while executions are still marching, aborting them mid-render. At minimum: list executions --status-filter RUNNING via the SDK and either warn (and require --force) or wait. The PR description's spec for this verb actually calls this out as a failure mode.

  • runDestroy.awsProfile ignores AWS_PROFILE. deploy.ts:42 falls back to process.env.AWS_PROFILE, but destroy.ts:25 only takes the explicit flag (lambda.ts:208 passes args.profile without env fallback). Same AWS_PROFILE=foo bash -c "hyperframes lambda deploy && hyperframes lambda destroy" will use the env profile for deploy and the default chain for destroy. Mirror the env fallback.

  • width / height accept negative integers. lambda.ts:155-164: parseIntFlag(args.width) returns -100 for --width=-100, the !width truthiness check accepts it (since -100 is truthy), and the value flows into the SDK config. Validation should be width > 0 && Number.isInteger(width). Same for chunk-size, max-parallel-chunks, memory, concurrency. parseIntFlag is the right place to bound these.

  • Faked SiteHandle when --site-id is passed is contract-fragile. render.ts:43-50 constructs {siteId, projectS3Uri, bytes: 0, uploadedAt: "", uploaded: false} and hands it to renderToLambda. This is only correct if the SDK treats siteHandle as a pure-by-id lookup and ignores the bag's other fields when uploaded: false. If the SDK ever reads bytes or uploadedAt (logging, telemetry, validation, idempotency), this silently corrupts those fields. Two cleaner options: (a) renderToLambda should accept {siteId: string} as a discriminated variant and resolve the rest via HeadObject; (b) the CLI should call deploySite({siteId}) for the resolve-only path. The current "lie convincingly to the SDK" shape isn't durable.

  • DEFAULT_STACK_NAME is referenced twice with different resolved values. state.ts:24 defines DEFAULT_STACK_NAME = "default". deploy.ts:38 builds the default as `hyperframes-${DEFAULT_STACK_NAME}`, i.e. "hyperframes-default". lambda.ts:117 hardcodes "hyperframes-default" again. requireStack (state.ts:81) compares against the literal "default" for hint-formatting, so the hint always emits --stack-name=hyperframes-default even when the user is in the default state. Pick one default string and centralize. The duplication will rot.

  • Installed-package users get a confusing failure. repoRoot.ts:14-26 walks up looking for packages/aws-lambda/package.json; locateSamTemplate (sam.ts:35-43) looks for examples/aws-lambda/template.yaml. Neither exists outside a hyperframes checkout. The PR docs (cli.mdx:99-100) only list SAM CLI + bun + AWS creds as prerequisites — there's no mention that you also need a hyperframes source checkout. Either ship the template inside the package (and update locateSamTemplate to resolve from __dirname), or update the docs and --help to say "run from a hyperframes checkout."

  • Unit-test coverage is thin. state.test.ts covers only the state-file round-trip. No coverage for: dispatcher subcommand routing, the enum parsers (parseFormat/parseCodec/parseQuality/parseChromeSource), parseIntFlag, executionArnFromName, or the --json modes. The PR description punts integration tests to PR 6.6, which is fine, but the pure-function surface here is testable today and the bugs above (silent flag drops, negative width, --memory not plumbed, double-JSON) would all be caught by unit tests on the dispatcher.
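The bound asked for in the negative-integer finding is small; a hypothetical strict parser along the lines the review suggests (the name and signature are mine, not the PR's):

```typescript
// Hypothetical strict flag parser: rejects -100, 0, 1.5, and "abc" up
// front instead of letting them flow into the SDK and surface later as
// opaque AWS validation errors mid-render.
function parsePositiveInt(raw: string | number | undefined, flag: string): number | undefined {
  if (raw === undefined) return undefined; // flag not passed: caller applies its default
  const n = typeof raw === "number" ? raw : Number(raw);
  if (!Number.isInteger(n) || n < 1) {
    throw new Error(`${flag} must be a positive integer, got ${raw}`);
  }
  return n;
}
```

The same guard covers --width, --height, --chunk-size, --max-parallel-chunks, --memory, and --concurrency with one call site each.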

Nits

  • sam.ts:118-122: functionName.split(":").pop() extracts the last ARN segment. Works for function:name ARNs, breaks for qualified-version ARNs (function:name:1). SAM doesn't qualify by default so it's not a today-bug, but a regex match against :function:([^:]+)(:|$) would be more honest.
  • deploy.ts:103-113: no assertBunAvailable() parallel to assertSamAvailable/assertAwsCliAvailable. A missing bun errors with spawn bun ENOENT instead of the nice hint the other tools get.
  • progress.ts:55: progress.outputFile.bytes ?? "?" — the human format mixes ? and numbers in a bytes field. A small humanBytes() helper would read better.
  • lambda.ts:233-237: the parseEnum<T> helper throws on bad input; the dispatcher's other validation paths use console.error + process.exit(1). Either flow is fine but the inconsistency is jarring — a thrown error will print a stack trace, the exit(1) path won't.

Notes

  • CI: player-perf and Perf: drift are failing on this PR (https://github.com/heygen-com/hyperframes/actions/runs/25976583026/job/76357695595). These are optional checks (mergeable_state: UNSTABLE, not BLOCKED); not gating this verdict. Worth a quick rerun to confirm they're the usual flakes and not a regression from this PR.
  • The feat-lambda-sdk-cdk-construct base is the stack target, not main. Confirmed via baseRefName on the PR; verdict is on the diff vs. that base.

Verdict: REQUEST CHANGES
Reasoning: Two flags (--memory, --profile) silently drop between dispatcher and AWS calls — --memory makes cost accounting wrong, --profile is a wrong-account-billing footgun. Both fixes are small. The --json --wait double-blob, the destroy-while-rendering hazard, and the faked-SiteHandle shape are next on the list. Architecture and abstractions are right; landing zone is the contract between dispatcher flags and SDK calls.

Review by Vai

@jrusso1020 jrusso1020 force-pushed the feat-lambda-cli branch 2 times, most recently from d49753e to 30deb7b on May 17, 2026 00:42
miguel-heygen previously approved these changes May 17, 2026

@miguel-heygen miguel-heygen left a comment


Both blockers from the previous review are addressed:

1. --memory flag silently dropped — Fixed. The memory arg is parsed via parsePositiveInt(args.memory, "--memory") in the lambda.ts command dispatcher (line 385) and forwarded to runDeploy as lambdaMemoryMb. In deploy.ts, it flows through resolved.lambdaMemoryMb into samDeploy(). In sam.ts, samDeploy() explicitly pushes LambdaMemoryMb=${opts.lambdaMemoryMb} into paramOverrides (line 1077) which are spread into --parameter-overrides. The full chain is intact: CLI flag -> DeployArgs -> samDeploy -> SAM parameter-overrides.

2. --profile silently dropped on render + progress — Fixed. The lambda.ts dispatcher sets process.env.AWS_PROFILE = profileFlag globally before the subcommand switch (line 372), so all downstream AWS SDK clients and CLI calls (render, progress, sites, destroy) inherit the profile via the environment. Additionally, samDeploy, samDelete, and fetchStackOutputs all explicitly pass --profile when awsProfile is set.
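The dispatcher-level fix described in point 2 can be sketched as a tiny pre-switch hook; a simplified sketch, assuming the dispatcher shape described above:

```typescript
// Export the profile and region once, before the subcommand switch, so
// every downstream AWS SDK client (render, progress, sites, destroy)
// inherits them via the default credentials provider chain.
function applyGlobalAwsFlags(args: { profile?: string; region?: string }): void {
  if (args.profile) process.env.AWS_PROFILE = args.profile;
  if (args.region) process.env.AWS_REGION = args.region;
}
```

This avoids threading an awsProfile parameter through every SDK call, at the cost of mutating process-global state before dispatch.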

vanceingalls previously approved these changes May 17, 2026

@vanceingalls vanceingalls left a comment


Re-review of 30deb7b. Both consumer-drop blockers resolved end-to-end; bundling fix verified by inspecting the built dist/cli.js.

Verified fixes

  • Blocker 1 — --memory threaded into SAM. sam.ts:86-88 now pushes LambdaMemoryMb=${opts.lambdaMemoryMb} into --parameter-overrides, deploy.ts:71 forwards from the resolved args, lambda.ts:151 parses via parsePositiveInt. End-to-end: --memory 5120 now reaches CloudFormation AND matches the value stored in state for progress.ts's cost math.
  • Blocker 2 — --profile reaches render/progress. Solved at the dispatcher (lambda.ts:137-140) by setting process.env.AWS_PROFILE before any subverb runs. AWS SDK v3's default provider chain reads the env, so render/progress/sites all benefit without per-subverb plumbing. Cleaner than threading an awsProfile arg through every SDK call. Same treatment for --region via AWS_REGION. (Note: SDK clients are still constructed with { region: stack.region }, which correctly wins over the env so the state file remains the region source-of-truth on render.)
  • --wait --json single JSON document. render.ts:80-92: with wait set, only waitForCompletion's terminal snapshot is emitted (line 133). Without wait, only the handle. The double-blob path is gone — jq -r will parse cleanly now.
  • Negative integer rejection. parsePositiveInt (lambda.ts:254-261) throws on n < 1 or non-integer. Applied to --width, --height, --chunk-size, --max-parallel-chunks, --memory, --concurrency, --wait-interval-ms. --fps keeps the explicit 24|30|60 allow-list at lambda.ts:194-197.
  • destroy AWS_PROFILE env fallback. destroy.ts:32 now matches deploy.ts:40: args.awsProfile ?? process.env.AWS_PROFILE. AWS_PROFILE=prod hyperframes lambda destroy now hits the same account deploy did.
  • DEFAULT_STACK_NAME centralised. Single literal "hyperframes-default" at state.ts:35; the three `hyperframes-${DEFAULT_STACK_NAME}` template concatenations are gone (grep confirms — only the literal is referenced now).
  • Bundling fix verified. tsup.config.ts lists @hyperframes/aws-lambda + @hyperframes/aws-lambda/sdk as external, with the SDK subpath alias still in place so esbuild's subpath-as-file misfire doesn't bite. Moved to dependencies so installed-package users resolve at runtime. Inspected the built dist/cli.js (4.69 MB) — @hyperframes/aws-lambda/sdk survives as an import specifier, none of @aws-sdk/client-sfn, @aws-sdk/client-s3, or the splitStream symbol that was tripping esbuild is inlined.

Open follow-ups (next PR, not blocking)

  • destroy doesn't check for in-flight SFN executions. Still straight-shells sam delete --no-prompts at destroy.ts:28. A RUNNING execution will get aborted mid-render when CloudFormation tears down the state machine. Same shape as the original finding; commit message correctly scoped this out. Worth a ListExecutions --status-filter RUNNING + warn-or---force gate in a follow-up. (important)
  • Installed-package users still hit the repo-checkout assumption. sam.ts:68 resolves examples/aws-lambda/template.yaml from repoRoot(), which doesn't exist outside a hyperframes checkout. Either ship the template inside @hyperframes/aws-lambda and resolve from __dirname, or update cli.mdx:99-100 + --help to require a source checkout. (important)
  • Unit-test coverage still thin. state.test.ts is the only test file. parsePositiveInt, parseEnum, executionArnFromName, and the --wait --json single-document mode would all benefit from one-screen unit tests today — they'd have caught all four importants above. (important)
  • Faked-SiteHandle placeholder fields. render.ts:58-67 still hands {bytes: 0, uploadedAt: ""} to the SDK; the contract-fragility concern is unchanged, but it's mitigated by the SDK only reading projectS3Uri when uploaded: false. Cleaner long-term shape would be a {siteId}-only discriminated variant on renderToLambda. (nit now that bucketName is wired)

Verdict: APPROVE
Reasoning: Both consumer-drop blockers are fixed end-to-end; the dispatcher-level process.env.AWS_PROFILE approach is a cleaner fix than threading the flag per-subverb. The bundle external is verified in the built dist/cli.js. Remaining importants are scope-deferred and the commit message says so honestly. Land it; follow-up issues for the three opens above.

Review by Vai (re-review)

@jrusso1020 jrusso1020 changed the base branch from feat-lambda-sdk-cdk-construct to main May 17, 2026 07:02
@jrusso1020 jrusso1020 dismissed stale reviews from vanceingalls and miguel-heygen May 17, 2026 07:02

The base branch was changed.

Wraps the @hyperframes/aws-lambda SDK + the Phase 6a SAM template behind
a single CLI surface so an end-to-end render is three commands instead
of the ~8 manual bun+sam+aws steps the smoke script does today:

  hyperframes lambda deploy
  hyperframes lambda render ./my-project --width 1920 --height 1080 --wait
  hyperframes lambda destroy

Subcommands:
  - deploy:        build handler.zip + sam-deploy + persist stack outputs
                   to <cwd>/.hyperframes/lambda-stack-<name>.json
  - sites create:  pre-upload a project to S3 with a stable content hash
                   so re-renders skip the tar+PUT pass
  - render:        start a Step Functions execution; --wait blocks and
                   streams per-chunk progress + accrued cost
  - progress:      one-shot snapshot — status, frames, cost breakdown,
                   errors. Accepts renderId or executionArn
  - destroy:       sam-delete + drop the local state file (S3 bucket
                   is Retain'd by the template; documented in --help
                   and in docs/packages/cli.mdx)

To keep @sparticuz/chromium out of the CLI's transitive deps, this also
adds a dedicated ./sdk subpath export to @hyperframes/aws-lambda; the
CLI imports from @hyperframes/aws-lambda/sdk exclusively. The existing
. barrel still re-exports both handler + SDK for adopters who want one
entry point.

Defaults are deliberately cost-conservative for first-time users:
--concurrency=8 (low enough to never surprise) and --memory=10240 (the
common case; documented for adopters who want to tune down).

Tests: 5 unit tests on the state-file round-trip. CLI integration
against sam local invoke is part of the upcoming PR 6.6 (lambda-local
regression harness).

Two small cleanups on top of the lambda CLI:

  - Replace parseFormat / parseCodec / parseQuality / parseChromeSource
    (four near-identical helpers) with a single generic parseEnum() +
    typed const-tuple lookups. The four callers now read as one-line
    arrow functions that lift the allowed values out of the function
    body so they're easy to extend.

  - DEFAULT_STACK_NAME was const-declared then re-exported at the
    bottom of state.ts; just mark the const export inline.

No behavior changes. All CLI tests still pass.

esbuild can't bundle @hyperframes/aws-lambda's transitive AWS SDK
deps (@aws-sdk/* + @smithy/*) cleanly into a node binary — the
SDK's .browser.js conditional re-exports break the resolver:

  ESM Build failed
    No matching export in "splitStream.browser.js" for import
    "splitStream" (and ~10 similar errors)

Mark aws-lambda as `external` so esbuild doesn't follow it, and
move it from devDependencies to dependencies so the published CLI
can resolve it from node_modules at runtime. The lambda subverb
files dynamic-import only on `hyperframes lambda *` invocation, so
the CLI cold-start cost is unchanged.

The install-size hit (AWS SDK + @sparticuz/chromium ≈ 200 MiB) is
documented as a v1 tradeoff; a future split into a lambda-sdk-only
subpackage can pare this back.
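The external marking described above would look roughly like this in tsup.config.ts; a sketch, with the entry path and format assumed rather than taken from the PR:

```typescript
import { defineConfig } from "tsup";

export default defineConfig({
  entry: ["src/cli.ts"],
  format: ["esm"],
  splitting: false,
  // Don't follow this package into the bundle: esbuild chokes on the AWS
  // SDK's .browser.js conditional re-exports. It is resolved from
  // node_modules at runtime instead (hence the move to `dependencies`).
  external: ["@hyperframes/aws-lambda", "@hyperframes/aws-lambda/sdk"],
});
```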

Two blockers + four important items from Vai's review:

  - `--memory` was parsed and recorded in the local state file but
    never forwarded to `sam deploy` as a parameter override. Worse,
    `progress.ts` then read the *recorded* value for cost math, so
    `--memory 5120` produced wrong cost numbers downstream. Thread
    `LambdaMemoryMb` through samDeploy's --parameter-overrides.

  - `--profile` was only consumed by deploy / destroy. render and
    progress fell back to the default credentials chain — a user
    with `--profile prod` would silently render against their
    default account (wrong-account billing footgun). Set
    `process.env.AWS_PROFILE` (and `AWS_REGION`) in the dispatcher
    before any subverb runs; the AWS SDK reads them natively, so
    render / progress / sites all benefit without each subverb
    threading the flag through the SDK call.

  - `--profile` + destroy now also reads `process.env.AWS_PROFILE`
    as a fallback (matching deploy's existing env fallback).

  - `--wait --json` printed both the start handle AND the final
    progress snapshot, producing two concatenated JSON blobs that
    `jq` rejected. Now emits a single document: handle (without
    --wait) OR final progress (with --wait).

  - Negative integers on `--width` / `--height` / `--chunk-size` /
    `--max-parallel-chunks` / `--memory` / `--concurrency` now fail
    loudly via a new `parsePositiveInt` wrapper instead of flowing
    into the SDK and producing opaque AWS validation errors mid-
    render.

  - `DEFAULT_STACK_NAME` is now centralized to the literal
    `"hyperframes-default"` and consumed from one place. Previously
    the value was assembled as `hyperframes-${"default"}` in three
    sites and hardcoded as `"hyperframes-default"` in a fourth.
    `requireStack`'s hint now matches the dispatcher's default.

The faked `SiteHandle` for `--site-id` keeps the documented
placeholder fields but also surfaces `bucketName` (from PR 909's
extended SiteHandle interface), matching the SDK contract.

All CLI unit tests + the full bundler build still pass.
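The single-document rule from the --wait --json fix reduces to a small emit helper; a sketch with hypothetical names and field shapes:

```typescript
// Hypothetical shapes for the two documents the CLI can print.
interface RenderHandle { renderId: string; executionArn: string; }
interface RenderProgress { renderId: string; status: string; costUsd: number; }

// Exactly one JSON document per invocation: the start handle without
// --wait, or the terminal progress snapshot with --wait, never both
// concatenated (which strict jq parsers reject as trailing garbage).
function emitJson(handle: RenderHandle, final: RenderProgress | null, wait: boolean): string {
  const doc = wait && final ? final : handle;
  return JSON.stringify(doc, null, 2);
}
```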

@miguel-heygen miguel-heygen left a comment


Re-approve after rebase onto main. Diff verified unchanged — --memory and --profile flags still forwarded end-to-end through SDK calls.


@vanceingalls vanceingalls left a comment


Re-approve after rebase onto main. Force-push dismissed my prior --approve (require_last_push_approval: true) — content unchanged, same commits replayed on the new base. All findings from the prior review's resolution still apply.

Re-review by Vai (post-rebase re-stamp)

The "Smoke: global install" CI step packs the CLI via `npm pack` and
installs it globally via `npm install -g <tgz>`. npm doesn't understand
the workspace: protocol, so a runtime `dependencies` entry of
`@hyperframes/aws-lambda: workspace:*` blows up with:

  npm error code EUNSUPPORTEDPROTOCOL
  npm error Unsupported URL Type "workspace:": workspace:*

(pnpm rewrites workspace:* on publish; npm pack doesn't.)

Three changes to unblock the smoke + keep the published CLI install
small for users who don't deploy to Lambda:

  - Move `@hyperframes/aws-lambda` from CLI's `dependencies` back to
    `devDependencies`. It's already external in tsup.config.ts; the
    bundle references it via runtime resolution only.

  - Convert the static `import { … } from "@hyperframes/aws-lambda/sdk"`
    in sites.ts / render.ts / progress.ts to `await import()` inside
    each function. tsup with `splitting: false` was inlining those
    static imports at the top of the bundle, which made Node eagerly
    resolve them at CLI startup (MODULE_NOT_FOUND before any lambda
    subcommand even runs). Dynamic imports stay dynamic in the bundle.

  - Add a friendly missing-module check in the lambda dispatcher.
    When a user runs `hyperframes lambda deploy / render / sites /
    progress / destroy` without aws-lambda installed, they now see:

      @hyperframes/aws-lambda is not installed.
      The `hyperframes lambda deploy` command needs it at runtime.
      Install it alongside the CLI:
        npm install -g @hyperframes/aws-lambda

Verified locally: pack + global install + `hyperframes init --example
blank` now succeeds end-to-end (was the same scenario the CI smoke job
runs).
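The friendly missing-module check can be sketched like this; the package name is real, but the wrapper and its split into a classifier are my own shape, not the PR's:

```typescript
// Classify an import failure: return the install hint for a missing
// module, null for anything else (so real errors still propagate).
function missingModuleHint(err: unknown): string | null {
  const code = (err as NodeJS.ErrnoException)?.code;
  if (code !== "ERR_MODULE_NOT_FOUND" && code !== "MODULE_NOT_FOUND") return null;
  return [
    "@hyperframes/aws-lambda is not installed.",
    "The `hyperframes lambda` commands need it at runtime. Install it alongside the CLI:",
    "  npm install -g @hyperframes/aws-lambda",
  ].join("\n");
}

// Dynamic import stays dynamic in the bundle, so Node only resolves the
// package when a `hyperframes lambda *` subcommand actually runs.
async function loadLambdaSdk(moduleId = "@hyperframes/aws-lambda/sdk") {
  try {
    return await import(moduleId);
  } catch (err) {
    const hint = missingModuleHint(err);
    if (hint) {
      console.error(hint);
      process.exit(1);
    }
    throw err;
  }
}
```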

@miguel-heygen miguel-heygen left a comment


Re-approve on 51556b5. Smoke fix (aws-lambda devDep + dynamic imports) reviewed earlier — clean approach.

@jrusso1020 jrusso1020 merged commit e90ad2d into main May 17, 2026
39 checks passed
@jrusso1020 jrusso1020 deleted the feat-lambda-cli branch May 17, 2026 17:06

@vanceingalls vanceingalls left a comment


Re-approve after rebase + #910 smoke fix.

#910 adds the CLI smoke fix on top: @hyperframes/aws-lambda moved to devDependencies, dispatcher dynamic-imports @hyperframes/aws-lambda/sdk (lambda.ts:150) with a friendly ERR_MODULE_NOT_FOUND -> npm-install handler at :152-158. npm pack / npm install now works because there's no workspace:* protocol in published dependencies. Clean fix.

#912/#913/#914/#915 are pure rebases on top — same commits replayed on the new base, content unchanged vs. the last approved round. Findings from the prior review's resolution still apply.

Re-review by Vai (post-smoke-fix re-stamp)
