feat(authoring): close VP2 with target-owned terse field presets #326
Move scalar field presets to the owning layer so the TS callback surface and the PSL scalar surface lower to byte-identical contracts.

- `target-postgres` now ships presets (`text`, `int`, `bigint`, `float`, `decimal`, `boolean`, `json`, `bytes`, `dateTime`, `createdAt`) using the same `pg/*` codec IDs and native types that the postgres adapter already emits for PSL scalars. This unblocks the terse `field.int()` / `field.boolean()` / `field.json()` / `field.float()` vocabulary the April milestone flagged as missing.
- `family-sql` drops `text`, `timestamp`, and `createdAt`: their `sql/*` codec IDs never aligned with any real target's PSL mapping and blocked parity. The generator-aligned presets (`uuid`, `ulid`, `nanoid`, `cuid2`, `ksuid`, `id.*`) stay: their `sql/char@1` codec is fixed by the ID-generator metadata override in both PSL and TS paths.
Prove VP2 parity end-to-end: author the same contract in PSL and in the TS callback surface using the new `target-postgres` field presets, and verify both paths emit byte-identical `contract.json`, `storageHash`, `profileHash`, and `executionHash`. The fixture covers the common scalar vocabulary (int with autoincrement, text with unique, int, boolean with literal default, float optional, json optional, DateTime with `now()` default) plus a cascading belongsTo relation. It slots into the existing `cli.emit-parity-fixtures` runner, so no test-harness changes are needed.
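The byte-identical check the runner performs can be sketched as follows. This is a minimal sketch, not the actual runner code: the helper name is illustrative, and the real `cli.emit-parity-fixtures` runner also compares `storageHash`, `profileHash`, and `executionHash` rather than deriving a single digest.

```typescript
import { createHash } from 'node:crypto';

// Hypothetical helper: compare the serialized contract emitted by the
// PSL path against the one emitted by the TS callback path, byte for
// byte, then return a content digest standing in for the stored hash.
function assertByteIdentical(fromPsl: string, fromTs: string): string {
  if (fromPsl !== fromTs) {
    throw new Error('contract.json diverged between PSL and TS surfaces');
  }
  return createHash('sha256').update(fromPsl, 'utf8').digest('hex');
}

// Illustrative payload only; a real contract.json is far larger.
const contract = JSON.stringify({ models: ['User', 'Post'] });
console.log(assertByteIdentical(contract, contract));
```

The point of hashing after the equality check is that the digest doubles as a stable fingerprint to store alongside the fixture.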
Add the formal terseness comparison the April milestone's VP2 task 2 asked for: count semantic lines (non-blank, non-comment) in the `callback-mode-scalars` fixture's `schema.prisma` and `contract.ts`, and assert the ratio stays under 2.1x. Also assert the ratio is strictly tighter than the structural `core-surface` baseline so regressions in the preset vocabulary (or in the composed helper pipeline) can't silently widen the window. Current numbers: `callback-mode-scalars` 16→33 = 2.06x, `core-surface` 24→59 = 2.46x, measurably under the 3–5x baseline the milestone doc called out.
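The semantic-line heuristic can be sketched as a standalone function. This is an illustrative version, not the test's actual helper; it also strips `/* ... */` block comments and trailing `//` comments, a hardening the in-repo heuristic does not yet apply.

```typescript
// Count lines that carry meaning: drop block comments, then drop
// per-line trailing // comments, then drop blank lines.
function countSemanticLines(source: string): number {
  const withoutBlockComments = source.replace(/\/\*[\s\S]*?\*\//g, '');
  return withoutBlockComments
    .split('\n')
    .map((line) => line.replace(/\/\/.*$/, '').trim())
    .filter((line) => line.length > 0).length;
}

const sample = [
  '/* header',
  '   comment */',
  'model User {',
  '  id Int // primary key',
  '',
  '}',
].join('\n');
console.log(countSemanticLines(sample)); // 3
```

Note the trailing-`//` strip is a heuristic: it would also truncate a line containing a literal `//` such as a URL, which is acceptable for schema and contract sources.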
🧹 Nitpick comments (1)
test/integration/test/authoring/callback-mode-terseness.test.ts (1)
14-18: Exclude block comments from semantic-line counting.

Current counting ignores `//` comments only; `/* ... */` comments will inflate the metric and can cause noisy terseness regressions.

Proposed hardening:

```diff
 function countSemanticLines(source: string): number {
-  return source
+  const withoutBlockComments = source.replace(/\/\*[\s\S]*?\*\//g, '');
+  return withoutBlockComments
     .split('\n')
     .map((line) => line.trim())
     .filter((line) => line.length > 0 && !line.startsWith('//')).length;
 }
```
📒 Files selected for processing (8)

- packages/2-sql/9-family/src/core/authoring-field-presets.ts
- packages/3-targets/3-targets/postgres/src/core/authoring.ts
- packages/3-targets/3-targets/postgres/src/core/descriptor-meta.ts
- test/integration/test/authoring/callback-mode-terseness.test.ts
- test/integration/test/authoring/parity/callback-mode-scalars/contract.ts
- test/integration/test/authoring/parity/callback-mode-scalars/expected.contract.json
- test/integration/test/authoring/parity/callback-mode-scalars/packs.ts
- test/integration/test/authoring/parity/callback-mode-scalars/schema.prisma
…on type
Closes the last gap in VP2's user story — "a representative contract
(multiple models, relations, at least one extension type)". The fixture
now declares an `Embedding` named type via `@pgvector.column(length: 1536)`
in PSL and via the `type.pgvector.vector(n)` callback-mode helper
contributed by the real `@prisma-next/extension-pgvector/pack`, and uses
it through `field.namedType(types.Embedding).optional()` on the `User`
model.
Terseness actually improves: 20 PSL → 38 TS = 1.90x (was 16 → 33 = 2.06x),
because a reusable named type amortizes better in TS than the 3-line
`types { ... }` block in PSL. Still well under the 2.1x test bound in
callback-mode-terseness.test.ts, with meaningful breathing room for
future drift.
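The two ratchet assertions described above can be sketched as a single check. The numbers come from this PR; the helper name and shape are illustrative, not the actual code in callback-mode-terseness.test.ts.

```typescript
// Assert the terseness ratchet: TS/PSL ratio must stay under the fixed
// 2.1x bound AND strictly tighter than the structural-DSL baseline.
function assertTersenessRatchet(
  pslLines: number,
  tsLines: number,
  baselineRatio: number,
): number {
  const ratio = tsLines / pslLines;
  if (ratio > 2.1) {
    throw new Error(`ratio ${ratio.toFixed(2)}x exceeds the 2.1x bound`);
  }
  if (ratio >= baselineRatio) {
    throw new Error(`ratio ${ratio.toFixed(2)}x is not tighter than the baseline`);
  }
  return ratio;
}

// Fixture numbers: 20 PSL -> 38 TS, structural core-surface baseline 2.46x.
console.log(assertTersenessRatchet(20, 38, 2.46).toFixed(2)); // 1.90
```

The double bound is what makes this a ratchet rather than a ceiling: the fixed 2.1x guards the milestone claim, while the baseline comparison guards against the preset vocabulary regressing relative to the structural DSL.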
No changes to any package source — the fix is entirely in the fixture.
wmadden left a comment:
Approving preemptively — this is solid work. Please address or respond to the inline comments before merging; no re-review needed.
Overall: Clean, well-scoped PR. The codec alignment is verified exhaustively, the terseness ratchet test is a good safeguard, and the family→target preset migration is the right architectural move.
Findings not attached inline
F01 — PR description inconsistency: The “Why this exists” section references Embedding = Bytes @pgvector.column(length: 1536), which doesn’t match the actual fixture PSL (Embedding = pgvector.Vector(1536)). The “Representative contract” section is correct. Minor confusion risk — consider clarifying, since the PR body is durable context for the merge commit.
F03 — Demo contract still uses verbose structural DSL: examples/prisma-next-demo/prisma/contract.ts is the most visible example in the repo and still uses field.generated(uuidv4()).id(), field.column(textColumn), etc. This is the natural place a stakeholder would look to evaluate the VP2 claim, and right now it contradicts the milestone’s deliverable. The file already imports the required packs — converting to callback mode with the terse presets would be a net reduction in lines/imports and a real-world proof point alongside the test fixture.
F05 — node:path in new test: The workspace rule recommends pathe, but all 33 existing integration test files use node:path. Consistent with practice; no action needed.
F07 — No runtime codec for pg/bytea@1: Pre-existing gap (PSL path has the same string). Not introduced by this PR.
Full review artifacts: wip/review-code/pr-326/ (spec.md, system-design-review.md, code-review.md, walkthrough.md).
```typescript
function countSemanticLines(source: string): number {
  return source
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line.length > 0 && !line.startsWith('//')).length;
}
```
F02 — countSemanticLines duplicated across test files
The JSDoc says this matches the heuristic in the contract-psl ts-psl-parity test. If that copy drifts, the two tests would measure terseness differently. Consider extracting to a shared test utility (e.g. alongside authoring-parity-test-helpers.ts) and importing from both locations.
```typescript
export default postgresPack as typeof postgresTargetDescriptorMeta & {
  readonly __codecTypes?: CodecTypes;
};
```
F04 — __codecTypes phantom bolted on via cast at the export site
Same pattern in all pack exports (sqlite, pgvector, mongo-target). Now that the PackRef brands are gone, this phantom is the only reason the cast exists. It could live on the descriptor meta declaration instead (e.g. as const & { readonly __codecTypes?: CodecTypes } in descriptor-meta.ts), making pack.ts a plain re-export with no cast.
This PR touched every one of these files and had the opportunity to clean this up.
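The suggested relocation could look like the following. This is a sketch with stand-in names: `CodecTypes` and the descriptor-meta shape are placeholders, not the real declarations in descriptor-meta.ts.

```typescript
// Stand-in for the real codec type map carried as a phantom.
type CodecTypes = { int: number; text: string };

// Phantom intersection attached where the meta is declared, so the pack
// module can re-export it without any cast at the export site.
const postgresTargetDescriptorMeta = {
  target: 'postgres',
} as const;

const postgresPack: typeof postgresTargetDescriptorMeta & {
  readonly __codecTypes?: CodecTypes;
} = postgresTargetDescriptorMeta;

console.log(postgresPack.target); // postgres
```

Because `__codecTypes` is optional and never assigned, it exists purely at the type level; moving the intersection onto the declaration keeps the runtime value untouched while letting pack.ts become a plain re-export.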
```typescript
    nativeType: 'int4',
  },
},
bigint: {
```
F06 — 4 of 10 presets lack parity fixture coverage (deferred)
bigint, decimal, bytes, and dateTime are defined here but not exercised by any parity fixture. If the PSL mapping or codec IDs change for these scalars, the misalignment would go undetected.
I understand this is a deliberate scope decision (April stop condition: terseness, not vocabulary coverage). Noting it for visibility — please track the follow-up.
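The binding at stake can be sketched as a preset table. The shape is inferred from the diff fragment above and is illustrative: `pg/int4@1` follows the `pg/bytea@1` pattern visible elsewhere in this review, but the exact IDs and native types for `int8`/`bytea` here are assumptions, not quotes from the source.

```typescript
// Hypothetical preset table: each terse field preset pins the exact
// (codecId, nativeType) pair the PSL path emits for the same scalar,
// so a drift in either member breaks byte-identical parity.
interface ScalarPreset {
  readonly codecId: string;
  readonly nativeType: string;
}

const presets: Record<string, ScalarPreset> = {
  int: { codecId: 'pg/int4@1', nativeType: 'int4' },
  bigint: { codecId: 'pg/int8@1', nativeType: 'int8' },
  bytes: { codecId: 'pg/bytea@1', nativeType: 'bytea' },
};

console.log(presets.bytes.codecId); // pg/bytea@1
```

A parity fixture exercising a preset asserts both members of its pair against the PSL-emitted contract; presets absent from every fixture (the four F06 flags) have no such assertion anywhere.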
closes TML-2243
Why this exists
VP2's user story in the April milestone is specific: "I author a representative contract (multiple models, relations, at least one extension type) in both PSL and the new TypeScript authoring surface. The TypeScript version is in the same ballpark of length as the PSL version." Until this branch the claim was unprovable end-to-end: every parity fixture under `test/integration/test/authoring/parity/` used the verbose structural DSL (`field.column(int4Column)`), and the only place that exercised the terse `field.text()` / `field.int()` style wired itself to mock packs. The real `@prisma-next/family-sql/pack` did ship `text`, `timestamp`, and `createdAt`, but bound them to `sql/*` codec IDs that never aligned with the postgres adapter's `pg/*` PSL mapping, so the emitted contracts diverged and a PSL/TS byte-identical parity test was impossible to write. `field.int()`, `field.boolean()`, `field.float()`, and `field.json()` weren't shipped at all.

The representative contract
A two-model shape with seven scalar types, a `pgvector` named extension type, and a cascading relation: small enough to read in one screen, large enough to tick every ingredient the VP2 user story calls out. This is the fixture the terseness assertion measures against.

PSL (`schema.prisma`, 20 semantic lines):

```prisma
types {
  Embedding = Bytes @pgvector.column(length: 1536)
}

model User {
  id        Int        @id @default(autoincrement())
  email     String     @unique
  age       Int
  isActive  Boolean    @default(true)
  score     Float?
  profile   Json?
  embedding Embedding?
  createdAt DateTime   @default(now())
}

model Post {
  id     Int    @id @default(autoincrement())
  userId Int
  title  String
  rating Float?
  user   User   @relation(fields: [userId], references: [id], onDelete: Cascade, onUpdate: Cascade)
}
```

TS callback mode (`contract.ts`, 38 semantic lines):
Between them, the two models exercise every new preset (`field.int()`, `field.text()`, `field.boolean()`, `field.float()`, `field.json()`, `field.createdAt()`), the extension-contributed type constructor (`type.pgvector.vector(n)`), and the chainable modifiers the old structural DSL already supported (`.unique()`, `.default(literal)`, `.optional()`, `.defaultSql(expr)`, `.id()`, `.sql({ fk })`): the shape of everyday CRUD code, not a pathological edge case. All four ingredients from the VP2 user story (multiple models, relation, extension type, ballpark ratio) are satisfied by this one fixture.

The numbers
20 PSL → 38 TS = 1.90x. The pre-existing `core-surface` parity fixture, which uses the structural DSL for similar scalars, is 2.46x. The `pgvector-named-type` fixture (structural DSL, with extension type but no scalars) is 3.43x. The April milestone doc put the previous baseline at 3–5x. The callback-mode fixture is now the tersest parity case in the repo; notably, adding the `pgvector` extension type improved the ratio (was 2.06x without it), because PSL spends three lines on a `types { ... }` block for a single named type while the TS equivalent amortizes better across the callback factory.

The ratio is locked as a test invariant in `callback-mode-terseness.test.ts`, which asserts two things: the ratio stays ≤ 2.1x PSL, and it is strictly tighter than the structural `core-surface` baseline. Either threshold failing is a loud test regression, not silent drift.

What shipped
- `@prisma-next/target-postgres/pack` now ships the scalar preset vocabulary: `text`, `int`, `bigint`, `float`, `decimal`, `boolean`, `json`, `bytes`, `dateTime`, `createdAt`. Each preset is bound to the exact `(codecId, nativeType)` pair the postgres adapter already emits for the corresponding PSL scalar, so PSL and TS produce byte-identical contracts as a consequence of codec alignment, not as a separately enforced invariant.
- `@prisma-next/family-sql/pack` drops the three scalar presets (`text`, `timestamp`, `createdAt`) whose `sql/*` codec IDs stamped the wrong thing on authored columns. No codecs were removed: `sql/text@1` and friends still exist and are still registered in codecs.ts (L611–L639); only the three authoring-time shortcuts that miswired them are gone. Generator-aligned presets (`uuid`, `ulid`, `nanoid`, `cuid2`, `ksuid`, `field.id.*`) stay in family-sql because `@prisma-next/ids` metadata pins their column descriptor to `sql/char@1` in both PSL and TS paths regardless of target.
- A new parity fixture proves both surfaces emit identically through the existing `cli.emit-parity-fixtures` runner (same `contract.json`, `storageHash`, `profileHash`, `executionHash` from both surfaces), including the `pgvector` extension type wired via the real `@prisma-next/extension-pgvector/pack`, not a mock. The terseness ratio is locked as described above.

Risks and non-goals
The removal from `@prisma-next/family-sql/pack` is a breaking change on paper: anyone importing `field.text()` / `field.timestamp()` / `field.createdAt()` from the real family pack would now fail to resolve. Repo grep confirms no real call site does; only mock-pack tests define their own inline copies. The full integration suite (366 tests, including all 13 pre-existing parity fixtures) passes with no `expected.contract.json` churn.

Follow-up I'm explicitly leaving for May: 4 of the 10 presets I shipped (`bigint`, `decimal`, `bytes`, `dateTime`) aren't exercised by any parity fixture, so they could drift silently. The April milestone's stop condition is explicit ("Then stop — authoring ergonomics, API naming, and syntactic sugar are May"), so hunting down vocabulary coverage is out of scope for VP2. Tracking as a follow-up.

Deliberately out of scope:

- SQLite or any second SQL target: it will ship its own preset vocabulary when it lands.
- Parameterized types (`varchar(n)`, `numeric(p, s)`, `time(p)`): deferred until a real caller needs them; VP2's stop condition is terseness, not vocabulary coverage.
- Re-pointing the postgres adapter's PSL scalar descriptors to family-level `sql/*` codec IDs: considered and rejected; 250+ files assert on `pg/*` in emitted contracts, for no corresponding parity win.