diff --git a/skills/hyperframes/SKILL.md b/skills/hyperframes/SKILL.md
index f68c52dac..19dd09a4f 100644
--- a/skills/hyperframes/SKILL.md
+++ b/skills/hyperframes/SKILL.md
@@ -9,18 +9,24 @@ HTML is the source of truth for video. A composition is an HTML file with `data-
## Approach
-### Discovery (exploratory requests only)
+### Brief (exploratory requests only)
-For open-ended requests ("make me a product launch video", "create something for our brand") where the user hasn't committed to a direction, understand intent before picking colors:
+For open-ended requests ("make me a product launch video", "create something for our brand"), gather what's missing from the prompt before building a design system. For specific requests ("add a title card", "fix the timing on scene 3"), skip this.
-- **Audience** — who watches this? Developers? Executives? General consumers?
-- **Platform** — where does it play? Social (15s), website hero, product demo, internal?
-- **Priority** — what matters most? Motion quality? Content accuracy? Brand fidelity? Speed?
-- **Variations** — does the user want options, or a single best shot?
+Extract what you can from the prompt itself — only ask about what's genuinely missing. Ask **one question at a time** and let each answer inform the next. Combine two questions in one message when they're naturally related, but never dump a list.
-For specific requests ("add a title card", "fix the timing on scene 3"), skip discovery.
+**Format each question as a lettered multiple choice** (A, B, C, D) so the user can respond with just a letter. Include 2-4 contextual options plus the option to give a custom answer. Tailor the options to the specific prompt — don't use generic choices.
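+
+For example, a hypothetical brief for a developer-tool launch might ask:
+
+> What should a viewer feel by the end? **A)** "I need to try this now" (urgency) **B)** "these people are serious engineers" (credibility) **C)** "this changes how I work" (breakthrough excitement) **D)** something else, tell me.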
-For exploratory requests, consider offering 2-3 variations that differ meaningfully — not just color swaps, but different pacing, energy levels, or structural approaches. One safe/expected, one ambitious. Don't mandate this — it's a tool available when appropriate.
+The questions (in rough priority order):
+
+- **Audience** — who watches this? (Developers, executives, consumers — changes type scale, density, easing)
+- **Emotion** — what should the viewer feel? (Maps directly to mood board clustering in the picker. Sharpen this based on the audience answer — "technical precision or breakthrough excitement?" is better than "what mood?")
+- **Brand assets** — do they have a logo, existing colors, fonts, website? (If yes, palettes and type pairings derive from them. If no, skip — full creative freedom.)
+- **Light or dark?** (Eliminates half the palette options. Can often combine with another question.)
+- **Surface** — where does this play? (Social → portrait/punchy, website hero → landscape/looping, presentation → widescreen/breathable, internal → information-dense)
+- **Key takeaway** — what's the one thing the viewer should remember? (Drives focal hierarchy in architecture previews)
+
+Most prompts already answer 2-3 of these — a typical brief is 2-3 questions, not 6. These answers seed the design picker in Step 1 — pass them when generating mood boards, palettes, and architectures.
### Step 1: Design system
@@ -476,6 +482,7 @@ Skip on small edits (fixing a color, adjusting one duration). Run on new composi
- **[references/techniques.md](references/techniques.md)** — 11 visual techniques with code patterns: SVG drawing, Canvas 2D, CSS 3D, kinetic type, Lottie, video compositing, typing effect, variable fonts, MotionPath, velocity transitions, audio-reactive. Read when planning techniques per beat.
- **[references/narration.md](references/narration.md)** — Pacing, tone, script structure, number pronunciation, opening line patterns. Read when the composition includes voiceover or TTS.
- **[references/design-picker.md](references/design-picker.md)** — Create a design.md via visual picker. Read when no design.md exists and the user wants to create one.
+- **[references/shader-backgrounds.md](references/shader-backgrounds.md)** — GLSL shader patterns for contextual scene backgrounds. Noise library, domain warping, pattern recipes by context (travel, medical, tech, industrial). Read when building shader backgrounds for compositions or the design picker.
- **[visual-styles.md](visual-styles.md)** — 8 named visual styles with hex palettes, GSAP easing signatures, and shader pairings. Read when user names a style or when generating design.md.
- **[house-style.md](house-style.md)** — Default motion, sizing, and color palettes when no design.md is specified.
- **[patterns.md](patterns.md)** — PiP, title cards, slide show patterns.
diff --git a/skills/hyperframes/house-style.md b/skills/hyperframes/house-style.md
index 892c2ebed..3994c02d2 100644
--- a/skills/hyperframes/house-style.md
+++ b/skills/hyperframes/house-style.md
@@ -32,9 +32,9 @@ If the content genuinely calls for one of these — centered layout for a solemn
## Background Layer
-Every scene needs visual depth — persistent decorative elements that stay visible while content animates in. Without these, scenes feel empty during entrance staggering.
+Decoratives add depth when a scene needs it — during entrance staggering, during holds, or when the content is sparse and the frame feels flat. They're a tool, not a requirement.
-Ideas (mix and match, 2-5 per scene):
+Ideas:
- Radial glows (accent-tinted, low opacity, breathing scale)
- Ghost text (theme words at 3-8% opacity, very large, slow drift)
@@ -42,9 +42,7 @@ Ideas (mix and match, 2-5 per scene):
- Grain/noise overlay, geometric shapes, grid patterns
- Thematic decoratives (orbit rings for space, vinyl grooves for music, grid lines for data)
-All decoratives should have slow ambient GSAP animation — breathing, drift, pulse. Static decoratives feel dead.
-
-**Decorative count vs motion count.** The "2-5 per scene" count refers to decorative _elements_. If a project's `design.md` says "single ambient motion per scene", it means one looping motion applied to these decoratives (a shared breath/drift/pulse) — not one element total. A scene with 4 decoratives sharing one breathing motion is correct; a scene with 1 decorative is under-dressed.
+Use as many or as few as the scene earns. A data-dense scene with 4 decoratives and shared breathing motion is rich. A single word on a black frame with no decoration is powerful. Both are valid — the question is whether the density matches the emotional beat.
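+
+A minimal sketch of one shared ambient motion, assuming GSAP is loaded and the scene's decoratives carry a hypothetical `.decor` class:
+
+```js
+// One breathing loop shared by every decorative in the scene.
+// The class name and timing values are illustrative, not a house rule.
+gsap.to('.decor', {
+  scale: 1.05,        // subtle breath, not a pulse
+  duration: 4,
+  ease: 'sine.inOut',
+  yoyo: true,
+  repeat: -1,
+});
+```
+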
## Motion
diff --git a/skills/hyperframes/references/design-picker.md b/skills/hyperframes/references/design-picker.md
index 261566623..66a888b77 100644
--- a/skills/hyperframes/references/design-picker.md
+++ b/skills/hyperframes/references/design-picker.md
@@ -12,17 +12,190 @@ Read these before generating options — they define the rules your options must
- [../visual-styles.md](../visual-styles.md)
- [beat-direction.md](beat-direction.md)
-## Building the picker
+## Creative process (do this BEFORE writing any data)
-1. Generate options **deeply contextual to the user's prompt**. Every category — not just architectures — must reflect the specific product, brand, audience, and mood. Generic options that could appear on any picker are a failure.
+
+Do NOT skip to the data format section. Do NOT start generating architecture HTML. Complete the creative thinking below FIRST. If you jump straight to code, every project will look the same.
+
- **Mood boards** — as many as the creative space warrants (4-8). Every board must tell a different STORY about the brand, not just reshuffle the same elements. Ask: "what are the genuinely different ways to position this product?" A cat food brand might be: playful chaos, premium positioning, comfort/cozy, social-native, flavor showcase, humor-led, sensory/appetizing. Each is a different narrative, not a different font on the same layout.
+### Step 1: What is unique about THIS prompt?
- **Architectures** — one per mood board minimum, each visually distinct. Use `{{prompt_headline}}` and `{{prompt_sub}}` tokens. If the user provided media assets, use them as background images (use `url(path)` without quotes — single quotes inside `style='...'` break the attribute).
+Before generating anything, answer these questions about the specific brief:
- **Palettes** (5-6) — named after the brand's world, not generic moods. The palette names and colors should feel like they belong to THIS specific product. Always mix dark + light + tinted. **Every palette must be visually distinct at swatch size.** If two palettes share the same background lightness AND a similar accent hue, cut one. Test: would a user see the difference in a 14px swatch chip? If not, they're duplicates.
+- **What does this subject LOOK like?** Not generically — specifically. A Santorini travel video has white walls, blue domes, caldera cliffs. A hantavirus PSA has clinical sterility, warning signage, laboratory imagery. A rocket promo has exhaust plumes, mission control grids, countdown typography. The visual vocabulary of the subject should drive the layouts.
+- **What does the target emotion FEEL like as a frame?** Wanderlust is longing — empty space the viewer wants to fill. Urgency is compression — information crowding the viewer. Awe is scale — one element so large it overwhelms.
+- **What does the surface context demand?** A lobby screen is ambient and glanced-at. A YouTube pre-roll is competing for attention. A social story is vertical and fast. The same content looks different in each context.
+- **What does every other video for this subject look like?** That's what you must NOT produce for at least 3 of your 9 boards. If every travel video is a hero image with text overlay, your rare boards shouldn't be hero images with text overlay.
- **Type pairings** (5-6) — **RUN the font discovery script from typography.md BEFORE generating pairings.** This is not optional. Download Google Fonts metadata, run the script, and pick from its output. You will otherwise reach for the same 8 fonts every time (Bricolage Grotesque, Instrument Serif, Fraunces, Archivo Black, DM Serif Display, Space Grotesk, Fredoka) — that's your training data default, not a contextual choice. Match the brand's energy and audience. Cross-category per typography.md (never two sans-serifs).
+### Step 2: Verbalized sampling
+
+Generate 9 video concepts for the brief using chain-of-thought reasoning.
+
+First, produce a single `reasoning` field — a string containing your step-by-step thought process through five genuinely different paths:
+
+1. **From the subject** — what is the visual world of this specific thing?
+2. **From the emotion** — what does the target feeling look like as a layout?
+3. **From the audience** — what does this viewer expect? What would surprise them?
+4. **From the anti-pattern** — what does every competitor do? Do the opposite.
+5. **From an unusual format** — a letter, a recipe, a countdown, a newspaper, a Q&A, a map, a poem. What non-video format could this content take?
+
+Then return the concepts as JSON:
+
+```json
+{
+ "reasoning": "Step-by-step thought process through all 5 paths, generating 9 concepts from them...",
+ "responses": [
+ {
+ "text": "Video concept with layout wireframe description and silhouette of major elements",
+ "probability": 0.35
+ },
+ {
+ "text": "...",
+ "probability": 0.07
+ }
+ ]
+}
+```
+
+9 entries in `responses`. Each `text` describes the video concept including layout wireframe and silhouette of major elements. No extra explanation.
+
+Each `probability` is the estimated probability (0.0–1.0) of this response given the input prompt, relative to the full distribution.
+
+**Sample from the tails:** at least 3 of the 9 concepts MUST have probability < 0.10. These are the rare, unexpected directions that prevent mode collapse. If all 9 score above 0.10, you're reproducing the median — start over.
+
+**Anti-mode-collapse check:** verify no two concepts share the same layout silhouette. Draw the bounding boxes of major elements mentally — if two produce the same shape, replace one.
+
+### Step 3: Generate the data
+
+Now — and only now — expand the 9 concepts into architectures, palettes, type pairings, and mood boards.
+
+**Mood boards** (9) — one per concept. Each tells a different STORY, not a different color scheme on the same layout.
+
+**Architectures** (9) — one per mood board, each a different structural shape. See the structural diversity section below for layout shapes.
+
+**Palettes** (6-9) — named after the subject's world. A Santorini picker has "Aegean Blue", "Caldera Pink", "Oia Cream" — not "Dark Premium" or "Warm Editorial." Mix light + dark + tinted.
+
+**Type pairings** (6-9) — **RUN the font discovery script from typography.md FIRST.** Match the subject's energy.
+
+**Backgrounds** (one per architecture) — each mood board can have an animated 3D gradient background rendered with Three.js. The picker includes built-in presets PLUS 3 prompt-contextual options you generate.
+
+#### Built-in presets
+
+These are baked into the picker template. The user selects one and tweaks its controls:
+
+| Preset | Geometry | Shader | Feel |
+| ------------ | ---------- | -------- | ----------------------------------------- |
+| None | — | — | Flat color, no animation |
+| Halo | plane | defaults | Warm flowing gradient, organic undulation |
+| Mint | waterPlane | defaults | Cool water surface, gentle waves |
+| Cosmic | sphere | cosmic | Holographic nebula, iridescent shimmer |
+| Nighty Night | waterPlane | defaults | Dark moody waves, deep atmosphere |
+| Sunset | sphere | defaults | Warm sphere glow, gentle color blend |
+| Cotton Candy | waterPlane | defaults | Pastel soft waves, light and airy |
+| Glass | waterPlane | glass | Refractive surface, specular highlights |
+| Blob | sphere | defaults | Wiggly organic shape, high displacement |
+
+Each preset has contextual controls (speed, wave height, wobble, etc.) that the user adjusts live.
+
+#### Prompt-contextual backgrounds (generate 3)
+
+In addition to the built-in presets, generate **3 entirely new background styles** — new fragment shaders, new configurations, new control sets. The built-in presets (`defaults`, `cosmic`, `glass`) are templates showing how the Three.js pipeline works. Your 3 backgrounds should be original visual effects that match the prompt's world.
+
+##### How the pipeline works
+
+Each background is a Three.js scene with:
+
+1. **Geometry** — `sphere` (SphereGeometry), `water` or `plane` (PlaneGeometry with subdivisions)
+2. **Vertex shader** — shared across all backgrounds. Applies 3D Perlin noise displacement to vertices. Controlled by `uDensity` (noise scale), `uSpeed` (animation rate), `uStrength` (displacement magnitude).
+3. **Fragment shader** — this is where you create the visual effect. Receives `vPos` (displaced vertex position), `vNormal`, `vViewPos`, `uColor1/2/3` (3 palette colors), `uTime`, `uSpeed`. Must output `gl_FragColor`.
+4. **Camera** — positioned via spherical coordinates (`camAz`, `camPol`, `camDist`). Spheres always use distance 14 with `zoom` controlling magnification. Planes/water use `camDist` directly.
+5. **Controls** — 2-3 sliders the user adjusts live. Each maps to a uniform or mesh property.
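+
+In template terms, the wiring is roughly this (a sketch, not the picker's actual code; `VERT`, `FRAGS`, `colors`, and the scene/camera globals are assumed names):
+
+```js
+// Build one background scene from a preset object (custom preset format below).
+const geo = preset.type === 'sphere'
+  ? new THREE.SphereGeometry(5, 128, 128)
+  : new THREE.PlaneGeometry(20, 20, 192, 192); // subdivided so displacement has vertices to move
+
+const mat = new THREE.ShaderMaterial({
+  vertexShader: VERT, // shared Perlin-displacement vertex shader
+  fragmentShader: preset.fragmentShader || FRAGS[preset.shader],
+  uniforms: {
+    uDensity:  { value: preset.density },
+    uSpeed:    { value: preset.speed },
+    uStrength: { value: preset.strength },
+    uColor1:   { value: new THREE.Color(colors[0]) },
+    uColor2:   { value: new THREE.Color(colors[1]) },
+    uColor3:   { value: new THREE.Color(colors[2]) },
+    uTime:     { value: 0 },
+  },
+});
+
+const mesh = new THREE.Mesh(geo, mat);
+mesh.rotation.set(...[preset.rotX, preset.rotY, preset.rotZ].map(d => THREE.MathUtils.degToRad(d)));
+mesh.position.set(preset.posX, preset.posY, preset.posZ);
+scene.add(mesh);
+
+// Camera via spherical coordinates; spheres pin distance at 14 and rely on zoom.
+const dist = preset.type === 'sphere' ? 14 : preset.camDist;
+camera.position.setFromSphericalCoords(
+  dist,
+  THREE.MathUtils.degToRad(preset.camPol),
+  THREE.MathUtils.degToRad(preset.camAz)
+);
+camera.lookAt(0, 0, 0);
+camera.zoom = preset.zoom;
+camera.updateProjectionMatrix();
+```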
+
+##### Writing a custom fragment shader
+
+The fragment shader receives these varyings/uniforms:
+
+```glsl
+varying vec3 vPos; // displaced vertex position in model space
+varying vec3 vNormal; // surface normal (for lighting)
+varying vec3 vViewPos; // view-space position (for fresnel/specular)
+uniform vec3 uColor1, uColor2, uColor3; // 3 palette colors
+uniform float uTime, uSpeed; // animation time + speed
+```
+
+Use `vPos` for spatial color mapping — the gradient pattern comes from mapping colors to vertex positions. Use `vNormal` and `vViewPos` for lighting (diffuse, specular, fresnel). Add custom visual effects: interference patterns, noise-based distortion, view-dependent color shifts, rim glow, etc.
+
+Example — a "coral reef" fragment shader for a tropical video:
+
+```glsl
+void main() {
+ vec3 base = mix(mix(uColor1, uColor2, smoothstep(-3.0, 3.0, vPos.x)),
+ uColor3, smoothstep(-2.0, 2.0, vPos.z));
+ vec3 n = normalize(vNormal);
+ vec3 v = normalize(-vViewPos);
+ // Subsurface scattering — warm glow through the surface
+ float sss = pow(max(0.0, dot(-n, v)), 2.0) * 0.3;
+ vec3 sssColor = mix(uColor2, uColor1, 0.5);
+ // Caustic ripples on the surface
+ float t = uTime * uSpeed;
+ float caustic = abs(sin(vPos.x * 15.0 + t * 2.0) * cos(vPos.y * 12.0 + t * 1.5)) * 0.15;
+ // Lighting
+ vec3 l = normalize(vec3(1, 1, 0.5));
+ float diff = max(dot(n, l), 0.0) * 0.25;
+ float spec = pow(max(dot(n, normalize(l + v)), 0.0), 64.0) * 0.2;
+ float fresnel = pow(1.0 - max(dot(n, v), 0.0), 4.0);
+ vec3 col = base * (0.65 + diff) + vec3(spec) + sssColor * sss + vec3(caustic);
+ col += mix(base, vec3(1), 0.3) * fresnel * 0.15;
+ gl_FragColor = vec4(col, 1.0);
+}
+```
+
+##### Custom preset format
+
+```json
+{
+ "name": "Coral Reef",
+ "desc": "Warm underwater glow with caustic ripples",
+ "type": "water",
+ "shader": "coral_reef",
+ "fragmentShader": "void main() { ... }",
+ "density": 1.4,
+ "speed": 0.15,
+ "strength": 2.8,
+ "rotX": 40,
+ "rotY": 0,
+ "rotZ": 10,
+ "camDist": 4.0,
+ "camAz": 175,
+ "camPol": 75,
+ "zoom": 1,
+ "posX": 0,
+ "posY": 0.5,
+ "posZ": 0,
+ "controls": [
+ { "label": "Speed", "key": "speed", "min": 0.05, "max": 0.5, "step": 0.05 },
+ { "label": "Wave height", "key": "strength", "min": 0.5, "max": 5.0, "step": 0.2 },
+ { "label": "Caustic intensity", "key": "density", "min": 0.5, "max": 3.0, "step": 0.1 }
+ ]
+}
+```
+
+When `fragmentShader` is present, the picker uses it directly instead of looking up `shader` in the built-in library. The `shader` field is still set for the design.md export (agents use it as a label).
+
+##### Guidelines
+
+- **Write new GLSL, don't just reconfigure existing presets.** The visual effect should be unique to the prompt. A PSA video gets a "biohazard pulse" shader with expanding rings and scan lines. A food video gets an "oil shimmer" shader with heat-distorted refraction. A music video gets a "frequency" shader with audio-reactive bands.
+- **Match the subject's physical world.** The shader should feel like looking at the subject's environment at a microscopic or atmospheric level.
+- **Pick geometry by metaphor.** `sphere` = contained, focused. `water` = expansive, environmental. `plane` = flat, graphic.
+- **Name after the prompt's world**, not the technique.
+- **2-3 controls per preset** that expose the parameters most relevant to the visual effect.
+
+Camera rules:
+
+- **sphere**: `camDist` ignored (always 14). `zoom` controls magnification (1 = full sphere, 15+ = surface detail). Higher zoom needs lower `strength`.
+- **water/plane**: `camDist` controls distance. `zoom` always 1. `posY` shifts the mesh vertically.
+
+Append the 3 custom backgrounds to the `BG_PRESETS` array via `BG_PRESETS.push(...)` after the built-in presets. If the preset has a `fragmentShader` field, register it in the `FRAGS` object so the Three.js module can use it.
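+
+A sketch of that wiring (the preset variable names are hypothetical; `BG_PRESETS` and `FRAGS` are the template's globals):
+
+```js
+// The three generated presets follow the custom preset format above.
+const custom = [coralReef, biohazardPulse, oilShimmer];
+BG_PRESETS.push(...custom);
+
+// Register each preset's GLSL under its `shader` key so the Three.js module finds it.
+for (const p of custom) {
+  if (p.fragmentShader) FRAGS[p.shader] = p.fragmentShader;
+}
+```
+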
2. `mkdir -p .hyperframes` then copy [../templates/design-picker.html](../templates/design-picker.html) to `.hyperframes/pick-design.html`.
3. Replace these placeholders using Python (don't hand-escape quotes in sed):
@@ -34,15 +207,27 @@ Read these before generating options — they define the rules your options must
### Architecture data format
-Each architecture object must include a `preview_html` field — the HTML that renders in the preview panel. Use token placeholders that the template replaces at runtime: `{{bg}}`, `{{fg}}`, `{{ac}}`, `{{mt}}`, `{{hf}}`, `{{hw}}`, `{{bf}}`, `{{bw}}`, `{{cr}}` (corner radius), `{{pad}}`, `{{gap}}`, `{{shadow}}`, `{{g}}` (grid line color), `{{fg3}}`/`{{fg6}}`/`{{fg8}}`/`{{fg15}}` (fg at opacity), `{{ac3}}`/`{{ac5}}`/`{{ac25}}` (accent at opacity).
+Each architecture object must include a `preview_frames` field — an array of 4 HTML strings, one per beat (hook, proof, action, close). Each frame renders in the mood board card carousel and in the fine-tune scene grid. The template falls back to `[preview_html]` if `preview_frames` is missing, but this produces a single-frame card with no carousel — always provide 4 frames.
+
+Use token placeholders that the template replaces at runtime: `{{bg}}`, `{{fg}}`, `{{ac}}`, `{{mt}}`, `{{hf}}`, `{{hw}}`, `{{bf}}`, `{{bw}}`, `{{cr}}` (corner radius), `{{pad}}`, `{{gap}}`, `{{shadow}}`, `{{g}}` (grid line color), `{{fg3}}`/`{{fg6}}`/`{{fg8}}`/`{{fg15}}` (fg at opacity), `{{ac3}}`/`{{ac5}}`/`{{ac25}}` (accent at opacity).
-**Every token must be used.** Apply `{{cr}}` to all cards, buttons, and containers. Apply `{{shadow}}` to elevated elements (cards, buttons, code blocks). Apply `{{pad}}` and `{{gap}}` to control spacing. If a token isn't used in the preview_html, that option will have no visible effect.
+**Use tokens everywhere — never hardcode colors, fonts, spacing, or radii.** The preview updates live as the user changes options in Phase 2. If you write `padding: 80px` instead of `padding: {{pad}}`, changing the density option does nothing. If you write `color: #CC2200` instead of `color: {{ac}}`, changing the palette does nothing. Every color, font-family, font-weight, padding, gap, border-radius, and shadow MUST use tokens. The only exception is structural values like `width: 1920px` or `flex: 1`.
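+
+A sketch of a single hook frame (content invented for illustration; every visual value is a token):
+
+```js
+// One entry of a preview_frames array (hook beat). Style attributes use
+// single quotes, so avoid single quotes inside them.
+const hookFrame = `
+<div style='width:1920px;height:1080px;background:{{bg}};color:{{fg}};padding:{{pad}};font-family:{{bf}};position:relative;overflow:hidden'>
+  <div style='font-size:16px;letter-spacing:0.2em;color:{{fg6}}'>LAUNCH WEEK</div>
+  <div style='font-family:{{hf}};font-weight:{{hw}};font-size:120px;line-height:1.05'>Ship it to everyone</div>
+  <div style='width:240px;height:6px;background:{{ac}};border-radius:{{cr}};margin-top:{{gap}};box-shadow:{{shadow}}'></div>
+</div>`;
+```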
-**Density matters.** Each architecture preview must include 15+ distinct elements to give the user a real sense of the layout. Include: headline, subhead, body paragraph, label/overline, stat with number, secondary stat, quote/testimonial, attribution, card with title+body, second card (different treatment), code/command block, primary button, secondary button, list or tags, accent divider/rule, and a data element (table row, progress bar, or chart).
+**Density matches the concept.** The preview should look like a real frame from the actual video — not a UI component showcase. A single-stat direction has 2-3 elements. A data-grid direction has 12+. Match the architecture's intent.
-Optionally include `components` (component styling rules) and `dos` (do's and don'ts) as strings — these appear in the generated design.md.
+**Build preview frames like real compositions.** Read [video-composition.md](video-composition.md), [../house-style.md](../house-style.md), and [motion-principles.md](motion-principles.md) when generating preview frame HTML. The frames are static but should look like they were paused mid-animation:
-**Layout constraint:** All preview HTML must use percentage widths or `max-width: 100%`. Use `flex-wrap: wrap` on all flex rows. Absolute-positioned decoratives must stay within a parent with `overflow: hidden`.
+- Apply accent color to exactly the focal element per frame
+- Use video-scale typography (80-140px heroes, 24-36px body, 14-20px labels)
+- Respect the density philosophy — high-energy frames earn more elements, contemplative frames earn fewer
+- Each of the 4 frames should represent a distinct beat: hook (grab attention), proof (deliver the message), action (what to do), close (final frame)
+- Use real content from the prompt, not "Headline Goes Here" placeholders
+
+The user judges the entire video based on these 4 frames. If they look generic, the user assumes the video will too.
+
+Optionally include `components` and `dos` as strings — these appear in the generated design.md.
+
+**Layout constraint:** Each frame is 1920×1080px. Content can use flex, grid, absolute positioning — any CSS that works in inline styles. Avoid `max-width: 100%` on frame content (the frame IS 1920px).
**Security:** Architecture `preview_html` must not contain `<script>` tags.