diff --git a/skills/competitor-analysis/.gitignore b/skills/competitor-analysis/.gitignore
new file mode 100644
index 0000000..d4fcb2d
--- /dev/null
+++ b/skills/competitor-analysis/.gitignore
@@ -0,0 +1,2 @@
+profiles/*.json
+!profiles/example.json
diff --git a/skills/competitor-analysis/SKILL.md b/skills/competitor-analysis/SKILL.md
new file mode 100644
index 0000000..940f545
--- /dev/null
+++ b/skills/competitor-analysis/SKILL.md
@@ -0,0 +1,403 @@
+---
+name: competitor-analysis
+description: |
+ Competitor research and intelligence skill. Takes a user's company (with optional
+ seed competitor URLs), auto-discovers additional competitors via Browserbase Search API,
+ deeply researches each using a 4-lane pattern (marketing surface, external signal,
+ public benchmarks, strategic diff vs the user's company), and compiles the results
+ into an HTML report with four views: overview, per-competitor deep dive, side-by-side
+ feature/pricing matrix, and a chronological mentions feed (benchmarks, comparison
+ pages, news, Reddit, HN, LinkedIn posts, YouTube videos, reviews).
+ Use when the user wants to: (1) analyze competitors, (2) build a competitive matrix,
+ (3) extract competitor pricing / features, (4) find comparison pages and online
+ mentions of competitors, (5) surface public benchmarks. Triggers: "competitor analysis",
+ "analyze competitors", "competitive intel", "competitor research", "competitor pricing",
+ "feature comparison", "price comparison", "find comparisons", "who's comparing us",
+ "competitor mentions", "competitor benchmarks".
+license: MIT
+compatibility: Requires bb CLI (@browserbasehq/cli) and BROWSERBASE_API_KEY env var
+allowed-tools: Bash Agent AskUserQuestion
+metadata:
+ author: browserbase
+ version: "0.2.0"
+---
+
+# Competitor Analysis
+
+Analyze a user's competitors. Uses Browserbase Search API for discovery and a 4-lane Plan→Research→Synthesize pattern for enrichment — outputting an HTML report with overview, per-competitor deep dives, a side-by-side feature/pricing matrix, and a chronological mentions feed.
+
+**Required**: `BROWSERBASE_API_KEY` env var and `bb` CLI installed.
+
+**First-run setup**: On the first run you'll be prompted to approve `bb fetch`, `bb search`, `cat`, `mkdir`, `sed`, etc. Select **"Yes, and don't ask again for: bb fetch:\*"** (or equivalent) for each. To permanently approve, add these to your `~/.claude/settings.json` under `permissions.allow`:
+```json
+"Bash(bb:*)", "Bash(bunx:*)", "Bash(bun:*)", "Bash(node:*)",
+"Bash(cat:*)", "Bash(mkdir:*)", "Bash(sed:*)", "Bash(head:*)", "Bash(tr:*)", "Bash(rm:*)"
+```
+
+**Path rules**: Always use full literal paths in Bash — NOT `~` or `$HOME`. Resolve the home directory once and use it everywhere. When building subagent prompts, replace `{SKILL_DIR}` with the full literal path.
+
+**Output directory**: All output goes to `~/Desktop/{company_slug}_competitors_{YYYY-MM-DD}/`. This directory contains one `.md` file per competitor plus the generated HTML views and CSV.
+
+**CRITICAL — Tool restrictions (applies to main agent AND all subagents)**:
+- All web searches: use `bb search`. NEVER WebSearch.
+- All page fetches: use `bb fetch --allow-redirects`. NEVER WebFetch. Pipe through `sed ... | tr -s ' \n'` to extract text. 1 MB response limit — fall back to `bb browse` for JS-heavy pages.
+- All research output: subagents write **one markdown file per competitor** to `{OUTPUT_DIR}/{competitor-slug}.md` using bash heredoc. NEVER use the Write tool or `python3 -c`. See `references/example-research.md` for the file format.
+- Report compilation: use `node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --user-company "{user_company}" --open` — generates `index.html`, `competitors/*.html`, `matrix.html`, `mentions.html`, `results.csv` in one step and opens overview.
+- URL deduplication: `node {SKILL_DIR}/scripts/list_urls.mjs /tmp --prefix competitor`.
+- **Subagents must use ONLY the Bash tool.**
+- **Main agent NEVER reads raw discovery JSON batch files.**
+
+**CRITICAL — Minimize permission prompts**:
+- Subagents MUST batch ALL file writes into a SINGLE Bash call using chained heredocs.
+- Batch ALL searches and ALL fetches into single Bash calls via `&&` chaining.
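+
+For example, one lane's fetches chained into a single Bash call (the URLs, output paths, and the tag-stripping `sed` body are illustrative):
+
+```bash
+# Two fetches + text extraction in one approved Bash call instead of two separate prompts.
+bb fetch --allow-redirects "https://rivalco.com" | sed 's/<[^>]*>//g' | tr -s ' \n' > /tmp/rivalco_home.txt && \
+bb fetch --allow-redirects "https://rivalco.com/pricing" | sed 's/<[^>]*>//g' | tr -s ' \n' > /tmp/rivalco_pricing.txt
+```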
+
+## Pipeline Overview
+
+Follow these 8 steps in order. Do not skip or reorder.
+
+1. **User Company Research** — Deeply understand the user's company, produce `precise_category` + `category_include_keywords` + `exclusion_list`
+2. **Depth Mode + Seed Input** — Choose depth, accept optional seed competitor URLs
+3. **Discovery (3 parallel waves)** — Wave A (alternatives), Wave B (precise category), Wave C (comparison-page graph via "X vs Y" title parsing)
+4. **Gate** — `scripts/gate_candidates.mjs` bb-fetches each candidate's hero text and drops wrong-category URLs
+5. **Confirm enrichment set with the user** — Present PASS / UNKNOWN / rejected-brand-matches via `AskUserQuestion`. User ticks the real ones, adds any the discovery missed. Skipping this step is wasteful because enrichment is expensive (25 subagents × depth budget) and the gate is imperfect (JS-heavy homepages, Cloudflare challenges, semantic-variant taglines)
+6. **Deep Enrichment (5 subagents per competitor in deep/deeper modes)** — Marketing, Discussion, Social, News, Technical — each lane a separate subagent writing to `partials/`; then `merge_partials.mjs` consolidates. In deep/deeper modes, **Step 5d** adds a 6th Battle Card synthesis lane AFTER Step 5c fact-check completes — produces per-competitor Landmines / Objection Handlers / Talk Tracks grounded in cited evidence.
+7. **Screenshots** — `capture_screenshots.mjs` via the `browse` CLI captures a 1280×800 homepage hero per competitor
+8. **HTML Report** — Overview + per-competitor (with embedded hero screenshot + Battle Card card) + matrix + mentions views
+
+---
+
+## Step 0: Setup Output Directory
+
+```bash
+OUTPUT_DIR=~/Desktop/{company_slug}_competitors_{YYYY-MM-DD}
+mkdir -p "$OUTPUT_DIR"
+```
+
+Replace `{company_slug}` with the user's company name (lowercase, hyphenated) and `{YYYY-MM-DD}` with today's date. Pass `{OUTPUT_DIR}` as a full literal path to every subagent.
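+
+A quick slug sketch (assumes the confirmed company name, here "Acme Corp", is already known):
+
+```bash
+# "Acme Corp" → "acme-corp"
+echo "Acme Corp" | tr '[:upper:]' '[:lower:]' | tr -s ' ' '-'
+```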
+
+Clean up discovery batch files from prior runs:
+```bash
+rm -f /tmp/competitor_discovery_batch_*.json
+```
+
+## Step 1: User Company Research
+
+This step sets the baseline for what "competitor" means AND produces the verified data the Step 5b matrix will use for the `userCompany` row.
+
+**Rule**: The user's company gets the same 5-lane research depth as competitors. Do NOT fill `userCompany` in matrix.json from memory — it will ship false claims to the user's own team. On the Browserbase run 2026-04-23, skipping this step produced a matrix that claimed Browserbase had a "published uptime SLA" (there is no numeric public SLA — only a status page) and marked Stagehand's MIT-licensed OSS SDK as `open-source: false` (the repo is github.com/browserbase/stagehand, LICENSE confirmed MIT). Both errors would have surfaced in the "Where you're winning" card as fabricated moats.
+
+Process:
+
+1. Ask the user for their company name or URL.
+
+2. **Check for an existing profile** at `{SKILL_DIR}/profiles/{company-slug}.json`. If it exists, load it and confirm with the user: "I have your profile from {researched_at}. Still accurate?" — if yes, skip to Step 2 BUT still run the partial-lane enrichment below so matrix synthesis has fresh feature evidence.
+ The profile format is shared with `company-research` (same shape). If a user already has a profile saved under `company-research/profiles/`, you may copy it into this skill's profiles directory rather than re-researching.
+
+3. **Run the full 5-lane enrichment on the user's company** — identical to the competitor pattern in Step 5. For each lane, spawn a Bash-only subagent that writes to `{OUTPUT_DIR}/partials/{user-slug}.{lane}.md`:
+ - **marketing** — tagline, positioning, pricing tiers, features, integrations, open-source components (SDK repos + licenses), regions offered, compliance (SOC 2 / HIPAA / trust portal URL)
+ - **technical** — CDP / Playwright / Puppeteer / Selenium driver support (with docs URLs), SDK languages, MCP server URL, stealth product name + tier, session replay + video recording specifics, published uptime SLA (actual %, not status page), third-party benchmarks
+ - **discussion**, **social**, **news** — optional in quick mode, recommended in deep+
+ See `references/research-patterns.md` → "Self-Research" for sub-questions. Each finding MUST cite a URL.
+
+4. Run `merge_partials.mjs` on the user's partials too — produces `{OUTPUT_DIR}/{user-slug}.md`, the canonical source Step 5b reads from for `userCompany` flags.
+
+5. Synthesize into a profile: Company, Product, Existing Customers, Competitors (seed list), Use Cases, **precise_category**, **category_include_keywords**, **exclusion_list**. Do NOT include ICP — this skill doesn't need it.
+ - `precise_category`: one sentence describing the category. e.g., "cloud headless browser infrastructure for AI agents with CDP". Avoid vague words like "tools" / "platform".
+ - `category_include_keywords`: 8-15 phrases a direct competitor's marketing would likely contain (hero or title). Include semantic variants.
+ - `exclusion_list`: phrases that indicate a *different* category — used by the gate to reject false positives (e.g. `antidetect browser`, `scraping api`, `screenshot api`, `residential proxy`).
+ See `references/research-patterns.md` → "Synthesis Output" for the exact format and Browserbase as a worked example.
+
+6. Present the profile + the user-company `.md` to the user for confirmation. Do not proceed until confirmed.
+
+7. **Save the confirmed profile** to `{SKILL_DIR}/profiles/{company-slug}.json`.
+
+## Step 2: Depth Mode + Seed Input
+
+Ask clarifying questions via `AskUserQuestion` with checkboxes:
+- **Known competitors?** Text area for URLs/names (optional — discovery will find more).
+- **Depth mode?**
+ - `quick` — marketing surface only, many competitors, ~2-3 tool calls each
+ - `deep` — + external signal (mentions, reviews, news), ~5-8 tool calls each
+ - `deeper` — + public benchmarks + strategic diff vs user's company, ~10-15 tool calls each
+- **Target count?** Rough number of competitors to research (e.g., 10 / 20 / 50).
+
+This is the ONLY user interaction. After this, execute silently until the report is ready.
+
+| Mode | Research per competitor | Best for |
+|------|--------------------------|----------|
+| `quick` | Lane 1 only (homepage + pricing) | Scanning ~30-50 competitors fast |
+| `deep` | Lanes 1+2 | ~15-25 competitors with external signal |
+| `deeper` | All 4 lanes (+ benchmarks + strategic diff) | ~5-15 competitors with full intel |
+
+## Step 3: Discovery (3 parallel waves)
+
+**Formula**: `ceil(target_count / 20)` queries per wave. Over-discover ~3x because the gate drops ~40-60%.
+
+Evaluation on Browserbase shows all three waves are additive — skip any and you lose real competitors:
+
+**Wave A — Generic alternatives** (broad; heavy aggregator noise, filtered out later)
+- `"alternatives to {user_company}"`
+- `"{user_company} competitors"`
+
+**Wave B — Precise category** (uses `precise_category` from the profile)
+- `"{precise_category}"` verbatim
+- 2-3 queries composed from the most distinctive tokens (e.g. `"cloud browser for ai agents"`, `"browser infrastructure API"`)
+
+**Wave C — Comparison-page graph** (highest precision)
+- `"{user_company} vs"`
+- `"{seed1} vs"`, `"{seed2} vs"`, `"{seed3} vs"` (seeds from the profile's `competitors` list)
+- After the searches, run `scripts/extract_vs_names.mjs` to parse `"X vs Y"` patterns from result titles — this uniquely surfaces competitors that don't appear as URL hits.
+
+**Process**:
+1. Issue **3 parallel `bb search` Bash calls** (one per wave) in a SINGLE message — NOT subagents. Each Bash call chains its 2-4 queries with `&&`. See `references/workflow.md` → "Discovery — parallel Bash, not subagents" for the exact recipe; a minimal one-wave sketch also follows this list. Subagents are too heavy for a workload of 6-12 `bb search` calls.
+2. After all waves complete:
+ ```bash
+ node {SKILL_DIR}/scripts/list_urls.mjs /tmp --prefix competitor > /tmp/competitor_urls.txt
+ node {SKILL_DIR}/scripts/extract_vs_names.mjs /tmp --prefix competitor \
+ --seed "{user_company},{seed1},{seed2},{seed3}" \
+ > /tmp/competitor_vs_names.jsonl
+ ```
+3. **Filter** `/tmp/competitor_urls.txt` — remove blog posts, news, AI-tool directories (seektool.ai, respan.ai, agentsindex.ai, toolradar.com, aitoolsatlas.ai, vibecodedthis.com, etc.), review aggregators (g2.com, capterra.com), databases (crunchbase.com, tracxn.com), user's own domain. See `references/workflow.md` for the full noise-domain list.
+4. For `vs_names` entries that have a resolved `domain`, add them. For unresolved names, optionally run `bb search "{name}" --num-results 3` and pick the top root domain.
+5. Merge with user-provided seed URLs. Dedup by hostname → `/tmp/competitor_candidates.txt`.
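+
+A minimal sketch of one wave's Bash call (step 1), assuming `bb search --output <file>` writes the batch JSON described in `references/workflow.md` — the flag usage is an assumption, so adjust to the CLI's actual syntax:
+
+```bash
+# Wave A — both queries chained into one Bash call, one batch file per query.
+bb search "alternatives to acme" --output /tmp/competitor_discovery_batch_1.json && \
+bb search "acme competitors" --output /tmp/competitor_discovery_batch_2.json
+```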
+
+## Step 4: Gate (category-fit filter)
+
+Drop candidates whose marketing identifies them as a *different* category before enrichment burns tool calls on them.
+
+```bash
+cat /tmp/competitor_candidates.txt \
+ | node {SKILL_DIR}/scripts/gate_candidates.mjs \
+ --include "{profile.category_include_keywords joined with commas}" \
+ --exclude "{profile.exclusion_list joined with commas}" \
+ --concurrency 6 \
+ > /tmp/competitor_gated.jsonl
+
+grep '"status":"PASS"' /tmp/competitor_gated.jsonl \
+ | node -e 'require("fs").readFileSync(0,"utf-8").split("\n").filter(Boolean).forEach(l => { try { console.log(JSON.parse(l).url); } catch {} })' \
+ > /tmp/competitor_passed.txt
+```
+
+The gate fetches each candidate's homepage via `bb fetch --allow-redirects`, extracts the first 800 chars of visible text, and classifies position-aware: an exclude keyword in the `<title>` → REJECT; an include keyword in the `<title>` → PASS; a hybrid title (both) → tiebreak on the first 200 chars of hero text (hero200); otherwise fall through.
+
+**Evaluated on Browserbase** with 12 mixed candidates: 7/7 real competitors passed, 4/4 wrong-category rejected, 1 known-hybrid edge case rejected.
+
+## Step 4.5: Confirm enrichment set with the user
+
+**This step is mandatory. Do NOT skip to enrichment just because the gate ran.**
+
+Enrichment is expensive: 5 competitors × 5 lane-subagents = 25 subagents, ~10-15 minutes of wall clock, ~300 `bb` calls. Running it on the wrong set wastes all of that. The gate also has known blind spots:
+
+- **JS-heavy homepages** (e.g. Tavily, Firecrawl) — `bb fetch` returns near-empty text, so keyword matching has nothing to match on → REJECT or UNKNOWN
+- **Cloudflare challenge pages** (e.g. Perplexity) — title becomes "Just a moment..." → no category signal
+- **Semantic variants** — "search foundation" / "retrieval backbone" don't lexically match a list centered on "search API"
+- **Domain ambiguity** — `brave.com` (the browser) vs `api-dashboard.search.brave.com` (the actual API product) can confuse classification
+
+The user almost always has domain knowledge the skill lacks. Ask them.
+
+**Process** — the main agent:
+
+1. Read `/tmp/competitor_gated.jsonl` and group rows (a minimal grouping sketch follows this list):
+ - **PASS bucket**: everything with status=PASS.
+ - **UNKNOWN bucket**: status=UNKNOWN (fetch failed — always surface, these are the silent misses).
+ - **Rejected-brand bucket**: top ~10 REJECT rows whose title mentions a well-known brand pattern (e.g. contains the token from a user-supplied seed list, or appears frequently in the Wave C "X vs Y" graph).
+
+2. Present the buckets to the user, one table per bucket, with URL + title + reason (for rejects).
+
+3. Use `AskUserQuestion` with a checkbox list of all candidates across the three buckets, plus a free-text "add more" field. The prompt should be explicit:
+ > "Here are the gate's picks plus a few it was unsure about. Tick the ones that are real competitors in your space, and paste any URLs I missed (comma-separated). Enrichment will run on ONLY the ticked set."
+
+4. Write the confirmed set to `/tmp/competitor_enrichment_set.txt` (one URL per line). This is the input for Step 5 — not `/tmp/competitor_passed.txt`.
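+
+A minimal grouping sketch for step 1, assuming each gate row carries `status`, `url`, `title`, and `reason` fields (trim the REJECT output to the ~10 brand-like rows by hand):
+
+```bash
+node -e '
+const fs = require("fs");
+const rows = fs.readFileSync("/tmp/competitor_gated.jsonl", "utf-8")
+  .split("\n").filter(Boolean).map((l) => JSON.parse(l));
+for (const status of ["PASS", "UNKNOWN", "REJECT"]) {
+  console.log(`\n## ${status} (${rows.filter((r) => r.status === status).length})`);
+  rows.filter((r) => r.status === status)
+      .forEach((r) => console.log(`${r.url}\t${r.title || ""}\t${r.reason || ""}`));
+}'
+```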
+
+**If the user doesn't respond** or explicitly says "just run it", fall back to `/tmp/competitor_passed.txt` as-is, but warn in chat that the run may waste budget on wrong-category hits.
+
+**Exa test, 2026-04-24**: gate auto-passed 22 of 101 candidates but missed Tavily (generic title), Jina AI (semantic mismatch — "search foundation"), Firecrawl (JS-heavy fetch failure), and Perplexity (Cloudflare challenge). All four are real direct competitors. This step catches them.
+
+## Step 5: Deep Enrichment
+
+Two modes. See `references/workflow.md` for prompt templates and wave management. See `references/research-patterns.md` for the lane-by-lane methodology.
+
+### Quick mode — single subagent per batch
+- Input: `/tmp/competitor_enrichment_set.txt` (user-confirmed set from Step 4.5), ~8 competitors per subagent.
+- One subagent runs Lane A only (marketing surface). 2-3 tool calls each.
+- Writes directly to `{OUTPUT_DIR}/{slug}.md`.
+
+### Deep / Deeper mode — 5 subagents PER competitor (parallel lane fan-out)
+For each competitor, launch 5 parallel subagents, one per lane:
+- **A. Marketing** (`marketing`): pricing, features, positioning, integrations, customers, team, funding, HQ. Owns canonical frontmatter.
+- **B. Discussion** (`discussion`): Reddit, HN, forums, Dev.to, Hashnode. Broad queries beyond `site:` — also `"{competitor}" review 2026`, `"{competitor}" issues OR problems`, `"{competitor}" discussion`.
+- **C. Social** (`social`): LinkedIn posts, YouTube videos, Twitter/X. Snippets only — do NOT fetch.
+- **D. News & Comparisons** (`news`): TechCrunch, Verge, VentureBeat, Forbes, Businesswire, Substack, blog reviews. Every mention needs a date.
+- **E. Technical & Benchmarks** (`technical`): GitHub benchmark repos/PRs, performance posts. Writes Benchmarks + technical Findings.
+
+Budget per lane: deep = 5-8 tool calls, deeper = 10-15.
+**Launch ALL competitor × lane subagents in a SINGLE Agent tool message.** For 10 competitors × 5 lanes, that is 50 parallel Agent calls in one message. Do NOT split into batches per competitor or per lane — launched all at once, wall clock collapses to the slowest single agent (~3-5 min). On a real measured run, splitting into 5 rounds of 10 took ~25 minutes of wall clock versus ~5 minutes fully parallel; do not do it.
+
+Each subagent writes a partial to `{OUTPUT_DIR}/partials/{slug}.{lane}.md`.
+
+**Critical**: Pass the user's company name, product, and key features verbatim into every subagent prompt so the technical lane can do strategic diffing. Pass the full literal `{OUTPUT_DIR}` path to every subagent.
+
+### Merge partials → canonical per-competitor file
+After all subagents for all competitors complete:
+```bash
+node {SKILL_DIR}/scripts/merge_partials.mjs {OUTPUT_DIR}
+```
+Unions the 5 partials per competitor into one `{OUTPUT_DIR}/{slug}.md` — dedup'd Mentions (sorted by date desc), dedup'd Benchmarks, merged Findings, canonical frontmatter from the marketing lane.
+
+### Step 5b: Synthesize the comparison matrix (write `matrix.json`)
+
+**Subagents write `key_features` and `integrations` as prose**, not as pipe-separated atomic feature labels. So a naive `|`-split axis becomes one-blob-per-competitor with no overlap — the rendered matrix shows a useless diagonal.
+
+The main agent fixes this by synthesizing a **shared taxonomy** across competitors and writing `{OUTPUT_DIR}/matrix.json`. `compile_report.mjs` auto-detects this file and renders the matrix from it instead of from the pipe split.
+
+**Process** — main agent:
+1. Read ALL `{slug}.md` files, INCLUDING the user's company file `{user-slug}.md` produced in Step 1. The user is competitor #0 for matrix purposes — treat with identical rigor.
+2. Produce a canonical list of 12-20 *atomic* features — each must be a yes/no proposition a competitor either has or doesn't (e.g. "MCP server", "SOC 2", "Site crawler", "Reranker"). Avoid sentence-length features. Avoid features only one competitor has.
+3. Produce a canonical list of 10-20 integrations (frameworks, marketplaces, SDK languages).
+4. For each company INCLUDING THE USER, map each taxonomy entry to `true` / `false` based on the enrichment data in their `.md` file. **Every flag must be traceable to a Research Findings bullet with a cited URL.** If the user's file says "Stagehand MIT-licensed (github.com/browserbase/stagehand)", the Open-source feature is `true` with that URL as the source. If not mentioned, leave `false`.
+5. Write the result to `{OUTPUT_DIR}/matrix.json` in this shape:
+ ```json
+ {
+ "category": "AI search APIs",
+ "features": [{ "name": "Web Search API", "description": "..." }, ...],
+ "integrations": [{ "name": "LangChain" }, ...],
+ "userCompany": {
+ "name": "Exa",
+ "winningSummary": "Exa's moats are its first-party neural index and the integrated Research API — no one else in the set ships a semantic/embeddings-native retrieval primitive alongside a multi-step agentic research endpoint. It's also the only provider with a crawler product bundled in, and ties with SerpAPI on breadth of SDK language coverage.",
+ "losingSummary": "Exa trails competitors on operational transparency — SerpAPI, Serper, and Tavily all publish hourly throughput SLAs, and Exa lacks a dedicated news endpoint that SerpAPI, Serper, and You.com all ship. Image/visual search is also missing vs 4 of 5 competitors.",
+ "features": { "Web Search API": true, "Site crawler": true, ... },
+ "integrations": { "LangChain": true, ... }
+ },
+ "competitors": {
+ "tavily": {
+ "features": { "Web Search API": true, "Site crawler": true, ... },
+ "integrations": { "LangChain": true, "Databricks Marketplace": true, ... }
+ },
+ "serpapi": { "features": {...}, "integrations": {...} }
+ }
+ }
+ ```
+
+ **`userCompany` is required**. The overview page renders two cards — "Where {user} is winning" and "Where {user} is losing". Populate `userCompany.features` and `userCompany.integrations` from the self-research profile (Step 1). Without this field those two cards don't render.
+
+ **`userCompany.winningSummary` / `losingSummary` are strongly preferred** (analyst-style prose, 2-4 sentences each). When present, the cards render as paragraphs instead of bulleted lists — reads like a briefing, not a spreadsheet. Write these AFTER the fact-check step below so prose is grounded in verified cells, not raw inference. If absent, the cards fall back to a bulleted list of winning/losing items with who-else-has-it.
+
+If this step is skipped, the matrix view falls back to the raw pipe-split axis (useless for atomic comparison) and the strategic summary doesn't render. Do not skip.
+
+### Step 5c: Fact-check the matrix — spot-check the high-stakes cells (default)
+
+**Do not trust the taxonomy pass alone for high-stakes cells.** It is LLM inference from prose and will hallucinate moats. Observed during Browserbase run 2026-04-23: matrix.json claimed SOC 2 was unique to Browserbase; verification showed Hyperbrowser, Kernel, and Anchor Browser all have SOC 2 Type II.
+
+But verifying every cell is the opposite mistake. A 7-company × 33-axis matrix has 231 cells. The Apr 2026 Browserbase run got stuck at 111+ tool calls in fact-check before it was interrupted — the subagent kept burning budget on table-stakes cells (Playwright support, CDP, Python SDK) that are universal in the category.
+
+**Default = spot-check, not full sweep.** Only verify cells that meaningfully change the strategic narrative.
+
+Launch a single fact-check subagent (Bash-only) with **a hard 25-call budget** that targets ONLY these high-stakes axes:
+
+1. **Every `userCompany.features` and `userCompany.integrations` cell** (the user's own moats — these go straight into "Where you're winning" prose). Typical: 17 + 16 = 33 cells, but most are obvious (your own product). Focus on:
+ - Anything claimed as a *moat* in `winningSummary`
+ - Anything claimed as a *gap* in `losingSummary`
+ - Compliance (SOC 2, HIPAA, ISO 27001, GDPR)
+ - Open-source license claims (MIT / Apache 2.0 / AGPL — observed wrong on Steel)
+ - Published uptime SLA (status page ≠ SLA)
+
+2. **Across competitors, only the cells that drive the win/loss summary**:
+ - For each "Winning" claim, verify the user has it AND verify the competitors don't.
+ - For each "Losing" claim, verify the named competitors do have it.
+ - Compliance + license + SLA across all competitors (high-trust, frequently wrong).
+
+3. **Do NOT verify**:
+ - Universal table-stakes (Playwright, Puppeteer, CDP, Python SDK) — every cloud browser has these.
+ - `false` cells with no claim being made (no moat lost or won).
+ - Integration cells unless they appear in the win/loss summary.
+
+```
+You are a matrix spot-check subagent. Budget: 25 bb calls TOTAL across all cells.
+Stop and return what you have when you hit the budget — partial fact-check is
+better than blocking the rest of the pipeline.
+
+TOOL RULES: Bash ONLY. bb search + bb fetch. Count your calls; stop at 25.
+
+PRIORITY ORDER (highest-stakes first — work down until budget):
+1. Every cell that appears in userCompany.winningSummary or losingSummary
+2. Compliance cells (SOC 2, HIPAA, ISO 27001) for user + every competitor
+3. Open-source / self-hostable + license cells across all competitors
+4. Pricing tier numbers ($X/mo, /hr) for user + competitors named in summaries
+5. Funding / employee_estimate fields (only if cited in summaries)
+
+Skip:
+- Universal cells (Playwright, Puppeteer, CDP, Python SDK, etc.)
+- `false` cells where no claim is being made
+- Integration matrix cells unless they appear in summaries
+
+For each cell verified:
+- If `true` — find one source URL (docs, trust portal, GitHub LICENSE, etc).
+- If `false` — one targeted bb search. Flip ONLY on first-party evidence.
+
+Output: matrix.json with `sources: { "Feature": "https://..." }` on the
+verified cells (other cells stay as-is). Cells-changed log to
+{OUTPUT_DIR}/matrix_fact_check.md with each flip + URL + quoted evidence.
+Report back: "spot-check: N cells verified, M flipped, B/25 budget used".
+```
+
+**Full-sweep mode (opt-in, slower)**: if the user explicitly says "full fact check" or for a high-stakes deliverable (board deck, press release), set the budget to 80 calls and verify every non-universal cell. Default is spot-check.
+
+After the subagent completes, re-read matrix.json, recompile, and surface the `matrix_fact_check.md` delta to the user. The summary is much more trustworthy with a spot-check than without — and ships in 3-5 minutes instead of stalling the pipeline.
+
+### Step 5d: Battle Card synthesis (deep/deeper only, after Step 5c)
+
+**Depends on fact-checked matrix.json from Step 5c.** This is a sales-enablement lane. For each competitor, launch a Bash-only synthesis subagent (no new `bb` calls) that reads all 5 existing partials + the user's merged `.md` + fact-checked `matrix.json`, and produces per-competitor Landmines / Objection Handlers / Talk Tracks grounded in cited evidence.
+
+Prompt template: `references/battle-card-subagent.md` (substitute `{COMPETITOR_SLUG}` / `{COMPETITOR_NAME}` / `{USER_COMPANY_NAME}` / `{USER_WINNING_SUMMARY}` per competitor). Format spec: `references/battle-card.md`.
+
+Output: `{OUTPUT_DIR}/partials/{slug}.battle.md` with a `## Battle Card` section. `merge_partials.mjs` unions this into the consolidated `{slug}.md`. `compile_report.mjs` renders it as a brand-accented card on the per-competitor HTML page.
+
+**Why this lane is synthesis-only** — battle cards must be grounded in facts that already survived Step 5c. Letting the subagent do fresh `bb` searches would reintroduce the hallucinated-moat problem the fact-check step exists to prevent. The subagent's adversarial self-check explicitly rejects claims not traceable to an input partial bullet or a `sources`-backed matrix cell.
+
+Parallelism: 1 subagent per competitor, all in one Agent-tool message (synthesis is fast, ~3-5 Bash calls per subagent). Skip this step in `quick` mode — there isn't enough research depth to ground the cards credibly.
+
+## Step 6: Screenshots
+
+Capture a homepage hero screenshot per competitor:
+```bash
+node {SKILL_DIR}/scripts/capture_screenshots.mjs {OUTPUT_DIR} --env remote
+```
+
+Uses the `browse` CLI — a separate package from `bb` (`npm install -g @browserbasehq/browse-cli`). Connects to a Browserbase remote session by default. Writes one PNG per competitor to `{OUTPUT_DIR}/screenshots/{slug}-hero.png`. The compile step in Step 7 auto-embeds the hero on each per-competitor HTML page.
+
+Cost: ~10-20s per competitor. ~60s for 5 competitors.
+
+## Step 7: HTML Report
+
+1. **Generate all views + CSV** (opens overview in browser):
+ ```bash
+ node {SKILL_DIR}/scripts/compile_report.mjs {OUTPUT_DIR} --user-company "{user_company}" --open
+ ```
+ Produces:
+ - `{OUTPUT_DIR}/index.html` — overview: competitor table with tagline, pricing summary, key features, strategic diff
+ - `{OUTPUT_DIR}/competitors/{slug}.html` — per-competitor deep dive (all sections)
+ - `{OUTPUT_DIR}/matrix.html` — side-by-side feature/pricing matrix
+ - `{OUTPUT_DIR}/mentions.html` — chronological feed with source-type pills + client-side filter
+ - `{OUTPUT_DIR}/results.csv` — flat spreadsheet
+
+2. **Present a chat summary**:
+
+```
+## Competitor Analysis Complete
+
+- **Competitors researched**: {count}
+- **Depth mode**: {mode}
+- **Mentions collected**: {total mentions} across {source types count} source types
+- **Public benchmarks found**: {count}
+- **Opened in browser**: ~/Desktop/{company_slug}_competitors_{date}/index.html
+```
+
+3. Show the **overview table** in chat:
+
+```
+| Competitor | Positioning | Pricing | Key Features | Strategic Diff |
+|------------|-------------|---------|--------------|----------------|
+| Rival Co | AI-native headless browser | $99/mo entry | stealth, proxies, CAPTCHA | Similar infra; cheaper entry |
+```
+
+4. Call out the top 3-5 most interesting findings — e.g., "3 competitors have public benchmarks; Rival Co is cheapest; Foo Inc launched a session-replay feature 2 weeks ago." Offer to dig deeper into any specific competitor or re-run with different depth.
diff --git a/skills/competitor-analysis/references/battle-card-subagent.md b/skills/competitor-analysis/references/battle-card-subagent.md
new file mode 100644
index 0000000..b367915
--- /dev/null
+++ b/skills/competitor-analysis/references/battle-card-subagent.md
@@ -0,0 +1,127 @@
+# Battle Card subagent prompt
+
+## Contents
+- [Placeholders to substitute](#placeholders-to-substitute) — `{OUTPUT_DIR}`, `{COMPETITOR_SLUG}`, etc.
+- [Prompt](#prompt) — full subagent instruction template (paste with placeholders filled in)
+- [Wave management](#wave-management) — launch policy: one Agent message per run, all competitors in parallel
+
+Main agent substitutes placeholders per competitor. Launch AFTER Step 5c fact-check completes — this lane depends on `matrix.json` cells having `sources` URLs.
+
+## Placeholders to substitute
+
+- `{OUTPUT_DIR}` → full literal path, e.g. `/Users/jay/Desktop/browserbase_competitors_2026-04-24-1930`
+- `{COMPETITOR_SLUG}` → e.g. `hyperbrowser`
+- `{COMPETITOR_NAME}` → e.g. `Hyperbrowser`
+- `{USER_SLUG}` → e.g. `browserbase`
+- `{USER_COMPANY_NAME}` → e.g. `Browserbase`
+- `{USER_PRODUCT_ONE_LINER}` → pulled from Step 1 profile
+- `{USER_WINNING_SUMMARY}` → matrix.json `userCompany.winningSummary`
+- `{USER_LOSING_SUMMARY}` → matrix.json `userCompany.losingSummary`
+
+## Prompt
+
+```
+You are the Battle Card synthesis subagent. Produce an evidence-grounded
+battle card a real AE would use on a call.
+
+TOOL RULES — CRITICAL, FOLLOW EXACTLY:
+1. You may ONLY use the Bash tool. No exceptions.
+2. BANNED TOOLS: WebFetch, WebSearch, Write, Read, Glob, Grep, bb search,
+ bb fetch — ALL BANNED. This is a SYNTHESIS lane, not a research lane.
+ You read files that already exist; you do not make new network calls.
+3. Read ALL inputs in ONE Bash call via `cat`. Write output in ONE heredoc.
+4. NEVER use ~ or $HOME — full literal paths only.
+
+INPUTS (all already exist on disk — read in one Bash call):
+- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.marketing.md
+- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.discussion.md
+- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.social.md
+- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.news.md
+- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.technical.md
+- {OUTPUT_DIR}/{USER_SLUG}.md # user's own merged file
+- {OUTPUT_DIR}/matrix.json # fact-checked matrix — cells
+ # must have a `sources` URL to
+ # be trustworthy; reject any
+ # cell without one
+
+CONTEXT:
+- User's company: {USER_COMPANY_NAME}
+- User's product: {USER_PRODUCT_ONE_LINER}
+- User's verified moats (from matrix.json userCompany.winningSummary):
+ {USER_WINNING_SUMMARY}
+- User's verified gaps (from matrix.json userCompany.losingSummary):
+ {USER_LOSING_SUMMARY}
+- Competitor: {COMPETITOR_NAME}
+- Competitor slug: {COMPETITOR_SLUG}
+
+TASK — produce three sections, every claim traceable to an input bullet
+or matrix.sources URL:
+
+1. LANDMINES (3-5 items) — concrete verifiable facts that HURT
+ {COMPETITOR_NAME} in a deal. Each:
+ - States a specific, verifiable fact (not "they're slow" — "their
+ p50 was 3.4s on the Nov 2025 Halluminate benchmark")
+ - Cites a source URL pulled from an actual bullet in one of the
+ input partials (Mentions / Benchmarks / Research Findings)
+ - Includes a one-line "how to use it" talking point
+ - Prefers third-party sources over competitor's own marketing
+ - If no evidence exists for a potential landmine, OMIT it. 3 cited
+ landmines > 5 half-invented ones.
+
+2. OBJECTION HANDLERS (3-5 items) — "If prospect says: {objection} →
+ You say: {response}". Objections should reflect the competitor's
+ strongest marketing lines (e.g. if their homepage says "99.99%
+ uptime", the objection is "we hear {user} has no uptime guarantee").
+ Responses must reference a real user moat from winningSummary —
+ never a hallucinated feature.
+
+3. TALK TRACKS (2-3 items) — 1-2 sentence opening pitches. Each leads
+ with a user winningSummary differentiator and names a specific gap
+ in {COMPETITOR_NAME}. Confident, factual, no hyperbole.
+
+ADVERSARIAL SELF-CHECK before writing:
+- [ ] Every landmine cites a URL that appears in one of the input
+ partials. No invented URLs.
+- [ ] No claim contradicts a fact-checked cell in matrix.json.
+- [ ] No talk track claims a user feature where matrix.json shows
+ userCompany.features[X] = false.
+- [ ] Objections are realistic (what a prospect would actually raise),
+ not strawmen.
+
+OUTPUT — write via a single heredoc to
+ {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.battle.md
+
+cat << 'BATTLE_MD' > {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.battle.md
+---
+competitor_name: {COMPETITOR_NAME}
+lane: battle
+generated_at: {YYYY-MM-DD}
+---
+
+## Battle Card
+
+### Landmines
+
+- **{one-line fact}** — {how to use it in the call}. (source: {url})
+
+### Objection Handlers
+
+- If they say: "{objection verbatim}"
+ You say: {response citing user's moat} (evidence: {url})
+
+### Talk Tracks
+
+1. {1-2 sentence pitch}
+BATTLE_MD
+
+REPORT BACK only one line:
+ "{COMPETITOR_SLUG} battle: {N} landmines, {M} objections, {K} tracks, all cited."
+
+Do NOT return the card content.
+```
+
+## Wave management
+
+- Launch 1 battle-card subagent per competitor. All can run in parallel (synthesis is fast and uses no shared state beyond already-written partials).
+- Depth: only run in `deep` or `deeper` modes. `quick` mode does not have the research depth to ground battle cards credibly.
+- Budget: ~3-5 Bash calls per subagent (1 big cat, 1 big heredoc, maybe 1-2 sanity checks).
diff --git a/skills/competitor-analysis/references/battle-card.md b/skills/competitor-analysis/references/battle-card.md
new file mode 100644
index 0000000..dbba15a
--- /dev/null
+++ b/skills/competitor-analysis/references/battle-card.md
@@ -0,0 +1,91 @@
+# Battle Card — format spec
+
+The Battle lane is the **6th** subagent lane in deep/deeper mode. It runs AFTER Step 5c fact-check completes — it reads only existing partials + the fact-checked `matrix.json`, **never makes new `bb` calls**. This is a pure synthesis lane.
+
+Output file: `{OUTPUT_DIR}/partials/{slug}.battle.md`. `merge_partials.mjs` unions its `## Battle Card` section into the consolidated `{slug}.md`. `compile_report.mjs` renders it as a brand-accented card on the per-competitor HTML page.
+
+## The three sections
+
+### Landmines (3-5 items)
+
+Concrete, verifiable facts about the competitor that **hurt them in a deal**. Every item must cite a URL from an existing partial (Mentions, Benchmarks, or Research Findings). Prefer third-party evidence (benchmarks, reviews, news) over the competitor's own marketing — marketing claims are weak ammunition.
+
+Format:
+```
+### Landmines
+
+- **{one-line factual claim}** — {how an AE uses it in the call}. (source: {url})
+```
+
+Example:
+```
+- **Anchor won Halluminate's November 2025 stealth benchmark (1.7% fail rate)** — use if prospect worries about detection, but only after confirming their volume tier; Anchor's CAPTCHA product is paywalled behind Starter ($20/mo). (source: https://halluminate.com/browserbench)
+```
+
+### Objection Handlers (3-5 items)
+
+Format: "if prospect says X → you say Y, citing a real user moat from `userCompany.winningSummary`." Every response must reference a feature/integration the fact-checked matrix confirms the user has. Never respond with a claim that contradicts a fact-checked matrix cell.
+
+Format:
+```
+### Objection Handlers
+
+- If they say: "{objection verbatim}"
+ You say: {response citing user's moat} (evidence: {url})
+```
+
+Example:
+```
+- If they say: "Hyperbrowser is $99/mo cheaper than your Scale tier"
+ You say: "Hyperbrowser drops replay this quarter — you'll lose session video when you hit production. Our Scale tier includes session inspector + video recording; matrix.json confirms Hyperbrowser's feature set doesn't cover either." (evidence: https://docs.hyperbrowser.ai/changelog)
+```
+
+### Talk Tracks (2-3 items)
+
+One-to-two sentence opening pitches an AE can memorize. Lead with a user winningSummary differentiator; name the specific gap in the competitor. No hyperbole, no claims not grounded in fact-checked matrix cells.
+
+Format:
+```
+### Talk Tracks
+
+1. {1-2 sentence pitch}
+```
+
+Example:
+```
+1. For production observability, Browserbase is the only provider in the category with BOTH session video recording AND a session inspector UI — Hyperbrowser shipped neither, Anchor shipped neither, and Kernel replaced video replay with rrweb-only last quarter.
+```
+
+## Markdown file shape
+
+```markdown
+---
+competitor_name: Hyperbrowser
+lane: battle
+generated_at: 2026-04-24
+---
+
+## Battle Card
+
+### Landmines
+- **Fact 1** — usage. (source: url)
+- **Fact 2** — usage. (source: url)
+
+### Objection Handlers
+- If they say: "..."
+ You say: ... (evidence: url)
+
+### Talk Tracks
+1. Pitch 1
+2. Pitch 2
+```
+
+## Quality gates — Adversarial self-check (subagent MUST run before writing)
+
+- [ ] Every landmine cites a URL that appears in one of the input partials (Mentions / Benchmarks / Research Findings). No invented URLs.
+- [ ] No claim contradicts a fact-checked cell in `matrix.json` (cells must have a `sources` URL to be trustworthy).
+- [ ] No talk track claims a user feature where `matrix.json` shows `userCompany.features[X] = false`.
+- [ ] Objections are realistic — they're what a prospect would actually raise based on the competitor's strongest marketing lines, not strawmen.
+- [ ] Third-party evidence preferred over competitor's own marketing (benchmarks, reviews, news > their docs/pricing).
+
+If a potential landmine has no evidence in the partials, OMIT it. It is better to ship 3 cited landmines than 5 half-invented ones.
diff --git a/skills/competitor-analysis/references/example-research.md b/skills/competitor-analysis/references/example-research.md
new file mode 100644
index 0000000..687cb04
--- /dev/null
+++ b/skills/competitor-analysis/references/example-research.md
@@ -0,0 +1,129 @@
+# Example Competitor Research File
+
+## Contents
+- [Template](#template) — full worked example for a fictional "Rival Co"
+- [Field Rules](#field-rules) — frontmatter fields, body section order, mention/findings format
+- [Writing via Bash Heredoc](#writing-via-bash-heredoc) — required pattern for subagents to avoid permission prompts
+
+Each enrichment subagent writes one markdown file per competitor to `{OUTPUT_DIR}/{competitor-slug}.md`, where `{OUTPUT_DIR}` is the per-run Desktop directory set up by the main agent in Step 0 (e.g., `~/Desktop/acme_competitors_2026-04-23/`). The YAML frontmatter contains structured fields for report/matrix compilation. The body contains per-section research plus aggregated mentions and benchmarks.
+
+## Template
+
+```markdown
+---
+competitor_name: Rival Co
+website: https://rivalco.com
+tagline: The fastest way to ship browser agents
+positioning: Developer-first headless browser API
+product_description: Cloud-hosted headless browser infrastructure for AI agents and scrapers
+target_customer: AI engineers, scraping teams, SaaS companies
+pricing_model: Usage-based + seat tiers
+pricing_tiers: Free (100 min) | Pro $99/mo | Scale $499/mo | Enterprise Contact
+key_features: stealth proxy | session replay | CAPTCHA solving | CDP protocol | Playwright driver
+integrations: Playwright | Puppeteer | Stagehand | LangChain
+headquarters: San Francisco, CA
+founded: 2023
+employee_estimate: 11-50
+funding_info: Seed, $5M (2024)
+strategic_diff: Similar infra; weaker in stealth, but cheaper entry tier
+---
+
+## Product
+Cloud-hosted headless browser infrastructure. Exposes CDP-compatible sessions with
+built-in stealth, proxies, and CAPTCHA solving. Positioned at AI agents and scraping teams.
+
+## Pricing
+- Free: 100 browser minutes/month, 1 concurrent session
+- Pro ($99/mo): 10K minutes, 5 concurrent, basic proxies
+- Scale ($499/mo): 100K minutes, 50 concurrent, residential proxies, session replay
+- Enterprise: custom pricing, SSO, dedicated support
+
+## Features
+- Stealth mode with fingerprint rotation
+- Residential proxy pool (180+ countries)
+- Auto-CAPTCHA solving
+- Session replay / video recording
+- CDP-compatible WebSocket API
+- Playwright, Puppeteer, Selenium drivers
+
+## Positioning
+Marketing emphasizes "AI-native" and developer-first DX. Landing page hero:
+"Give your agents a browser." Targets solo devs through mid-market AI teams.
+
+## Comparison vs {user_company}
+- **Overlaps**: Headless browser cloud, CDP API, Playwright driver, proxy support
+- **Gaps**: No session inspector UI, no Stagehand-equivalent high-level library, weaker stealth benchmarks
+- **Where they win**: Lower entry price ($99 vs $199), simpler pricing tiers
+- **Where you win**: Stronger stealth (per public benchmarks), better observability, larger integration ecosystem
+
+## Mentions
+- **[Benchmark]** computesdk/benchmarks PR #92 — Rival Co 73% pass rate on stealth tests (source: https://github.com/computesdk/benchmarks/pull/92, 2026-03-14)
+- **[Comparison]** Browserbase vs Rival Co — side-by-side review (source: https://example.com/browserbase-vs-rivalco, 2026-02-01)
+- **[Reddit]** r/webscraping thread: "Moved from Rival Co to X after CAPTCHA issues" — 24 upvotes (source: https://reddit.com/r/webscraping/comments/abc123)
+- **[HN]** "Show HN: Rival Co raises seed to build..." — 112 points, 48 comments (source: https://news.ycombinator.com/item?id=12345)
+- **[LinkedIn]** CEO post on product launch — 412 reactions (source: https://linkedin.com/posts/rivalco-launch)
+- **[YouTube]** "Rival Co vs Browserbase" review by Dev YouTuber — 8.2K views (source: https://youtube.com/watch?v=xyz)
+- **[News]** TechCrunch coverage of seed round (source: https://techcrunch.com/2024/11/rival-co-seed)
+- **[Review]** G2 4.3/5 (31 reviews), main complaint: flaky sessions (source: https://g2.com/products/rival-co)
+
+## Benchmarks
+- **computesdk/benchmarks PR #92** — Rival Co 73% pass rate on stealth, 4th of 7 tested (https://github.com/computesdk/benchmarks/pull/92)
+- **headless-bench blog** — Rival Co 1.8s cold start, 2nd fastest (https://example.com/headless-bench-2026)
+
+## Research Findings
+- **[high]** Usage-based pricing starts at $99/mo for 10K minutes (source: rivalco.com/pricing)
+- **[high]** Series seed, $5M raised Nov 2024 (source: TechCrunch)
+- **[medium]** CEO LinkedIn emphasizes AI-agent use cases (source: linkedin.com/in/rivalco-ceo)
+- **[low]** Possibly a team under 20 based on careers page (source: rivalco.com/careers)
+
+## Battle Card
+
+### Landmines
+- **Rival Co scores 73% on the computesdk stealth benchmark (4th of 7 tested)** — use against stealth-forward prospects; they rank below Browserbase and Hyperbrowser on the same test. (source: https://github.com/computesdk/benchmarks/pull/92)
+- **G2 average 4.3/5 with "flaky sessions" as top complaint across 31 reviews** — cite when prospect raises reliability concerns. (source: https://g2.com/products/rival-co)
+
+### Objection Handlers
+- If they say: "Rival Co is $99/mo — cheaper than your Pro tier"
+ You say: "Cheaper upfront, but compare total cost of stealth incidents — their 73% benchmark pass rate means ~1 in 4 requests hits a challenge page you'll need to retry, and retries aren't free." (evidence: https://github.com/computesdk/benchmarks/pull/92)
+
+### Talk Tracks
+1. For production workloads where session reliability matters, Browserbase ships session inspector + video recording as table stakes; Rival Co has neither in their 2024 product set.
+```
+
+## Field Rules
+
+- **YAML frontmatter**: All structured fields go here. Extracted for matrix + CSV compilation.
+- **`pricing_tiers`**: Pipe-separated (`|`) with tier name + short price. `compile_report.mjs` parses on `|` for the matrix view.
+- **`key_features`**, **`integrations`**: Pipe-separated lists.
+- **`strategic_diff`**: One-line summary (shown in overview table).
+- **Body sections**: `## Product`, `## Pricing`, `## Features`, `## Positioning`, `## Comparison vs {user_company}`, `## Mentions`, `## Benchmarks`, `## Research Findings`, `## Battle Card` (deep/deeper modes only; synthesized by the Battle lane after fact-check).
+- **Mentions format**: `- **[SourceType]** title | snippet (source: url, date)` — `SourceType` is one of `Benchmark`, `Comparison`, `News`, `Reddit`, `HN`, `LinkedIn`, `YouTube`, `Review`, `Podcast`, `X`.
+- **Findings format**: `- **[confidence]** fact (source: url)` — `confidence` is `high`, `medium`, or `low`.
+- **Filename**: `{OUTPUT_DIR}/{competitor-slug}.md` where slug is lowercase, hyphenated.
+
+## Writing via Bash Heredoc
+
+Subagents write these files using bash heredoc to avoid security prompts. Use the full literal `{OUTPUT_DIR}` path — no `~` or `$HOME`:
+
+```bash
+cat << 'COMPETITOR_MD' > {OUTPUT_DIR}/rival-co.md
+---
+competitor_name: Rival Co
+website: https://rivalco.com
+...
+---
+
+## Product
+...
+
+## Pricing
+...
+
+## Mentions
+- **[Benchmark]** ...
+COMPETITOR_MD
+```
+
+Use `'COMPETITOR_MD'` (quoted) as the delimiter to prevent shell variable expansion.
+
+**IMPORTANT**: Write ALL competitor files in a SINGLE Bash call using chained heredocs to minimize permission prompts.
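+
+For example, two competitor files written in one Bash call, heredocs chained with `&&` (frontmatter bodies elided):
+
+```bash
+cat << 'RIVAL_CO_MD' > {OUTPUT_DIR}/rival-co.md && cat << 'OTHER_CO_MD' > {OUTPUT_DIR}/other-co.md
+---
+competitor_name: Rival Co
+...
+---
+RIVAL_CO_MD
+---
+competitor_name: Other Co
+...
+---
+OTHER_CO_MD
+```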
diff --git a/skills/competitor-analysis/references/report-template.html b/skills/competitor-analysis/references/report-template.html
new file mode 100644
index 0000000..023db53
--- /dev/null
+++ b/skills/competitor-analysis/references/report-template.html
@@ -0,0 +1,127 @@
+Competitor Analysis — {{TITLE}}
diff --git a/skills/competitor-analysis/references/research-patterns.md b/skills/competitor-analysis/references/research-patterns.md
new file mode 100644
index 0000000..236319a
--- /dev/null
+++ b/skills/competitor-analysis/references/research-patterns.md
@@ -0,0 +1,217 @@
+# Competitor Analysis — Research Patterns
+
+## Contents
+- [Overview](#overview) — two research contexts (self vs target)
+- [Self-Research (User's Company)](#self-research-users-company) — sub-questions, page discovery, synthesis output (precise_category, include keywords, exclusion list)
+- [Competitor Research — 4 Research Lanes](#competitor-research--4-research-lanes) — Marketing / External / Benchmarks / Strategic Diff
+- [Depth Mode Behavior](#depth-mode-behavior) — quick / deep / deeper budgets and scope
+- [Finding Format (per lane)](#finding-format-per-lane) — JSON shape, confidence levels
+- [Research Loop Rules](#research-loop-rules) — 7 meta-rules for the research phase
+- [Synthesis Instructions](#synthesis-instructions) — turn findings into matrix cells
+
+## Overview
+
+Two research contexts:
+1. **Self-Research** (Step 1) — Deep research on the user's company so we know what "competitor" means for this run.
+2. **Competitor Research** (Step 4) — For each discovered/seeded competitor, run the 4-lane enrichment below.
+
+Both use the Plan → Research → Synthesize pattern. Self-research is identical in shape to the one in `company-research`, so profiles can be reused across skills.
+
+## Self-Research (User's Company)
+
+### Sub-Questions
+- "What does {company} sell and what specific problem does it solve?"
+- "Who are {company}'s existing customers? What industries, company sizes, use cases?"
+- "Who are {company}'s known competitors? What category do they compete in?"
+- "What pricing model does {company} use?"
+- "What features, integrations, and differentiators does {company}'s marketing emphasize?"
+
+### Page Discovery
+Dynamic via sitemap — do NOT hardcode `/about` or `/pricing`:
+1. `bb fetch --allow-redirects "{company website}/sitemap.xml"` — primary source
+2. Scan for URLs with keywords: `pricing`, `customer`, `compare`, `vs`, `about`, `features`, `integrations` (a sketch follows this list)
+3. Optionally fetch `/llms.txt` for page descriptions
+4. Pick 3-5 most relevant URLs
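+
+A sketch of steps 1-2 in one pipe (the sitemap URL is illustrative; the keyword list mirrors the one above):
+
+```bash
+bb fetch --allow-redirects "https://acme.com/sitemap.xml" \
+  | grep -Eo 'https?://[^<" ]+' \
+  | grep -Ei 'pricing|customer|compare|vs|about|features|integrations'
+```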
+
+### External Research
+- `bb search "{company} alternatives competitors vs"`
+- `bb search "{company} review comparison"`
+- Fetch 1-2 most informative third-party pages
+
+### Synthesis Output
+Produce a profile with:
+- **Company**, **Product**, **Existing Customers**, **Competitors** (seed list), **Use Cases**
+- **precise_category** — one clear sentence that describes what category this product competes in. Avoid fuzzy words like "tools" or "platform". Good: "cloud headless browser infrastructure for AI agents exposing CDP". Bad: "browser automation tools". This becomes the anchor for discovery queries and the gate.
+- **category_include_keywords** — 8-15 phrases that a *direct competitor's* marketing would very likely contain (title or hero). Include semantic variants. e.g. for Browserbase: `cloud browser`, `headless browser`, `browser infrastructure`, `browser infra`, `browser api`, `infra for ai agents`, `browser for agents`, `managed chromium`, `cdp`, `remote browser`, `infrastructure for computer use`, `agents and automations`.
+- **exclusion_list** — phrases that indicate a *different* category, used by the gate to reject false positives. e.g. `antidetect browser`, `multilogin`, `scraping api`, `web scraping api`, `screenshot api`, `residential proxy`, `proxy rotation`, `open-source ai browser` (end-user local browsers, not cloud infra), `privacy-first browser`.
+
+The same `profiles/{company-slug}.json` shape used by `company-research`, extended with the three new fields. The `competitors` array becomes the seed list and the first inputs to the comparison-graph expansion in Step 3.
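+
+A hedged sketch of the saved profile file, written via heredoc — the key names and values here are assumptions for illustration; match whatever shape `company-research` already writes:
+
+```bash
+cat << 'PROFILE_JSON' > {SKILL_DIR}/profiles/browserbase.json
+{
+  "company": "Browserbase",
+  "researched_at": "2026-04-23",
+  "product": "Headless browsers in the cloud for AI agents and automations",
+  "existing_customers": ["..."],
+  "competitors": ["Hyperbrowser", "Anchor Browser", "Kernel"],
+  "use_cases": ["..."],
+  "precise_category": "cloud headless browser infrastructure for AI agents exposing CDP",
+  "category_include_keywords": ["cloud browser", "headless browser", "browser infrastructure", "browser api", "cdp", "remote browser"],
+  "exclusion_list": ["antidetect browser", "scraping api", "screenshot api", "residential proxy"]
+}
+PROFILE_JSON
+```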
+
+---
+
+## Competitor Research — 4 Research Lanes
+
+For each competitor, run these four lanes (depth-gated):
+
+### Lane 1 — Marketing Surface (ALL depth modes)
+Goal: extract what the competitor says about themselves from their own site.
+
+**Sub-questions**:
+- "What does {competitor} sell, who is it for, and how is it positioned?"
+- "What are {competitor}'s pricing tiers and pricing model?"
+- "What key features, integrations, and platforms does {competitor} list?"
+
+**Pages to fetch** (via sitemap discovery — do NOT hardcode):
+1. Homepage
+2. `/pricing` (or equivalent from sitemap)
+3. `/features`, `/product`, `/platform`, `/solutions`
+4. `/integrations`, `/customers`, `/case-studies`
+
+**Extract into frontmatter fields**: `tagline`, `positioning`, `product_description`, `target_customer`, `pricing_model`, `pricing_tiers`, `key_features`, `integrations`.
+
+### Lane 2 — External Signal (deep + deeper)
+Goal: what the rest of the internet says about them.
+
+**Sub-questions**:
+- "What third-party comparison pages mention {competitor}?"
+- "What do users say on Reddit, HN, G2, Capterra?"
+- "What recent news, launches, or announcements?"
+- "Who is talking about them on LinkedIn or YouTube?"
+
+**Search queries**:
+```
+"{competitor} vs"
+"{competitor} alternatives"
+"{competitor} review"
+"{competitor} G2" / "{competitor} Capterra"
+"site:reddit.com {competitor}"
+"site:news.ycombinator.com {competitor}"
+"site:linkedin.com/posts {competitor}"
+"site:youtube.com {competitor}"
+"{competitor} launch 2025 OR 2026"
+"{competitor} funding announcement"
+```
+
+**Extraction rule**: From search results, harvest each hit as a `Mentions` entry. Classify source type from the URL:
+- `reddit.com` → `Reddit`
+- `news.ycombinator.com` → `HN`
+- `linkedin.com` → `LinkedIn`
+- `youtube.com` / `youtu.be` → `YouTube`
+- `g2.com` / `capterra.com` / `trustradius.com` → `Review`
+- `*vs*` in path or title → `Comparison`
+- news domains (techcrunch, theverge, venturebeat, forbes, businesswire, globenewswire) → `News`
+- `twitter.com` / `x.com` → `X`
+- `spotify.com/episode` / transistor/simplecast → `Podcast`
+
+For LinkedIn and YouTube, the snippet + URL from `bb search` is enough. Do NOT try to deep-fetch individual LinkedIn posts (auth walls) — list them with title/snippet.
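+
+If you prefer to classify mechanically rather than by eye, a sketch of the hostname → SourceType mapping (a hypothetical helper, not a shipped script):
+
+```bash
+node -e '
+const sourceType = (u) => {
+  const { hostname: h, pathname: p } = new URL(u);
+  if (/reddit\.com$/.test(h)) return "Reddit";
+  if (h === "news.ycombinator.com") return "HN";
+  if (/linkedin\.com$/.test(h)) return "LinkedIn";
+  if (/youtube\.com$|^youtu\.be$/.test(h)) return "YouTube";
+  if (/g2\.com$|capterra\.com$|trustradius\.com$/.test(h)) return "Review";
+  if (/twitter\.com$|^x\.com$/.test(h)) return "X";
+  if (/-vs-|\/vs\//.test(p)) return "Comparison";
+  if (/techcrunch|theverge|venturebeat|forbes|businesswire|globenewswire/.test(h)) return "News";
+  if (/spotify\.com$|transistor|simplecast/.test(h)) return "Podcast";
+  return null; // unknown — classify from the title or snippet instead
+};
+console.log(sourceType("https://www.reddit.com/r/webscraping/comments/abc123")); // Reddit
+'
+```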
+
+### Lane 3 — Public Benchmarks (deeper only)
+Goal: find third-party benchmarks that measured this competitor's product.
+
+**Sub-questions**:
+- "Has {competitor} been included in any public benchmark?"
+- "Are there GitHub repos, PRs, or blog posts comparing {competitor} head-to-head on a measured axis (speed, accuracy, cost, pass rate)?"
+
+**Search queries**:
+```
+"{competitor} benchmark"
+"{competitor} performance test"
+"site:github.com {competitor} benchmark"
+"site:github.com {competitor} vs"
+"{competitor} vs {seed_competitor} benchmark" # pairwise, use another known competitor as the seed
+"{category} benchmark {competitor}" # e.g. "headless browser benchmark {competitor}"
+```
+
+**Extraction**: Add each hit to `Benchmarks` section with: title, source, URL, key finding (one line). Also mirror into `Mentions` with type `Benchmark`.
+
+**Known benchmark repos to check directly** (if domain is on-topic):
+- `github.com/computesdk/benchmarks`
+- Category-specific benchmark repos discovered via the first search wave
+
+### Lane 4 — Strategic Diff vs User's Company (deeper only)
+Goal: explicitly compare this competitor to the user's company.
+
+**Inputs**: `{user_company_profile}` (from Step 1) — specifically `product`, `use_cases`, `key_features` if available.
+
+**Sub-questions**:
+- "What features does {competitor} have that {user_company} does not?"
+- "What features does {user_company} have that {competitor} does not?"
+- "Who does {competitor} serve that {user_company} does not (and vice versa)?"
+- "Where does each one win on the marketing surface (price, feature depth, DX, ecosystem)?"
+
+**No new fetches required** for this lane — it's a synthesis step over Lane 1 + 2 + 3 findings plus the user's profile. Write as:
+
+```markdown
+## Comparison vs {user_company}
+- **Overlaps**: ...
+- **Gaps**: ...
+- **Where they win**: ...
+- **Where you win**: ...
+```
+
+Also populate the `strategic_diff` frontmatter field with a one-line summary for the overview table.
+
+---
+
+## Depth Mode Behavior
+
+### Quick Mode (many competitors, cheap)
+- **Lanes**: 1 only
+- **Budget**: 2-3 tool calls per competitor (homepage + pricing page)
+- **Fields populated**: tagline, product_description, pricing_tiers, key_features
+- **Mentions / Benchmarks / Comparison**: skipped
+
+### Deep Mode (balanced, default)
+- **Lanes**: 1 + 2
+- **Budget**: 5-8 tool calls per competitor
+- **Everything in quick** + 5-10 mentions across source types
+
+### Deeper Mode (full intel)
+- **Lanes**: 1 + 2 + 3 + 4
+- **Budget**: 10-15 tool calls per competitor
+- **Everything in deep** + benchmarks section + strategic diff section
+
+---
+
+## Finding Format (per lane)
+
+Every finding is a factual statement tied to a source:
+
+```json
+{
+ "lane": "marketing | external | benchmark | strategic",
+ "fact": "Rival Co charges $99/mo for 10K browser minutes",
+ "sourceUrl": "https://rivalco.com/pricing",
+ "confidence": "high"
+}
+```
+
+**Confidence**:
+- `high`: Directly stated on the competitor's own website or official press
+- `medium`: Inferred from third-party articles, reviews, or job posts
+- `low`: Speculative / outdated sources
+
+## Research Loop Rules
+
+1. **Lane 1 first** — always start with the competitor's own site
+2. **Use sitemap, not hardcoded paths** — `/pricing` might be `/plans` or `/pricing-plans`
+3. **Rephrase, don't retry** — if a search returns generic junk, switch keywords
+4. **Fetch selectively** — pick the 1-2 most promising URLs per query
+5. **For LinkedIn/YouTube: search only, don't fetch** — snippet is enough, avoid auth walls
+6. **Respect step budget** per depth mode
+7. **Deduplicate mentions** — same URL should only appear once in `## Mentions`
+
+## Synthesis Instructions
+
+After the research loop completes for a competitor:
+
+1. Fill frontmatter fields from Lane 1 findings
+2. Write body sections: Product, Pricing, Features, Positioning (all from Lane 1)
+3. Append `## Mentions` from Lane 2 classified hits
+4. Append `## Benchmarks` from Lane 3 (deeper only)
+5. Append `## Comparison vs {user_company}` from Lane 4 synthesis (deeper only)
+6. Append `## Research Findings` as a raw-findings appendix with confidence tags
+
+No ICP score. No threat score. Pure intel.
+
+If a field has no supporting findings, leave it empty rather than guessing.
diff --git a/skills/competitor-analysis/references/workflow.md b/skills/competitor-analysis/references/workflow.md
new file mode 100644
index 0000000..b593c4f
--- /dev/null
+++ b/skills/competitor-analysis/references/workflow.md
@@ -0,0 +1,427 @@
+# Competitor Analysis — Workflow Reference
+
+## Contents
+- [Discovery Batch JSON Schema](#discovery-batch-json-schema) — bb search output format
+- [Competitor Research Markdown Format](#competitor-research-markdown-format) — frontmatter + body section spec
+- [Extracting Text from HTML](#extracting-text-from-html) — bb fetch | jq | sed pipeline
+- [Discovery — parallel Bash, not subagents](#discovery--parallel-bash-not-subagents) — Wave A/B/C recipes
+- [Enrichment fan-out — 5 subagents PER competitor](#enrichment-fan-out--5-subagents-per-competitor-deepdeeper-modes)
+- [Legacy: Single-subagent template](#legacy-single-subagent-template-quick-mode-only) — quick mode only
+- [Wave Management](#wave-management) — parallelism rule, gate phase, sizing formula
+- [Report Compilation](#report-compilation) — compile_report.mjs invocation
+
+## Discovery Batch JSON Schema
+
+File: `/tmp/competitor_discovery_batch_{N}.json`
+
+`bb search --output` writes a JSON object:
+
+```json
+{
+ "requestId": "abc123",
+ "query": "alternatives to acme",
+ "results": [
+ { "url": "https://example.com", "title": "Example Corp", "author": null, "publishedDate": null }
+ ]
+}
+```
+
+The `list_urls.mjs` script (run with `--prefix competitor`) deduplicates across batches.
+
+## Competitor Research Markdown Format
+
+File: `{OUTPUT_DIR}/{competitor-slug}.md` — see `references/example-research.md` for the full template.
+
+**YAML frontmatter fields** (used by `compile_report.mjs`):
+- `competitor_name` (required)
+- `website` (required)
+- `tagline`
+- `positioning`
+- `product_description`
+- `target_customer`
+- `pricing_model`
+- `pricing_tiers` (pipe-separated: `Free | Pro $99 | Enterprise Contact`)
+- `key_features` (pipe-separated)
+- `integrations` (pipe-separated)
+- `headquarters`
+- `founded`
+- `employee_estimate`
+- `funding_info`
+- `strategic_diff` (one-line for overview table; deeper mode only)
+
+**Body sections** (in this order — `compile_report.mjs` parses by heading):
+- `## Product`
+- `## Pricing`
+- `## Features`
+- `## Positioning`
+- `## Comparison vs {user_company}` (deeper only)
+- `## Mentions`
+- `## Benchmarks` (deeper only)
+- `## Research Findings`
+
+**Mentions line format** (parsed into the mentions feed):
+```
+- **[SourceType]** Title | Snippet (source: URL, YYYY-MM-DD)
+```
+`SourceType` ∈ `Benchmark | Comparison | News | Reddit | HN | LinkedIn | YouTube | Review | Podcast | X`. Date is optional but preferred.
+
+## Extracting Text from HTML
+
+`bb fetch --allow-redirects` returns raw HTML. To extract readable text in one pipe:
+
+```bash
+bb fetch --allow-redirects "https://rivalco.com/pricing" | sed 's/<[^>]*>//g' | tr -s ' \n'
+