23 commits
- 7e66350 feat: add competitor-analysis skill (jay-sahnan, Apr 24, 2026)
- 9645433 fix(competitor-analysis): harden merge_partials against subagent form… (jay-sahnan, Apr 24, 2026)
- 356c777 feat(competitor-analysis): add mandatory Step 4.5 — confirm enrichmen… (jay-sahnan, Apr 24, 2026)
- 583f581 fix(competitor-analysis): normalize mention-bullet format during merge (jay-sahnan, Apr 24, 2026)
- ab4d53f refactor(competitor-analysis): drop pricing screenshot — hero only (jay-sahnan, Apr 24, 2026)
- b569de6 feat(competitor-analysis): render matrix from curated matrix.json tax… (jay-sahnan, Apr 24, 2026)
- 4c75bf0 fix(competitor-analysis): realign matrix column headers (jay-sahnan, Apr 24, 2026)
- 03108ef fix(competitor-analysis): horizontal matrix column headers (jay-sahnan, Apr 24, 2026)
- 64b63ff feat(competitor-analysis): add "Where you're winning / losing" on ove… (jay-sahnan, Apr 24, 2026)
- 86b9888 docs(competitor-analysis): require userCompany in matrix.json schema (jay-sahnan, Apr 24, 2026)
- 6a8df89 feat(competitor-analysis): mandate fact-check subagent for matrix.json (jay-sahnan, Apr 24, 2026)
- 09d904c feat(competitor-analysis): prose summaries for win/loss cards (jay-sahnan, Apr 24, 2026)
- 73f573d fix(competitor-analysis): mandate user-company research parity with c… (jay-sahnan, Apr 24, 2026)
- 1f8d742 feat(competitor-analysis): battle card lane (6th synthesis subagent) (jay-sahnan, Apr 24, 2026)
- e8bb80b fix(competitor-analysis): accept battle-lane format drift at merge time (jay-sahnan, Apr 24, 2026)
- 716516e fix(competitor-analysis): harden merge + capture scripts on fresh run (jay-sahnan, Apr 24, 2026)
- 8eef24f fix(competitor-analysis): tighten overview table — exclude user, trun… (jay-sahnan, Apr 24, 2026)
- d8702df fix(competitor-analysis): mentions feed — alias frontmatter + normali… (jay-sahnan, Apr 24, 2026)
- d83b7cc perf(competitor-analysis): fix 25-min Step 5 wall-clock + skill-creat… (jay-sahnan, Apr 24, 2026)
- 0468e06 perf(competitor-analysis): spot-check fact-check by default, 25-call … (jay-sahnan, Apr 24, 2026)
- 9f882f2 perf(competitor-analysis): hard-cap research lane tool calls + halve … (jay-sahnan, Apr 24, 2026)
- fb58a51 fix(competitor-analysis): four Cursor Bugbot findings (jay-sahnan, Apr 25, 2026)
- 37087d2 bugbot fixes (jay-sahnan, Apr 25, 2026)
2 changes: 2 additions & 0 deletions skills/competitor-analysis/.gitignore
@@ -0,0 +1,2 @@
profiles/*.json
!profiles/example.json
403 changes: 403 additions & 0 deletions skills/competitor-analysis/SKILL.md

Large diffs are not rendered by default.

127 changes: 127 additions & 0 deletions skills/competitor-analysis/references/battle-card-subagent.md
@@ -0,0 +1,127 @@
# Battle Card subagent prompt

## Contents
- [Placeholders to substitute](#placeholders-to-substitute) — `{OUTPUT_DIR}`, `{COMPETITOR_SLUG}`, etc.
- [Prompt](#prompt) — full subagent instruction template (paste with placeholders filled in)
- [Wave management](#wave-management) — launch policy: one Agent message per run, all competitors in parallel

The main agent substitutes the placeholders per competitor. Launch AFTER the Step 5c fact-check completes — this lane depends on `matrix.json` cells having `sources` URLs.

## Placeholders to substitute

- `{OUTPUT_DIR}` → full literal path, e.g. `/Users/jay/Desktop/browserbase_competitors_2026-04-24-1930`
- `{COMPETITOR_SLUG}` → e.g. `hyperbrowser`
- `{COMPETITOR_NAME}` → e.g. `Hyperbrowser`
- `{USER_SLUG}` → e.g. `browserbase`
- `{USER_COMPANY_NAME}` → e.g. `Browserbase`
- `{USER_PRODUCT_ONE_LINER}` → pulled from Step 1 profile
- `{USER_WINNING_SUMMARY}` → matrix.json `userCompany.winningSummary`
- `{USER_LOSING_SUMMARY}` → matrix.json `userCompany.losingSummary`
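The substitution step can be sketched as follows. This is a hypothetical helper, not part of the skill's scripts; it only fills `{UPPER_SNAKE}` tokens, so lowercase output-format tokens like `{objection}` and the hyphenated `{YYYY-MM-DD}` in the template body are deliberately left untouched.

```javascript
// Hypothetical helper: fill `{UPPER_SNAKE}` placeholders in the prompt template.
// Tokens with lowercase letters or hyphens (e.g. {objection}, {YYYY-MM-DD}) are
// intentionally not matched and pass through unchanged.
function fillPrompt(template, values) {
  const filled = template.replace(/\{([A-Z_][A-Z0-9_]*)\}/g, (match, key) =>
    key in values ? values[key] : match
  );
  // Fail loudly if a known-style placeholder was never supplied.
  const leftover = filled.match(/\{[A-Z_][A-Z0-9_]*\}/g);
  if (leftover) throw new Error(`Unfilled placeholders: ${leftover.join(", ")}`);
  return filled;
}

const prompt = fillPrompt(
  "Competitor: {COMPETITOR_NAME} ({COMPETITOR_SLUG}) vs {USER_COMPANY_NAME}",
  {
    COMPETITOR_NAME: "Hyperbrowser",
    COMPETITOR_SLUG: "hyperbrowser",
    USER_COMPANY_NAME: "Browserbase",
  }
);
```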

## Prompt

```
You are the Battle Card synthesis subagent. Produce an evidence-grounded
battle card a real AE would use on a call.

TOOL RULES — CRITICAL, FOLLOW EXACTLY:
1. You may ONLY use the Bash tool. No exceptions.
2. BANNED TOOLS: WebFetch, WebSearch, Write, Read, Glob, Grep, bb search,
bb fetch — ALL BANNED. This is a SYNTHESIS lane, not a research lane.
You read files that already exist; you do not make new network calls.
3. Read ALL inputs in ONE Bash call via `cat`. Write output in ONE heredoc.
4. NEVER use ~ or $HOME — full literal paths only.

INPUTS (all already exist on disk — read in one Bash call):
- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.marketing.md
- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.discussion.md
- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.social.md
- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.news.md
- {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.technical.md
- {OUTPUT_DIR}/{USER_SLUG}.md # user's own merged file
- {OUTPUT_DIR}/matrix.json # fact-checked matrix — cells
# must have a `sources` URL to
# be trustworthy; reject any
# cell without one

CONTEXT:
- User's company: {USER_COMPANY_NAME}
- User's product: {USER_PRODUCT_ONE_LINER}
- User's verified moats (from matrix.json userCompany.winningSummary):
{USER_WINNING_SUMMARY}
- User's verified gaps (from matrix.json userCompany.losingSummary):
{USER_LOSING_SUMMARY}
- Competitor: {COMPETITOR_NAME}
- Competitor slug: {COMPETITOR_SLUG}

TASK — produce three sections, every claim traceable to an input bullet
or matrix.sources URL:

1. LANDMINES (3-5 items) — concrete verifiable facts that HURT
{COMPETITOR_NAME} in a deal. Each:
- States a specific, verifiable fact (not "they're slow" — "their
p50 was 3.4s on the Nov 2025 Halluminate benchmark")
- Cites a source URL pulled from an actual bullet in one of the
input partials (Mentions / Benchmarks / Research Findings)
- Includes a one-line "how to use it" talking point
- Prefers third-party sources over competitor's own marketing
- If no evidence exists for a potential landmine, OMIT it. 3 cited
landmines > 5 half-invented ones.

2. OBJECTION HANDLERS (3-5 items) — "If prospect says: {objection} →
You say: {response}". Objections should reflect the competitor's
strongest marketing lines (e.g. if their homepage says "99.99%
uptime", the objection is "we hear {user} has no uptime guarantee").
Responses must reference a real user moat from winningSummary —
never a hallucinated feature.

3. TALK TRACKS (2-3 items) — 1-2 sentence opening pitches. Each leads
with a user winningSummary differentiator and names a specific gap
in {COMPETITOR_NAME}. Confident, factual, no hyperbole.

ADVERSARIAL SELF-CHECK before writing:
- [ ] Every landmine cites a URL that appears in one of the input
partials. No invented URLs.
- [ ] No claim contradicts a fact-checked cell in matrix.json.
- [ ] No talk track claims a user feature where matrix.json shows
userCompany.features[X] = false.
- [ ] Objections are realistic (what a prospect would actually raise),
not strawmen.

OUTPUT — write via a single heredoc to
{OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.battle.md

cat << 'BATTLE_MD' > {OUTPUT_DIR}/partials/{COMPETITOR_SLUG}.battle.md
---
competitor_name: {COMPETITOR_NAME}
lane: battle
generated_at: {YYYY-MM-DD}
---

## Battle Card

### Landmines

- **{one-line fact}** — {how to use it in the call}. (source: {url})

### Objection Handlers

- If they say: "{objection verbatim}"
You say: {response citing user's moat} (evidence: {url})

### Talk Tracks

1. {1-2 sentence pitch}
BATTLE_MD

REPORT BACK only one line:
"{COMPETITOR_SLUG} battle: {N} landmines, {M} objections, {K} tracks, all cited."

Do NOT return the card content.
```

## Wave management

- Launch 1 battle-card subagent per competitor. All can run in parallel (synthesis is fast and uses no shared state beyond already-written partials).
- Depth: only run in `deep` or `deeper` modes. `quick` mode does not have the research depth to ground battle cards credibly.
- Budget: ~3-5 Bash calls per subagent (1 big cat, 1 big heredoc, maybe 1-2 sanity checks).
91 changes: 91 additions & 0 deletions skills/competitor-analysis/references/battle-card.md
@@ -0,0 +1,91 @@
# Battle Card — format spec

The Battle lane is the **6th** subagent lane in deep/deeper mode. It runs AFTER Step 5c fact-check completes — it reads only existing partials + the fact-checked `matrix.json`, **never makes new `bb` calls**. This is a pure synthesis lane.

Output file: `{OUTPUT_DIR}/partials/{slug}.battle.md`. `merge_partials.mjs` unions its `## Battle Card` section into the consolidated `{slug}.md`. `compile_report.mjs` renders it as a brand-accented card on the per-competitor HTML page.
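The union step can be sketched like this — a minimal stand-in for what `merge_partials.mjs` does with the battle partial, not its actual code; the function takes plain strings so the file layout above is assumed, not hard-coded:

```javascript
// Hypothetical helper: lift the `## Battle Card` section out of a
// `{slug}.battle.md` partial (frontmatter stripped) so it can be appended
// to the consolidated `{slug}.md`.
function extractBattleSection(partial) {
  // Drop the YAML frontmatter block if present.
  const body = partial.replace(/^---\n[\s\S]*?\n---\n/, "");
  const start = body.indexOf("## Battle Card");
  if (start === -1) return null; // lane did not run (e.g. quick mode)
  // The section runs until the next H2 heading, or end of file.
  const next = body.indexOf("\n## ", start + 1);
  const section = next === -1 ? body.slice(start) : body.slice(start, next);
  return section.trimEnd();
}
```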

## The three sections

### Landmines (3-5 items)

Concrete, verifiable facts about the competitor that **hurt them in a deal**. Every item must cite a URL from an existing partial (Mentions, Benchmarks, or Research Findings). Prefer third-party evidence (benchmarks, reviews, news) over the competitor's own marketing — marketing claims are weak ammunition.

Format:
```
### Landmines

- **{one-line factual claim}** — {how an AE uses it in the call}. (source: {url})
```

Example:
```
- **Anchor won Halluminate's November 2025 stealth benchmark (1.7% fail rate)** — use if prospect worries about detection, but only after confirming their volume tier; Anchor's CAPTCHA product is paywalled behind Starter ($20/mo). (source: https://halluminate.com/browserbench)
```

### Objection Handlers (3-5 items)

Format: "if prospect says X → you say Y, citing a real user moat from `userCompany.winningSummary`." Every response must reference a feature/integration the fact-checked matrix confirms the user has. Never respond with a claim that contradicts a fact-checked matrix cell.

Format:
```
### Objection Handlers

- If they say: "{objection verbatim}"
You say: {response citing user's moat} (evidence: {url})
```

Example:
```
- If they say: "Hyperbrowser is $99/mo cheaper than your Scale tier"
You say: "Hyperbrowser drops replay this quarter — you'll lose session video when you hit production. Our Scale tier includes session inspector + video recording; matrix.json confirms Hyperbrowser's feature set doesn't cover either." (evidence: https://docs.hyperbrowser.ai/changelog)
```

### Talk Tracks (2-3 items)

One-to-two sentence opening pitches an AE can memorize. Lead with a user winningSummary differentiator; name the specific gap in the competitor. No hyperbole, no claims not grounded in fact-checked matrix cells.

Format:
```
### Talk Tracks

1. {1-2 sentence pitch}
```

Example:
```
1. For production observability, Browserbase is the only provider in the category with BOTH session video recording AND a session inspector UI — Hyperbrowser shipped neither, Anchor shipped neither, and Kernel replaced video replay with rrweb-only last quarter.
```

## Markdown file shape

```markdown
---
competitor_name: Hyperbrowser
lane: battle
generated_at: 2026-04-24
---

## Battle Card

### Landmines
- **Fact 1** — usage. (source: url)
- **Fact 2** — usage. (source: url)

### Objection Handlers
- If they say: "..."
You say: ... (evidence: url)

### Talk Tracks
1. Pitch 1
2. Pitch 2
```

## Quality gates — Adversarial self-check (subagent MUST run before writing)

- [ ] Every landmine cites a URL that appears in one of the input partials (Mentions / Benchmarks / Research Findings). No invented URLs.
- [ ] No claim contradicts a fact-checked cell in `matrix.json` (cells must have a `sources` URL to be trustworthy).
- [ ] No talk track claims a user feature where `matrix.json` shows `userCompany.features[X] = false`.
- [ ] Objections are realistic — they're what a prospect would actually raise based on the competitor's strongest marketing lines, not strawmen.
- [ ] Third-party evidence preferred over competitor's own marketing (benchmarks, reviews, news > their docs/pricing).

If a potential landmine has no evidence in the partials, OMIT it. It is better to ship 3 cited landmines than 5 half-invented ones.
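The no-invented-URLs gate lends itself to a mechanical check. A minimal sketch, assuming the card and the concatenated partials are available as strings (the helper name is hypothetical, not an existing script):

```javascript
// Hypothetical pre-merge gate: every URL a battle card cites via
// "(source: …)" or "(evidence: …)" must already appear somewhere in that
// competitor's research partials. Returns the URLs that fail the check.
function findUncitedUrls(battleMd, partialsText) {
  const cited = [...battleMd.matchAll(/\((?:source|evidence): (https?:\/\/\S+?)\)/g)]
    .map((m) => m[1]);
  return cited.filter((url) => !partialsText.includes(url));
}
```

An empty result means the card passes the first self-check item; any returned URL marks a landmine or handler that should be cut.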
129 changes: 129 additions & 0 deletions skills/competitor-analysis/references/example-research.md
@@ -0,0 +1,129 @@
# Example Competitor Research File

## Contents
- [Template](#template) — full worked example for a fictional "Rival Co"
- [Field Rules](#field-rules) — frontmatter fields, body section order, mention/findings format
- [Writing via Bash Heredoc](#writing-via-bash-heredoc) — required pattern for subagents to avoid permission prompts

Each enrichment subagent writes one markdown file per competitor to `{OUTPUT_DIR}/{competitor-slug}.md`, where `{OUTPUT_DIR}` is the per-run Desktop directory set up by the main agent in Step 0 (e.g., `~/Desktop/acme_competitors_2026-04-23/`). The YAML frontmatter contains structured fields for report/matrix compilation. The body contains per-section research plus aggregated mentions and benchmarks.

## Template

```markdown
---
competitor_name: Rival Co
website: https://rivalco.com
tagline: The fastest way to ship browser agents
positioning: Developer-first headless browser API
product_description: Cloud-hosted headless browser infrastructure for AI agents and scrapers
target_customer: AI engineers, scraping teams, SaaS companies
pricing_model: Usage-based + seat tiers
pricing_tiers: Free (100 min) | Pro $99/mo | Scale $499/mo | Enterprise Contact
key_features: stealth proxy | session replay | CAPTCHA solving | CDP protocol | Playwright driver
integrations: Playwright | Puppeteer | Stagehand | LangChain
headquarters: San Francisco, CA
founded: 2023
employee_estimate: 11-50
funding_info: Seed, $5M (2024)
strategic_diff: Similar infra; weaker in stealth, but cheaper entry tier
---

## Product
Cloud-hosted headless browser infrastructure. Exposes CDP-compatible sessions with
built-in stealth, proxies, and CAPTCHA solving. Positioned at AI agents and scraping teams.

## Pricing
- Free: 100 browser minutes/month, 1 concurrent session
- Pro ($99/mo): 10K minutes, 5 concurrent, basic proxies
- Scale ($499/mo): 100K minutes, 50 concurrent, residential proxies, session replay
- Enterprise: custom pricing, SSO, dedicated support

## Features
- Stealth mode with fingerprint rotation
- Residential proxy pool (180+ countries)
- Auto-CAPTCHA solving
- Session replay / video recording
- CDP-compatible WebSocket API
- Playwright, Puppeteer, Selenium drivers

## Positioning
Marketing emphasizes "AI-native" and developer-first DX. Landing page hero:
"Give your agents a browser." Targets solo devs through mid-market AI teams.

## Comparison vs {user_company}
- **Overlaps**: Headless browser cloud, CDP API, Playwright driver, proxy support
- **Gaps**: No session inspector UI, no Stagehand-equivalent high-level library, weaker stealth benchmarks
- **Where they win**: Lower entry price ($99 vs $199), simpler pricing tiers
- **Where you win**: Stronger stealth (per public benchmarks), better observability, larger integration ecosystem

## Mentions
- **[Benchmark]** computesdk/benchmarks PR #92 — Rival Co 73% pass rate on stealth tests (source: https://github.com/computesdk/benchmarks/pull/92, 2026-03-14)
- **[Comparison]** Browserbase vs Rival Co — side-by-side review (source: https://example.com/browserbase-vs-rivalco, 2026-02-01)
- **[Reddit]** r/webscraping thread: "Moved from Rival Co to X after CAPTCHA issues" — 24 upvotes (source: https://reddit.com/r/webscraping/comments/abc123)
- **[HN]** "Show HN: Rival Co raises seed to build..." — 112 points, 48 comments (source: https://news.ycombinator.com/item?id=12345)
- **[LinkedIn]** CEO post on product launch — 412 reactions (source: https://linkedin.com/posts/rivalco-launch)
- **[YouTube]** "Rival Co vs Browserbase" review by Dev YouTuber — 8.2K views (source: https://youtube.com/watch?v=xyz)
- **[News]** TechCrunch coverage of seed round (source: https://techcrunch.com/2024/11/rival-co-seed)
- **[Review]** G2 4.3/5 (31 reviews), main complaint: flaky sessions (source: https://g2.com/products/rival-co)

## Benchmarks
- **computesdk/benchmarks PR #92** — Rival Co 73% pass rate on stealth, 4th of 7 tested (https://github.com/computesdk/benchmarks/pull/92)
- **headless-bench blog** — Rival Co 1.8s cold start, 2nd fastest (https://example.com/headless-bench-2026)

## Research Findings
- **[high]** Usage-based pricing starts at $99/mo for 10K minutes (source: rivalco.com/pricing)
- **[high]** Series seed, $5M raised Nov 2024 (source: TechCrunch)
- **[medium]** CEO LinkedIn emphasizes AI-agent use cases (source: linkedin.com/in/rivalco-ceo)
- **[low]** Possibly a team under 20 based on careers page (source: rivalco.com/careers)

## Battle Card

### Landmines
- **Rival Co scores 73% on the computesdk stealth benchmark (4th of 7 tested)** — use against stealth-forward prospects; they rank below Browserbase and Hyperbrowser on the same test. (source: https://github.com/computesdk/benchmarks/pull/92)
- **G2 average 4.3/5 with "flaky sessions" as top complaint across 31 reviews** — cite when prospect raises reliability concerns. (source: https://g2.com/products/rival-co)

### Objection Handlers
- If they say: "Rival Co is $99/mo — cheaper than your Pro tier"
You say: "Cheaper upfront, but compare total cost of stealth incidents — their 73% benchmark pass rate means ~1 in 4 requests hits a challenge page you'll need to retry, and retries aren't free." (evidence: https://github.com/computesdk/benchmarks/pull/92)

### Talk Tracks
1. For production workloads where session reliability matters, Browserbase ships session inspector + video recording as table stakes; Rival Co has neither in their 2024 product set.
```

## Field Rules

- **YAML frontmatter**: All structured fields go here. Extracted for matrix + CSV compilation.
- **`pricing_tiers`**: Pipe-separated (`|`) with tier name + short price. `compile_report.mjs` parses on `|` for the matrix view.
- **`key_features`**, **`integrations`**: Pipe-separated lists.
- **`strategic_diff`**: One-line summary (shown in overview table).
- **Body sections**: `## Product`, `## Pricing`, `## Features`, `## Positioning`, `## Comparison vs {user_company}`, `## Mentions`, `## Benchmarks`, `## Research Findings`, `## Battle Card` (deep/deeper modes only; synthesized by the Battle lane after fact-check).
- **Mentions format**: `- **[SourceType]** title | snippet (source: url, date)` — `SourceType` is one of `Benchmark`, `Comparison`, `News`, `Reddit`, `HN`, `LinkedIn`, `YouTube`, `Review`, `Podcast`, `X`.
- **Findings format**: `- **[confidence]** fact (source: url)` — `confidence` is `high`, `medium`, or `low`.
- **Filename**: `{OUTPUT_DIR}/{competitor-slug}.md` where slug is lowercase, hyphenated.
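The pipe-separated fields split the same way regardless of spacing around the `|`. A sketch of the parsing (`compile_report.mjs` does something similar; this is illustrative, not its code):

```javascript
// Hypothetical parser for pipe-separated frontmatter fields such as
// `pricing_tiers`, `key_features`, and `integrations`.
function splitPipes(field) {
  return field
    .split("|")
    .map((item) => item.trim()) // tolerate "a | b" and "a|b" alike
    .filter(Boolean);           // drop empty items from trailing pipes
}

const tiers = splitPipes(
  "Free (100 min) | Pro $99/mo | Scale $499/mo | Enterprise Contact"
);
// tiers[1] === "Pro $99/mo"
```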

## Writing via Bash Heredoc

Subagents write these files using bash heredoc to avoid security prompts. Use the full literal `{OUTPUT_DIR}` path — no `~` or `$HOME`:

```bash
cat << 'COMPETITOR_MD' > {OUTPUT_DIR}/rival-co.md
---
competitor_name: Rival Co
website: https://rivalco.com
...
---

## Product
...

## Pricing
...

## Mentions
- **[Benchmark]** ...
COMPETITOR_MD
```

Use `'COMPETITOR_MD'` (quoted) as the delimiter to prevent shell variable expansion.

**IMPORTANT**: Write ALL competitor files in a SINGLE Bash call using chained heredocs to minimize permission prompts.