Feature Roadmap — Brainstormed and Debated
This issue captures features brainstormed by three specialist agents (Reviewer/UX Advocate, Agent/Developer Advocate, Product Strategist) and then debated across all perspectives. Only features that survived critical scrutiny from all three viewpoints are listed here.
Each feature was voted on as STRONG YES, MAYBE, or NO by each agent. The consensus level is shown for transparency.
Tier 1 — Unanimous (3/3 STRONG YES)
These features were independently proposed AND unanimously endorsed by all three perspectives.
Inline Diff Preview for Addressed Annotations
Consensus: 3/3 STRONG YES · Complexity: Low-Medium
Problem: When an annotation is marked "addressed", the reviewer sees a status badge and the agent's reply — but has no way to see what actually changed without manually inspecting the source. The `replacedText` field exists but isn't surfaced visually.
Proposal: Show a compact before/after inline diff in the panel for addressed annotations. For text annotations, diff `selectedText` vs `replacedText`. Render red/green highlighting similar to GitHub's inline diff view. Add a "View Diff" toggle on addressed items.
Why it survived: Every agent agreed this is the single biggest gap in the review-fix-verify cycle. Without it, "addressed" is just a trust badge. The data already exists (`selectedText` + `replacedText`) — this is primarily a UI rendering task.
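As a sketch of the rendering step, a word-level diff is likely enough for the short `selectedText` / `replacedText` pairs involved (function and type names here are illustrative, not existing API):

```typescript
type DiffOp = { kind: "same" | "removed" | "added"; text: string };

// Word-level diff via longest common subsequence. Each op maps to a
// plain, red, or green span in the panel's inline diff view.
function inlineDiff(before: string, after: string): DiffOp[] {
  const a = before.split(/\s+/).filter(Boolean);
  const b = after.split(/\s+/).filter(Boolean);

  // lcs[i][j] = LCS length of a[i..] and b[j..]
  const lcs: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0),
  );
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      lcs[i][j] =
        a[i] === b[j]
          ? lcs[i + 1][j + 1] + 1
          : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }

  // Walk the table, emitting same/removed/added ops in order.
  const ops: DiffOp[] = [];
  let i = 0;
  let j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) {
      ops.push({ kind: "same", text: a[i] });
      i++;
      j++;
    } else if (lcs[i + 1][j] >= lcs[i][j + 1]) {
      ops.push({ kind: "removed", text: a[i] });
      i++;
    } else {
      ops.push({ kind: "added", text: b[j] });
      j++;
    }
  }
  while (i < a.length) ops.push({ kind: "removed", text: a[i++] });
  while (j < b.length) ops.push({ kind: "added", text: b[j++] });
  return ops;
}
```

For example, `inlineDiff("the quick fox", "the slow fox")` yields `same("the")`, `removed("quick")`, `added("slow")`, `same("fox")`.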
Annotation Categories / Severity Tags
Consensus: 3/3 STRONG YES · Complexity: Low-Medium
Problem: All annotations look identical regardless of importance. A "typo in footer" and a "broken auth flow" get equal visual weight. Reviewers can't signal priority, and agents can't triage.
Proposal: Optional category/severity dropdown on annotation creation. Short, fixed list (max 5): `Bug`, `Content`, `Style`, `A11y`, `Question`. Display as coloured pills in the panel and on highlights. Expose via MCP for agent filtering/triage. Default is no category — preserving zero-friction creation.
Why it survived: All three perspectives independently proposed this. Improves signal quality for both humans (visual scanning) and agents (intelligent triage). The key constraint: must stay strictly optional with a minimal taxonomy. No configurable categories, no full issue tracker.
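A minimal sketch of the taxonomy and an agent-side triage helper, assuming the fixed five-category list above (the `Annotation` shape is illustrative):

```typescript
// The fixed, non-configurable taxonomy (max 5 entries by design).
const CATEGORIES = ["Bug", "Content", "Style", "A11y", "Question"] as const;
type Category = (typeof CATEGORIES)[number];

interface Annotation {
  note: string;
  category?: Category; // absent by default: creation stays zero-friction
}

// Agent-side triage: group annotations by category so an agent can, for
// example, address all Bug items before Style items.
function triage(
  annotations: Annotation[],
): Map<Category | "uncategorised", Annotation[]> {
  const groups = new Map<Category | "uncategorised", Annotation[]>();
  for (const a of annotations) {
    const key = a.category ?? "uncategorised";
    const bucket = groups.get(key) ?? [];
    bucket.push(a);
    groups.set(key, bucket);
  }
  return groups;
}
```

Keeping `category` optional and the list a compile-time constant is what holds the line against configurable categories creeping in.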
Screenshot / Visual Attachment
Consensus: 3/3 STRONG YES · Complexity: Medium
Problem: Some feedback is inherently visual — "the spacing looks wrong", "this doesn't match the design", "the hover state is broken". Text-only notes can't capture visual intent. This is the biggest barrier to designer adoption.
Proposal: Allow reviewers to attach a visual reference to annotations. Scoped approach: a URL/link field for external images (paste a link to a screenshot hosted on Imgur, Figma, etc.) rather than full html2canvas capture with base64 blobs in JSON. Agents with multimodal capabilities receive the image URL via MCP.
Why it survived: The only feature proposed independently by ALL three agents. Unlocks the designer-to-developer workflow. The critical scoping decision: start with external URL references only (low complexity, no JSON bloat), not embedded screenshots. Can be enhanced later.
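Under the URL-only scoping, the storage side reduces to a single string field plus a sanity check before saving. A sketch (the https-only rule is an assumption of this sketch, not a decided constraint):

```typescript
// Minimal sanity check for the attachment field, assuming the scoped
// design above: one optional image URL string per annotation, never an
// embedded image. Only https links are accepted here.
function isValidImageRef(url: string): boolean {
  try {
    return new URL(url).protocol === "https:";
  } catch {
    return false; // not parseable as a URL at all
  }
}
```

Because only a URL is stored, `inline-review.json` stays small and the MCP payload to a multimodal agent is just the link itself.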
Tier 2 — Strong Majority (2/3 STRONG YES)
Features with strong support from two perspectives and no fundamental objections.
Source File Mapping (annotation to source file hint)
Consensus: 2/3 STRONG YES (agent, product) · 1/3 MAYBE (reviewer) · Complexity: High
Problem: When an agent receives an annotation, it knows the page URL and selected text but has no idea which source file to edit. It must grep the entire codebase — expensive, slow, and error-prone for common strings. This is the number one pain point in the agent workflow.
Proposal: Add an optional `sourceHint` field populated by the Vite plugin at annotation creation time. Vite's module graph can map rendered DOM back to source components. The hint would contain `{ filePath: "src/pages/about.astro", lineRange?: [42, 58] }`. Exposed via `list_annotations` and `start_work`.
Why it survived: Both the agent and product perspectives ranked this number one. The reviewer perspective was neutral (doesn't directly help reviewers). High complexity but transformative impact — could ship Astro-only first to prove the concept.
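The hint shape and the agent-side payoff can be sketched as follows; the glob fallback and helper name are illustrative, not existing API:

```typescript
// Hypothetical shape of the hint; lineRange may be absent when the
// module graph can identify the file but not the exact lines.
interface SourceHint {
  filePath: string; // e.g. "src/pages/about.astro"
  lineRange?: [number, number];
}

// Agent-side: turn a hint into a narrowed search target instead of a
// whole-codebase grep.
function searchScope(hint?: SourceHint): string {
  if (!hint) return "**/*"; // no hint: fall back to full-project search
  if (!hint.lineRange) return hint.filePath; // file-level hint
  const [start, end] = hint.lineRange;
  return `${hint.filePath}:${start}-${end}`;
}
```

Even the file-level hint alone collapses the agent's search space from the whole repository to one file, which is where most of the win lives.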
Framework Adapter Init CLI (`npx review-loop init`)
Consensus: 2/3 STRONG YES (agent, product) · 1/3 NO (reviewer) · Complexity: Medium
Problem: Setting up Review Loop requires reading docs, choosing the right adapter, editing config files, and adding `.mcp.json`. Each framework has different steps. This friction discourages adoption, especially for quick evaluations.
Proposal: Interactive CLI that detects the framework (`astro.config.*`, `vite.config.*`, etc.), generates correct integration code, adds `.mcp.json`, and optionally adds `inline-review.json` to `.gitignore`. Ships as a separate bin entry, not runtime code.
Why it survived: First-run experience determines whether someone adopts a tool. The reviewer perspective didn't see value (it's not reviewer-facing), but agent and product perspectives both ranked it top 5. The "try it in 60 seconds" story is critical for growth.
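The detection step might look like this, assuming it keys off which config files exist in the project root (the extension list is illustrative):

```typescript
// Sketch of framework detection for the init CLI. Astro is checked
// first: an Astro project configures Vite through astro.config, so the
// more specific framework must win.
function detectFramework(rootFiles: string[]): "astro" | "vite" | "unknown" {
  const has = (re: RegExp) => rootFiles.some((f) => re.test(f));
  if (has(/^astro\.config\.(js|mjs|cjs|ts|mts)$/)) return "astro";
  if (has(/^vite\.config\.(js|mjs|cjs|ts|mts)$/)) return "vite";
  return "unknown"; // fall back to asking the user interactively
}
```

Returning `"unknown"` rather than guessing keeps the CLI honest: the interactive prompt handles the long tail of setups.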
Tier 3 — Supported (1/3 STRONG YES, in multiple top-8 lists)
Features with clear value but lower urgency or narrower scope.
Batch Accept/Reopen for Addressed Annotations
Consensus: 1/3 STRONG YES (reviewer) · 2/3 MAYBE (agent, product) · Complexity: Medium
Problem: When an agent addresses 8+ annotations at once, the reviewer must click into each individually to accept or reopen — tedious and slow.
Proposal: "Review All (N)" button that steps through addressed annotations sequentially with Accept/Reopen/Skip actions, auto-scrolling to each highlight. Optional "Accept All" with confirmation for when the reviewer has already verified everything.
Why it survived: Directly addresses the reviewer bottleneck once an agent is productive. Two agents noted it's quality-of-life rather than essential, but the reviewer advocate made a compelling case that this becomes critical at scale.
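The queue behind a "Review All (N)" button reduces to stepping through the addressed items and applying one verdict each. A sketch (status strings and names are illustrative):

```typescript
type Verdict = "accept" | "reopen" | "skip";

// Sequential review queue: "skip" leaves the item addressed so it
// reappears on the next pass; "Accept All" is just decide = () => "accept"
// behind a confirmation dialog.
function applyVerdicts(
  addressed: { id: string; status: string }[],
  decide: (id: string) => Verdict,
): { id: string; status: string }[] {
  for (const ann of addressed) {
    const v = decide(ann.id);
    if (v === "accept") ann.status = "accepted";
    else if (v === "reopen") ann.status = "open";
    // "skip": status stays "addressed"
  }
  return addressed;
}
```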
Cmd+Enter to Submit Annotations
Consensus: 1/3 STRONG YES (reviewer) · 2/3 MAYBE (agent, product) · Complexity: Low
Problem: The annotation popup has no keyboard shortcut to submit. Reviewers must tab to Save or reach for the mouse — breaking flow on every single annotation.
Proposal: `Cmd+Enter` (Mac) / `Ctrl+Enter` (Windows/Linux) to submit. Subtle hint below the textarea. Standard UX convention whose absence is surprising.
Why it survived: Highest frequency-to-effort ratio. Every annotation creation hits this friction. Approximately 20 minutes to implement. All agents agreed it should be done; debate was only about priority relative to larger features.
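The shortcut check is a one-liner; keeping it as a pure predicate makes it testable without a DOM (names here are illustrative):

```typescript
// In a real keydown listener, e would be a KeyboardEvent; metaKey covers
// Cmd on macOS and ctrlKey covers Ctrl on Windows/Linux.
interface KeyInfo {
  key: string;
  metaKey: boolean;
  ctrlKey: boolean;
}

function isSubmitShortcut(e: KeyInfo): boolean {
  return e.key === "Enter" && (e.metaKey || e.ctrlKey);
}
```

Wired up as something like `textarea.addEventListener("keydown", (e) => { if (isSubmitShortcut(e)) save(); })`, with `e.preventDefault()` to stop a stray newline.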
`create_annotation` MCP Tool (agent-initiated annotations)
Consensus: 1/3 STRONG YES (agent) · 2/3 MAYBE (reviewer, product) · Complexity: Low
Problem: Agents can only respond to annotations — they can't proactively flag issues. If an agent notices a problem while working (accessibility issue, broken link, inconsistency), it has no way to communicate back through Review Loop.
Proposal: New MCP tool that lets agents create annotations programmatically. Parameters: `pageUrl`, `note`, `type`, optional selectors. Agent-created annotations appear in the panel with a distinct visual indicator. The REST API already supports `POST /annotations` — this is primarily wiring into MCP.
Why it survived: Turns the one-way reviewer-to-agent flow into bidirectional communication. Near-zero implementation cost since the REST endpoint exists. The reviewer advocate noted the UI needs to handle "surprise" annotations appearing, and the product strategist flagged the mental model inversion as worth careful design.
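Since the wiring is mostly request construction, the tool handler's core can be sketched as a pure builder; the port and the `agentCreated` flag are placeholders, with `agentCreated` standing in for whatever marker lets the panel render the distinct indicator:

```typescript
interface CreateAnnotationParams {
  pageUrl: string;
  note: string;
  type: "text" | "element";
  selectors?: string[];
}

// Builds the request the MCP tool handler would send to the existing
// REST endpoint; the handler itself would just fetch(url, init).
function buildCreateAnnotationRequest(
  params: CreateAnnotationParams,
  baseUrl = "http://localhost:5173",
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${baseUrl}/annotations`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ...params, agentCreated: true }),
    },
  };
}
```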
Honourable Mentions — Interesting but Polarising
These features had strong support from one perspective but were rejected or questioned by others. Worth revisiting as the product matures.
| Feature | Champion | Support | Opposition | Notes |
| --- | --- | --- | --- | --- |
| Multi-User Identity | Product | "Table-stakes for teams" | Reviewer + Agent: "Complexity explosion for a single-reviewer tool" | Revisit when multi-reviewer usage is validated |
| CI Annotation Gate | Product | "Makes Review Loop infrastructure" | Reviewer + Agent: "Conflates conversational annotations with CI gates" | Could work as a companion package |
| Batch MCP Operations | Agent | "16 round-trips to 2" | Reviewer: "Zero reviewer impact"; Product: "Convenience optimisation" | Revisit if latency becomes a reported issue |
| Preserve Popup Across HMR | Reviewer | "Data-loss bug in core use case" | Product: "Edge case"; Agent: "Narrow" | Engineering plan already exists; do when convenient |
| Agent Activity Feed | Reviewer | "Visibility into what agent did" | All: "Risks over-engineering; replies already carry this" | Could be a lightweight panel enhancement |
Rejected Features
These were proposed but rejected by 2+ perspectives during debate:
- Dry-Run / Preview Mode (3/3 NO) — over-engineers what git undo + reviewer reopen already solves
- Cursor/Copilot Adapters (3/3 NO) — MCP adoption is accelerating; bespoke adapters are building for a shrinking gap
- Resizable Panel (2/3 NO) — CSS polish masquerading as a feature
- `.reviewlooprc` Config File (2/3 NO) — violates the zero-config principle prematurely
- `get_page_context` MCP Tool (2/3 NO) — duplicates what agents can derive from `list_annotations`
- Notification Sounds (2/3 NO) — browser notifications are intrusive; status changes are sufficient signal
How to Use This Issue
React to features you want to see prioritised:
- 👍 = "I want this"
- 🚀 = "This would be transformative"
- 👎 = "This is bloat"
Comment with additional context, use cases, or alternative proposals. This issue serves as the living roadmap for Review Loop feature development.