
feat: MCP tool parity — 6 new tools + fix schema refs #36

Merged
chitcommit merged 2 commits into main from feat/mcp-tool-parity on Mar 16, 2026
Conversation


@chitcommit chitcommit commented Mar 16, 2026

Summary

  • 6 new MCP tools: query_legal_deadlines, query_documents, get_payment_plan, query_revenue_sources, get_sync_status, trigger_sync — closing coverage gaps between REST API routes and MCP tool surface
  • Fixed get_schema_refs: Now returns all 29 endpoint groups and 16 db tables (was 6/12)
  • Tool count: 32 → 38, test assertion updated, CLAUDE.md updated

Test plan

  • TypeScript compiles clean (tsc --noEmit)
  • All 15 tests pass (vitest run)
  • Tool count assertion updated to toBe(38)
  • Verify new tools respond correctly in production MCP client

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Expanded tool catalog from 32 to 38 tools across multiple domains.
    • Added financial capabilities: revenue source analysis and payment plan retrieval.
    • Added legal domain: legal deadline tracking and queries.
    • Added document management: document queries with type and status filtering.
    • Added synchronization tools: sync status monitoring and manual trigger functionality.
  • Tests

    • Updated tool-list test to expect the new total of 38 tools.

Add query_legal_deadlines, query_documents, get_payment_plan,
query_revenue_sources, get_sync_status, trigger_sync to close
coverage gaps between REST API routes and MCP tool surface.

Fix get_schema_refs to list all 29 endpoint groups and 16 db tables
instead of the previous incomplete 6/12 hardcoded lists.

Total MCP tools: 32 → 38. All tests pass (15/15).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
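As a rough sketch of the expanded get_schema_refs response described in the commit message (the entries shown are placeholders, not the actual 29 endpoint groups or 16 tables):

```typescript
// Hypothetical shape of the expanded get_schema_refs response.
// Only a few table names cited elsewhere in this PR are shown;
// the rest of the lists are elided placeholders.
function getSchemaRefs(): { endpoints: string[]; db_tables: string[] } {
  const endpoints = [
    "finance",
    "ledger",
    "sync",
    // ...the remaining endpoint groups
  ];
  const dbTables = [
    "cc_sync_log",
    "cc_documents",
    "cc_payment_plans",
    "cc_legal_deadlines",
    "cc_disputes",
    // ...the remaining tables
  ];
  return { endpoints, db_tables: dbTables };
}
```

The point of the fix is that these lists are generated complete rather than left as the previous hardcoded 6/12 subsets.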
@github-actions

  1. @coderabbitai review
  2. @copilot review
  3. @codex review
  4. @claude review
    Adversarial review request: evaluate security, policy bypass paths, regression risk, and merge-gating bypass attempts.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.


coderabbitai bot commented Mar 16, 2026

Caution

Review failed

Pull request was closed or merged during review

📝 Walkthrough

Walkthrough

The MCP server tool catalog grows from 32 to 38 tools by adding six new tools (query_legal_deadlines, query_documents, get_payment_plan, query_revenue_sources, get_sync_status, trigger_sync). Server-side handlers, SQL logic, and schema refs/endpoints were extended; tests and docs updated to reflect the new catalog.

Changes

Cohort / File(s) Summary
MCP Tool Catalog & Execution
src/routes/mcp.ts
Added six new tools to TOOLS and implemented executeTool handlers: query_legal_deadlines, query_documents, get_payment_plan, query_revenue_sources, get_sync_status, trigger_sync. Extended get_schema_refs endpoints and db_tables lists. New handlers include input normalization, SQL queries, limit capping, aggregations, JSON field parsing, and validation/insert for sync triggers.
Documentation
CLAUDE.md
Updated descriptive sections to reflect expanded tool set (32 → 38) and added the new domains/tools.
Tests
tests/mcp.test.ts
Adjusted MCP tools/list test to expect exactly 38 tools (replaced previous minimum-bound assertion).
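A minimal sketch of the catalog/handler pairing the walkthrough describes — the tool shape, column names, and DB stub below are assumptions for illustration, not the project's actual code:

```typescript
// Hypothetical shape of a TOOLS entry and its executeTool switch case.
type Tool = { name: string; description: string; inputSchema: object };

const TOOLS: Tool[] = [
  {
    name: "query_documents",
    description: "Query documents with optional type/status filters",
    inputSchema: {
      type: "object",
      properties: {
        type: { type: "string" },
        status: { type: "string" },
        limit: { type: "number", description: "Max rows (capped)" },
      },
    },
  },
  // ...remaining 37 tools
];

// Stand-in for the real SQL client so this sketch is self-contained.
async function sql(
  _query: string,
  _params: unknown[],
): Promise<Record<string, unknown>[]> {
  return [];
}

async function executeTool(name: string, input: Record<string, unknown>) {
  switch (name) {
    case "query_documents": {
      // Normalize and cap the limit, as the walkthrough describes.
      const limit = Math.min(Number(input.limit ?? 50), 100);
      const rows = await sql(
        // Column names here are assumptions, not the real schema.
        "SELECT * FROM cc_documents WHERE ($1::text IS NULL OR type = $1) LIMIT $2",
        [input.type ?? null, limit],
      );
      return { count: rows.length, rows };
    }
    default:
      return { isError: true, message: `Unknown tool: ${name}` };
  }
}
```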

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client as Client
  participant Server as MCP Server
  participant DB as Database
  Client->>Server: request executeTool(toolName, input)
  Server->>Server: validate & normalize input
  alt query_* or get_* (read)
    Server->>DB: SELECT ... (filtered, limited, aggregated)
    DB-->>Server: rows/result
    Server->>Client: return count/data/aggregates
  else trigger_sync (write)
    Server->>Server: validate source against allowed list
    Server->>DB: INSERT INTO cc_sync_log (source, type, status, meta)
    DB-->>Server: insert confirmation
    Server->>Client: return acknowledgement { source }
  end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I nibbled through schemas, found six new delights,

Legal, Docs, Sync — and revenue lights.
Queries hop, a payment plan in tow,
Syncs triggered, logs lined in a row.
Thirty-eight tools — a cheerful warren’s glow!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title accurately summarizes the main change — adding 6 new MCP tools and fixing schema references — which matches the additions across the modified files.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/routes/mcp.ts`:
- Around lines 853–872: the SELECT in the 'get_payment_plan' case references a non-existent revenue_summary column on cc_payment_plans and then attempts to JSON.parse it. Remove revenue_summary from the SELECT list and from the JSON-parsing loop over ['schedule', 'warnings', 'revenue_summary'] (keep only the existing JSON columns 'schedule' and 'warnings') so the plan object and subsequent logic no longer expect revenue_summary.
- Around lines 901–911: the INSERT in the 'trigger_sync' case omits the required sync_type column of cc_sync_log. Add sync_type to the column list and set it to 'manual' (matching other inserts, such as in src/routes/sync.ts) to avoid the NOT NULL constraint violation.
- Around lines 821–836: the query in the 'query_legal_deadlines' case references a non-existent d.dispute_id column. Read dispute_id from the metadata JSONB instead: SELECT (d.metadata->>'dispute_id') AS dispute_id (cast to UUID if needed), change the LEFT JOIN to ON (d.metadata->>'dispute_id')::uuid = disp.id (or compare strings if disp.id is text), and filter with (${disputeId}::text IS NULL OR (d.metadata->>'dispute_id') = ${disputeId}). Keep the existing status handling and limit logic (symbols: case 'query_legal_deadlines', d.metadata, cc_legal_deadlines, cc_disputes, status, disputeId, rows).
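Taken together, the corrected statements might look like the sketch below; column names beyond those cited in the comments, and the use of positional parameters rather than the project's tagged-template client, are assumptions:

```typescript
// Corrected 'query_legal_deadlines' SQL: dispute_id comes from the
// metadata JSONB, since no d.dispute_id column exists.
const queryLegalDeadlinesSql = `
  SELECT d.*,
         (d.metadata->>'dispute_id') AS dispute_id
  FROM cc_legal_deadlines d
  LEFT JOIN cc_disputes disp
    ON (d.metadata->>'dispute_id')::uuid = disp.id
  WHERE ($1::text IS NULL OR d.metadata->>'dispute_id' = $1)
    AND ($2::text IS NULL OR d.status = $2)
  LIMIT $3
`;

// Corrected 'trigger_sync' INSERT: includes the NOT NULL sync_type
// column, set to 'manual' as in src/routes/sync.ts. Other columns
// are omitted for brevity.
const triggerSyncInsertSql = `
  INSERT INTO cc_sync_log (source, sync_type, status)
  VALUES ($1, 'manual', 'triggered')
`;
```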

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 05db0aea-08d3-40a6-bb3f-7c08125e9114

📥 Commits

Reviewing files that changed from the base of the PR and between d2187fa and 81fcea3.

📒 Files selected for processing (3)
  • CLAUDE.md
  • src/routes/mcp.ts
  • tests/mcp.test.ts

- query_legal_deadlines: use metadata->>'dispute_id' (no column exists)
- get_payment_plan: remove non-existent revenue_summary column
- trigger_sync: include required sync_type column in INSERT

Addresses CodeRabbit review findings on PR #36.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@github-actions
Copy link

  1. @coderabbitai review
  2. @copilot review
  3. @codex review
  4. @claude review
    Adversarial review request: evaluate security, policy bypass paths, regression risk, and merge-gating bypass attempts.

@coderabbitai
Copy link

coderabbitai bot commented Mar 16, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Read the full mcp.ts file to perform adversarial review
fd -e ts mcp.ts src/ | head -5
cat src/routes/mcp.ts

Length of output: 46853


🏁 Script executed:

#!/bin/bash
# Read the test file
fd -e ts mcp.test.ts | head -5
cat tests/mcp.test.ts

Length of output: 11149


🏁 Script executed:

#!/bin/bash
# Check schema migrations / db init files for cc_sync_log id type
fd -e sql -e ts -e js migration | head -20
fd -e sql | head -20
# Look for cc_sync_log table definition anywhere
rg -n "cc_sync_log" --type-add 'sql:*.sql' -t sql -t ts -t js -A5 -B2

Length of output: 8963


@chitcommit — adversarial review of PR #36 follows. Six issues were identified ranging from critical correctness bug to medium-severity security gaps.


Adversarial Review — PR #36 (feat/mcp-tool-parity)

🔴 CRITICAL — get_sync_status: MAX(id) on a UUID primary key returns wrong results

cc_sync_log.id is confirmed to be UUID PRIMARY KEY DEFAULT gen_random_uuid() (migration 0003, src/db/schema.ts Line 286). UUIDs are random — MAX(uuid) returns the lexicographically largest UUID, not the most recent row. This means get_sync_status silently returns an arbitrary row per source, not the latest sync.

The correct query already exists in src/routes/sync.ts (Line 13–14):

SELECT DISTINCT ON (source) source, sync_type, status, ...
FROM cc_sync_log ORDER BY source, started_at DESC

Fix get_sync_status to match:

case 'get_sync_status': {
  const rows = await sql`
    SELECT DISTINCT ON (source) source, status, records_synced, started_at, completed_at, error_message
    FROM cc_sync_log
    ORDER BY source, started_at DESC
  `;
  return { sources: rows };
}

🔴 HIGH — trigger_sync: phantom log entry, no actual sync dispatched

The REST handler at src/routes/sync.ts inserts status='started', calls real dispatcher functions, and updates the log to 'completed'/'error'. The MCP trigger_sync only inserts status='triggered', records_synced=0 and never dispatches anything. This:

  1. Pollutes cc_sync_log with zombie rows that are never updated.
  2. Causes get_sync_status to surface 'triggered' as the "latest" status for a source that may have since synced cleanly via REST/cron.
  3. The success message "Sync triggered for {source}" is false — no sync occurs.

Either delegate to the real sync dispatcher (similar to the REST route), or rename/describe the tool as log_sync_request and update its description accordingly. The tool as written is a misleading state mutation with no effect.
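A minimal sketch of the delegation option, mirroring the started/completed/error flow the REST route reportedly follows; the dispatcher table and its signature are assumptions, not the project's actual sync API:

```typescript
type SyncResult = { recordsSynced: number };

// Hypothetical per-source dispatcher table; the real dispatchers
// live behind src/routes/sync.ts.
const dispatchers: Record<string, () => Promise<SyncResult>> = {
  plaid: async () => ({ recordsSynced: 0 }),
  // ...other allowed sources
};

async function triggerSync(source: string) {
  const dispatch = dispatchers[source];
  if (!dispatch) throw new Error(`Unknown sync source: ${source}`);
  // In the real route this would bracket the dispatch with
  // status='started' and a final 'completed'/'error' log update,
  // so get_sync_status never sees a dangling 'triggered' row.
  try {
    const result = await dispatch();
    return {
      source,
      status: "completed" as const,
      recordsSynced: result.recordsSynced,
    };
  } catch (err) {
    return { source, status: "error" as const, error: String(err) };
  }
}
```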


🔴 HIGH — Scopes never enforced on write operations

caller.scopes is passed into executeTool but is never read anywhere in the 900-line switch block. Write-capable tools — trigger_sync, update_task_status, verify_task, ledger_record_custody, ledger_create_case_for_dispute, ledger_link_case_for_dispute — are fully accessible to any authenticated MCP client, regardless of their authorized scope set. This is a missing authorization enforcement layer. At minimum, write tools should gate on an explicit scope check:

// Example guard for mutating tools
const WRITE_TOOLS = new Set(['trigger_sync', 'update_task_status', 'verify_task', 'ledger_record_custody', ...]);
if (WRITE_TOOLS.has(toolName) && !caller.scopes.includes('write')) {
  throw new Error('Insufficient scope for write operation');
}

🟡 MEDIUM — query_legal_deadlines: unguarded ::uuid cast causes hard DB error

LEFT JOIN cc_disputes disp ON (d.metadata->>'dispute_id')::uuid = disp.id

If any row in cc_legal_deadlines has a metadata.dispute_id that is not a valid UUID (e.g., "N/A", "", or any free-form string), PostgreSQL raises invalid_text_representation and the entire tool call fails with isError:true. The tool becomes permanently broken for all callers until the corrupted row is manually fixed. Use NULLIF + a safe cast or filter upstream:

LEFT JOIN cc_disputes disp
  ON d.metadata->>'dispute_id' ~* '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
 AND (d.metadata->>'dispute_id')::uuid = disp.id

🟡 MEDIUM — trigger_sync: inputSchema description/server allowlist mismatch

The inputSchema description lists "plaid, finance, ledger, scrape, mercury" (5 sources). The server-side validSources array allows ['plaid', 'finance', 'ledger', 'scrape', 'mercury', 'books', 'assets'] (7 sources). AI clients materializing tool calls from the schema will not know books and assets are valid. Either add them to the schema description, or add a formal enum to the inputSchema:

source: {
  type: 'string',
  description: 'Source to sync',
  enum: ['plaid', 'finance', 'ledger', 'scrape', 'mercury', 'books', 'assets'],
},

🟡 MEDIUM — ledger_record_custody: performedBy hardcoded, breaks audit trail

const payload = { evidenceId, action, performedBy: 'mcp-client', ... };

caller.userId is available. Using a hardcoded string makes custody records non-attributable and defeats the purpose of the audit log. Replace with caller.userId.


🟠 REGRESSION RISK — toBe(38) is a fragile merge gate

Changing from toBeGreaterThanOrEqual(28) to toBe(38) means any future tool addition breaks CI without a descriptive failure. Consider:

expect(tools.length).toBeGreaterThanOrEqual(38); // floor, not ceiling

Or, add a separate snapshot/catalog parity test that also asserts every TOOLS entry has a matching executeTool switch case — currently a tool with no handler silently returns isError:true with zero test coverage for that path.
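A minimal parity check along those lines; TOOLS and the handled-name set here are stand-ins for the real exports in src/routes/mcp.ts:

```typescript
// Stand-ins for the real TOOLS catalog and the set of names handled
// by the executeTool switch block.
const TOOLS = [
  { name: "query_documents" },
  { name: "trigger_sync" },
  { name: "get_sync_status" },
];
const handledToolNames = new Set([
  "query_documents",
  "trigger_sync",
  "get_sync_status",
]);

// Every catalogued tool must have a matching handler; a tool with no
// handler would otherwise silently return isError:true at runtime.
function unhandledTools(): string[] {
  return TOOLS.map((t) => t.name).filter((n) => !handledToolNames.has(n));
}
```

In a vitest suite this would be a single assertion, e.g. `expect(unhandledTools()).toEqual([])`, giving a descriptive failure naming any tool that lacks a handler.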


Summary table

# Severity Finding
1 🔴 Critical get_sync_status — MAX(uuid) returns random row, not latest
2 🔴 High trigger_sync — phantom log entry, no sync dispatched, misleading response
3 🔴 High caller.scopes never enforced; all write tools bypass scope checks
4 🟡 Medium query_legal_deadlines — unguarded ::uuid cast corrupts tool on bad data
5 🟡 Medium trigger_sync — schema/server allowlist mismatch (books, assets undocumented)
6 🟡 Medium ledger_record_custody — performedBy hardcoded, audit trail broken
7 🟠 Risk toBe(38) exact count is a brittle merge gate; no handler-parity test

Items 1–3 are merge blockers per repository policy. Items 4–6 should be addressed before merge given the legal/financial data sensitivity of this service.


🧠 Learnings used
Learnt from: CR
Repo: chittyos/chittycommand PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-04T04:42:57.380Z
Learning: Resolve must-fix review comments (security, correctness, compliance, merge blockers) before merge

Learnt from: CR
Repo: chittyos/chittycommand PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-03-09T16:02:20.597Z
Learning: Applies to src/routes/mcp.ts : Implement MCP server for Codex integration with 28 tools across 8 domains (Core meta, Financial, Ledger, Connect, ChittyChat, Schema, Cert, Register) in `src/routes/mcp.ts`, returning structured JSON using MCP `content: [{ type: "json", json: ... }]` format where applicable

@chitcommit chitcommit merged commit cda56a1 into main Mar 16, 2026
14 of 15 checks passed
@chitcommit chitcommit deleted the feat/mcp-tool-parity branch March 16, 2026 12:45
