
feat(mcp): add evaluate_responses tool for questionnaire quality assessment#11

Merged
HerrLoesch merged 2 commits into main from copilot/create-evaluate-responses-function
Mar 17, 2026
Conversation

Contributor

Copilot AI commented Mar 17, 2026

Adds a new evaluate_responses MCP tool that quantifies the response quality of a single questionnaire, returning consistency, completeness, and a list of detected anomalies.

New tool: evaluate_responses

Input: questionnaire_id (string — ID or name)

Output JSON:

{
  "consistency_score": 0.85,
  "completeness_%": 92.5,
  "warnings": [
    "Missing mandatory metadata field: 'productName'.",
    "Technology 'React' appears with inconsistent statuses across the questionnaire: Adopt, Hold.",
    "Technology 'Redis' has status 'Hold' in entry 'Backend Caching' without an explanatory comment."
  ]
}

Scoring logic

  • completeness_% = (filled metadata fields + entries with ≥1 answer) / (6 mandatory metadata fields + total entries) × 100
  • consistency_score = fraction of unique technology names whose status is uniform across all their appearances in the questionnaire (0.0–1.0)
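The two formulas can be sketched as follows (illustrative Python, not the repository's C# implementation in ProjectRepository.cs; the metadata/entry shapes and the fixed count of 6 mandatory fields are assumptions for the sketch):

```python
MANDATORY_FIELDS = 6  # assumption: six mandatory metadata fields, per the formula above

def evaluate_scores(metadata: dict, entries: list) -> dict:
    # completeness_%: filled metadata fields plus entries with at least one
    # answer, over (6 mandatory fields + total entries), scaled to 0-100
    filled = sum(1 for v in metadata.values() if v)
    answered = sum(1 for e in entries if e.get("answers"))
    completeness = (filled + answered) / (MANDATORY_FIELDS + len(entries)) * 100

    # consistency_score: fraction of unique technology names whose status is
    # identical across every appearance in the questionnaire
    statuses: dict[str, set] = {}
    for entry in entries:
        for answer in entry.get("answers", []):
            statuses.setdefault(answer["technology"], set()).add(answer["status"])
    uniform = sum(1 for s in statuses.values() if len(s) == 1)
    consistency = uniform / len(statuses) if statuses else 1.0

    return {"consistency_score": consistency, "completeness_%": round(completeness, 1)}
```

For example, with 5 of 6 metadata fields filled and 2 answered entries, completeness is (5 + 2) / (6 + 2) × 100 = 87.5; if one of two technologies carries conflicting statuses, consistency is 0.5.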

Detected warnings

| Condition | Warning emitted |
| --- | --- |
| Null/empty mandatory metadata field | Missing field name |
| Entry with zero answers | Entry ID + category |
| Same technology listed multiple times in one entry | Duplicate or conflicting statuses |
| Same technology with different statuses across entries | Cross-entry conflict |
| Hold/Retire status with no comment | Missing justification |
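The checks in the table can be sketched as one pass over the questionnaire (illustrative Python, with assumed key names; the actual C# logic lives in ProjectRepository.cs, and the warning wording here only approximates the examples above):

```python
def collect_warnings(metadata: dict, entries: list) -> list[str]:
    warnings = []
    # 1. Null/empty mandatory metadata field -> name the missing field
    for field, value in metadata.items():
        if not value:
            warnings.append(f"Missing mandatory metadata field: '{field}'.")
    cross: dict[str, set] = {}  # technology -> statuses across all entries
    for entry in entries:
        answers = entry.get("answers", [])
        # 2. Entry with zero answers -> report entry ID and category
        if not answers:
            warnings.append(
                f"Entry '{entry['id']}' in category '{entry['category']}' has no answers.")
        per_entry: dict[str, list] = {}  # technology -> statuses within this entry
        for a in answers:
            tech, status = a["technology"], a["status"]
            per_entry.setdefault(tech, []).append(status)
            cross.setdefault(tech, set()).add(status)
            # 5. Hold/Retire with no comment -> missing justification
            if status in ("Hold", "Retire") and not a.get("comment"):
                warnings.append(
                    f"Technology '{tech}' has status '{status}' in entry "
                    f"'{entry['id']}' without an explanatory comment.")
        # 3. Same technology listed multiple times in one entry
        for tech, statuses in per_entry.items():
            if len(statuses) > 1:
                warnings.append(
                    f"Technology '{tech}' appears {len(statuses)} times in entry "
                    f"'{entry['id']}' with statuses: {', '.join(statuses)}.")
    # 4. Same technology with different statuses across entries
    for tech, statuses in cross.items():
        if len(statuses) > 1:
            warnings.append(
                f"Technology '{tech}' appears with inconsistent statuses across "
                f"the questionnaire: {', '.join(sorted(statuses))}.")
    return warnings
```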

Changes

  • WorkspaceQueryModels.cs — new EvaluateResponsesResult record
  • ProjectRepository.cs — new EvaluateResponses(questionnaireId) method
  • McpSessionManager.cs — tool registered in tools/list, dispatched in BuildToolCallResponseAsync, handler BuildEvaluateResponsesResponse
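For context, the entry registered in tools/list would look roughly like the following MCP tool descriptor (a sketch: only the tool name and the questionnaire_id string parameter come from this PR; the description text and schema wording are assumptions):

```json
{
  "name": "evaluate_responses",
  "description": "Evaluate the response quality of a single questionnaire, returning consistency, completeness, and detected anomalies.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "questionnaire_id": {
        "type": "string",
        "description": "ID or name of the questionnaire"
      }
    },
    "required": ["questionnaire_id"]
  }
}
```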


…essment

- Adds EvaluateResponsesResult model (ConsistencyScore, CompletenessPercentage, Warnings)
- Adds EvaluateResponses(questionnaireId) method to ProjectRepository
  - Completeness: ratio of filled mandatory metadata fields + answered entries (0-100 %)
  - Consistency: ratio of technologies with a single status across all appearances (0.0-1.0)
  - Warnings: missing metadata, unanswered entries, duplicate/conflicting statuses,
    Hold/Retire answers without explanatory comments
- Registers evaluate_responses in McpSessionManager tool list and dispatch handler
- Returns JSON object: { consistency_score, completeness_%, warnings }

Co-authored-by: HerrLoesch <655110+HerrLoesch@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add evaluate_responses function for response quality assessment" to "feat(mcp): add evaluate_responses tool for questionnaire quality assessment" Mar 17, 2026
Copilot AI requested a review from HerrLoesch March 17, 2026 14:56
HerrLoesch marked this pull request as ready for review March 17, 2026 14:58
HerrLoesch merged commit 2b26c75 into main Mar 17, 2026
1 check passed
HerrLoesch deleted the copilot/create-evaluate-responses-function branch March 17, 2026 15:01