116 MCP tools for UK government IPA Gate Review assurance. Connects Claude to risk registers, earned value, benefits realisation, gate readiness, and IPA benchmarks.
Updated Apr 21, 2026 - Python
The team focused on standardising the capture and reporting of lessons learned from MOD Gateway Reviews by creating a structured lessons dataset and Power BI ingestion flow. Their work demonstrates how consistent data schemas, Microsoft Forms, and Power BI automation can turn assurance outputs into a repeatable, analysable Lessons Library.
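A structured lessons dataset of the kind described could look like the sketch below: one typed record per lesson, serialised to JSON for an ingestion flow such as Power BI's. The field names and example values here are illustrative assumptions, not the team's actual schema.

```python
# Minimal sketch of one record in a structured Lessons Library.
# Field names are invented for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class LessonRecord:
    review_id: str       # Gateway Review the lesson came from
    gate: str            # e.g. "Gate 0" through "Gate 5"
    theme: str           # categorisation used for filtering and reporting
    lesson: str          # the lesson text itself
    recommendation: str  # associated recommended action

record = LessonRecord(
    review_id="GW-2024-017",
    gate="Gate 2",
    theme="Benefits management",
    lesson="Benefits profiles were not baselined before the gate.",
    recommendation="Baseline benefits profiles ahead of Gate 2.",
)

# Serialise to JSON, a shape an automated ingestion flow could consume.
print(json.dumps(asdict(record), indent=2))
```

Keeping every lesson in one consistent schema is what makes the library analysable downstream, regardless of which review it came from.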
Hack25 is a collaborative hackathon-style event focused on rapid experimentation, problem-solving and practical innovation across data, AI and modern digital tooling. Teams explore defined challenges, prototype solutions and share learnings within a short, delivery‑driven format.
The team designed a context-aware Lessons SME Agent that builds on an existing Lessons Library to deliver targeted, actionable insights from historic MOD Gateway Reviews. Their work shows how semantic retrieval and persona-driven prompts can surface the most relevant lessons and recommended actions for different roles and project phases.
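The retrieve-then-prompt pattern behind such an agent can be sketched in a few lines. This toy version uses bag-of-words cosine similarity in place of a real embedding model, and the lessons, query, and persona wording are all invented examples.

```python
# Illustrative sketch of semantic retrieval over a lessons corpus,
# followed by a persona-driven prompt. A real agent would use dense
# embeddings rather than bag-of-words vectors.
from collections import Counter
import math

def vectorise(text: str) -> Counter:
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

lessons = [
    "Commercial strategy was agreed too late to inform the business case.",
    "Risk registers were not reviewed at programme board.",
    "Benefits owners were unclear on measurement baselines.",
]

query = "late commercial strategy business case"
qv = vectorise(query)
best = max(lessons, key=lambda lesson: cosine(vectorise(lesson), qv))

# A persona-driven prompt then frames the retrieved lesson for a role
# and project phase (wording here is hypothetical).
prompt = f"As an SRO preparing for Gate 2, act on this lesson: {best}"
print(prompt)
```

Swapping the persona line while keeping the retrieval step fixed is how the same lesson can be surfaced differently for, say, an SRO versus a project delivery lead.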
The team developed a scalable Lessons Library pipeline that ingests historic MOD Gateway Review documents and converts them into a large, structured lessons dataset. Their solution focuses on high‑volume extraction, semantic classification, and sentiment analysis to rapidly surface reusable lessons for assurance and organisational learning.
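The classify-and-score step in a pipeline like this can be illustrated with a deliberately simple keyword version. The theme labels and keyword lists below are invented; a production pipeline would use an NLP model for both classification and sentiment.

```python
# Toy sketch of semantic classification plus coarse sentiment scoring
# for extracted lessons. Keywords and labels are illustrative only.
NEGATIVE_WORDS = {"delayed", "unclear", "missing", "failed", "late"}
THEME_KEYWORDS = {"benefits": "Benefits", "risk": "Risk", "schedule": "Schedule"}

def classify(lesson: str) -> dict:
    words = set(lesson.lower().split())
    theme = next(
        (label for keyword, label in THEME_KEYWORDS.items() if keyword in words),
        "Other",
    )
    sentiment = "negative" if words & NEGATIVE_WORDS else "neutral"
    return {"lesson": lesson, "theme": theme, "sentiment": sentiment}

print(classify("Benefits tracking was unclear at the gate"))
```

Running this over thousands of extracted lessons is what turns a document archive into a dataset that can be filtered by theme and tone.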
PEAT Document Assessment System developed an interactive assurance evidence assessment solution that applies large language models to analyse project documentation, score maturity, and surface assurance evidence and gaps aligned to recognised governance frameworks.
MoD Assurance Assessment Build delivered a data-driven assurance assessment build that automates evaluation of project documents against GovS 002 criteria, producing structured ratings, scores, and commentary at scale.
Local RAG Assurance Engine delivered a fully local, offline-capable assurance analysis engine using retrieval‑augmented generation (RAG) to identify and surface evidence from project documentation and return structured, machine‑readable outputs.
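The retrieve-then-structure pattern behind a local RAG engine can be sketched as follows: find the document chunk most relevant to an assurance criterion, then emit a machine-readable finding. The chunk text, criterion wording, relevance score, and output schema are all invented for illustration; a real offline engine would use an on-device embedding model.

```python
# Hedged sketch of local retrieval-augmented evidence finding.
# All document content and field names below are hypothetical.
import json

def relevance(chunk: str, criterion: str) -> int:
    # Toy relevance score: count of shared lowercase words.
    return len(set(chunk.lower().split()) & set(criterion.lower().split()))

chunks = {
    "pid.docx#p4": "The senior responsible owner approved the business case in March.",
    "risk-log.xlsx#r12": "Top risks reviewed monthly by the programme board.",
}

criterion = "Is the business case approved by the senior responsible owner?"
source, evidence = max(chunks.items(), key=lambda kv: relevance(kv[1], criterion))

finding = {
    "criterion": criterion,
    "evidence_found": relevance(evidence, criterion) > 0,
    "evidence": evidence,
    "source": source,  # traceability back to the originating document
}
print(json.dumps(finding, indent=2))
```

Emitting the source reference alongside the evidence is what keeps the structured output auditable rather than a black-box answer.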
AI PEAT Evidence Tool developed an AI-assisted PEAT-style evidence assessment tool that uses structured prompts to extract assurance evidence from project documents, apply RAG ratings, and generate auditable JSON outputs for reporting and dashboards.
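Note that "RAG" here means Red/Amber/Green status ratings, not retrieval-augmented generation. A minimal sketch of rating extracted evidence and emitting an auditable JSON record might look like this; the thresholds, criterion ID, and field names are assumptions for illustration.

```python
# Sketch of converting an evidence-coverage score into a Red/Amber/Green
# rating and an auditable JSON record. Thresholds are invented.
import json
from datetime import date

def rag_rating(evidence_score: float) -> str:
    # Map a 0-1 evidence-coverage score onto a RAG status.
    if evidence_score >= 0.75:
        return "Green"
    if evidence_score >= 0.4:
        return "Amber"
    return "Red"

assessment = {
    "criterion_id": "PEAT-3.2",           # hypothetical criterion reference
    "evidence_score": 0.55,
    "rating": rag_rating(0.55),
    "assessed_on": date.today().isoformat(),
    "commentary": "Partial evidence of benefits baselining found.",
}
print(json.dumps(assessment))
```

Because every assessment lands in the same JSON shape, records can feed reports and dashboards directly without manual transcription.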
Team 1B applied structured prompt engineering with Microsoft Copilot to automate assurance evidence identification and scoring across multiple personas, aligned to PEAT success criteria.
The team built an automated Lessons Learned Library that extracts recommendations and insights from historic MOD Gateway Review reports and turns them into a structured, searchable knowledge base. The solution combines document parsing, NLP categorisation, AI summarisation, and Power BI reporting to surface relevant lessons at project start-up.
Evidence Query Assistant demonstrated a lightweight AI-assisted evidence query approach using ChatGPT to interrogate assurance documents against defined criteria and return clear, traceable answers identifying where evidence exists or is missing.