feat: add get_release_schedule tool for direct milestone lookup #16
jeremyeder merged 5 commits into main from
Conversation
Queries both release_schedule and release_milestone tables with fuzzy matching on product, version, and milestone type. Returns past and future dates with days_away annotation. Eliminates the need for schema discovery when asking release date questions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Milestone entries from release_milestone table were missing status (past/upcoming) and days_away annotations that schedule entries already had, creating an asymmetry for LLM consumers. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add lookback_days parameter (default 30) so milestones that just passed are still visible. Previously, a code freeze from yesterday would silently disappear from results. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
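As a rough sketch of the lookback behavior described in that commit (the `annotate_milestones` helper and its row shape are hypothetical, not code from this PR):

```python
from datetime import date, timedelta

def annotate_milestones(rows, lookback_days=30, today=None):
    """Hypothetical sketch: keep upcoming milestones plus any inside the
    lookback window, annotating each with status and days_away."""
    today = today or date.today()
    cutoff = today - timedelta(days=lookback_days)
    kept = []
    for row in rows:
        d = row["event_date"]  # assumed to already be a datetime.date
        if d < cutoff:
            continue  # older than the lookback window: drop
        row["status"] = "past" if d < today else "upcoming"
        row["days_away"] = (d - today).days  # negative for past events
        kept.append(row)
    return kept
```

With `lookback_days=30`, a code freeze from yesterday stays visible with `status="past"` and `days_away=-1`, instead of silently disappearing.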
Walkthrough: Added a new MCP tool, `get_release_schedule`.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes. 🚥 Pre-merge checks: ✅ 3 passed.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@mcp_server.py`:
- Around line 522-527: The SQL ORDER BY on raw TEXT (date_start/event_date)
yields incorrect ordering for mixed date formats; after fetching rows from the
query that returns columns like release, task AS milestone, date_start,
date_finish (and similar queries at the other noted blocks), remove the ORDER BY
clause and instead sort the fetched list in Python using the existing
_parse_date() helper (e.g. rows.sort(key=lambda r:
_parse_date(r.get('date_start') or r.get('event_date')) or datetime.min)) so
dates are normalized and correctly ordered; apply the same change to the other
query sites (the blocks around the referenced lines) and ensure you handle
None/parse failures consistently.
In `@scripts/test.sh`:
- Around line 79-85: The smoke test assertion in the Python one-liner that calls
get_release_schedule() is too strict by requiring r.get('schedule_count', 0) >
0; change it to accept either schedule rows or milestone-only results by
asserting (r.get('schedule_count', 0) > 0) or (r.get('milestone_count', 0) > 0).
Update both occurrences that run the Python check (the block using
mcp_server.get_release_schedule()) so the new assertion checks schedule_count OR
milestone_count and keep the same error message or adjust it to reflect "no
schedule or milestone rows".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro Plus
Run ID: 81b8ae8f-ed22-4ac5-9d15-df7576e3e407
📒 Files selected for processing (2)
mcp_server.py, scripts/test.sh
```python
sched_where = " AND ".join(sched_conditions) if sched_conditions else "1=1"
sched_rows = conn.execute(
    f"""SELECT release, task AS milestone, date_start, date_finish
        FROM release_schedule
        WHERE {sched_where}
        ORDER BY date_start""",
    sched_params,
).fetchall()
```
Unbounded result sets can degrade tool latency and memory use.
With no filters, both queries become WHERE 1=1 and return full tables without any cap. This tool should enforce a bounded response like other endpoints that use MAX_QUERY_ROWS.
Proposed fix
```diff
-        sched_rows = conn.execute(
+        sched_rows = conn.execute(
             f"""SELECT release, task AS milestone, date_start, date_finish
                 FROM release_schedule
                 WHERE {sched_where}
-                ORDER BY date_start""",
-            sched_params,
+                ORDER BY date_start
+                LIMIT ?""",
+            [*sched_params, MAX_QUERY_ROWS],
         ).fetchall()
@@
-        mile_rows = conn.execute(
+        mile_rows = conn.execute(
             f"""SELECT product, version, event_type AS milestone, event_date
                 FROM release_milestone
                 WHERE {mile_where}
-                ORDER BY event_date""",
-            mile_params,
+                ORDER BY event_date
+                LIMIT ?""",
+            [*mile_params, MAX_QUERY_ROWS],
         ).fetchall()
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Also applies to: 542-549
```python
    f"""SELECT release, task AS milestone, date_start, date_finish
        FROM release_schedule
        WHERE {sched_where}
        ORDER BY date_start""",
    sched_params,
).fetchall()
```
Date ordering is currently incorrect for mixed date formats.
ORDER BY date_start/event_date sorts raw TEXT, but this file already acknowledges non-ISO date formats via _parse_date(). That can return misordered results while the tool promises date ordering.
Proposed fix
```diff
-        sched_rows = conn.execute(
-            f"""SELECT release, task AS milestone, date_start, date_finish
-                FROM release_schedule
-                WHERE {sched_where}
-                ORDER BY date_start""",
-            sched_params,
-        ).fetchall()
+        sched_rows = conn.execute(
+            f"""SELECT release, task AS milestone, date_start, date_finish
+                FROM release_schedule
+                WHERE {sched_where}""",
+            sched_params,
+        ).fetchall()
@@
-        mile_rows = conn.execute(
-            f"""SELECT product, version, event_type AS milestone, event_date
-                FROM release_milestone
-                WHERE {mile_where}
-                ORDER BY event_date""",
-            mile_params,
-        ).fetchall()
+        mile_rows = conn.execute(
+            f"""SELECT product, version, event_type AS milestone, event_date
+                FROM release_milestone
+                WHERE {mile_where}""",
+            mile_params,
+        ).fetchall()
@@
         for entry in schedule:
             d = entry.get("date_finish") or entry.get("date_start")
+            parsed = _parse_date(d) if d else None
+            entry["_sort_date"] = parsed
-            if d:
-                parsed = _parse_date(d)
-                if parsed:
-                    entry["status"] = "past" if parsed < today else "upcoming"
-                    entry["days_away"] = (parsed - today).days
+            if parsed:
+                entry["status"] = "past" if parsed < today else "upcoming"
+                entry["days_away"] = (parsed - today).days
@@
         for entry in milestones_list:
             d = entry.get("event_date")
-            if d:
-                parsed = _parse_date(d)
-                if parsed:
-                    entry["status"] = "past" if parsed < today else "upcoming"
-                    entry["days_away"] = (parsed - today).days
+            parsed = _parse_date(d) if d else None
+            entry["_sort_date"] = parsed
+            if parsed:
+                entry["status"] = "past" if parsed < today else "upcoming"
+                entry["days_away"] = (parsed - today).days
+
+        schedule.sort(key=lambda e: e.get("_sort_date") or date.max)
+        milestones_list.sort(key=lambda e: e.get("_sort_date") or date.max)
+        for e in schedule:
+            e.pop("_sort_date", None)
+        for e in milestones_list:
+            e.pop("_sort_date", None)
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Also applies to: 543-549, 554-570
```shell
run "get_release_schedule(all)" uv run python3 -c "
import json, sys
sys.path.insert(0, '$REPO_ROOT')
import mcp_server
r = json.loads(mcp_server.get_release_schedule())
assert r.get('schedule_count', 0) > 0, 'no schedule rows'
"
```
Smoke tests are too strict: they ignore milestone-only matches.
Both tests require schedule_count > 0, but get_release_schedule() is valid when only milestone_count is non-zero. This can create false CI failures.
Proposed fix
```diff
-r = json.loads(mcp_server.get_release_schedule())
-assert r.get('schedule_count', 0) > 0, 'no schedule rows'
+r = json.loads(mcp_server.get_release_schedule())
+assert (r.get('schedule_count', 0) + r.get('milestone_count', 0)) > 0, 'no release rows'
@@
-r = json.loads(mcp_server.get_release_schedule(product='Acme', version='1.0', milestone='freeze'))
-assert r.get('schedule_count', 0) > 0, 'no filtered rows'
+r = json.loads(mcp_server.get_release_schedule(product='Acme', version='1.0', milestone='freeze'))
+assert (r.get('schedule_count', 0) + r.get('milestone_count', 0)) > 0, 'no filtered rows'
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
Also applies to: 87-93
Summary
- New `get_release_schedule` tool with fuzzy matching on product, version, and milestone type; queries both `release_schedule` and `release_milestone` tables and returns past and future dates with status/days_away annotations
- `release_risk_summary` now includes milestones from the past 30 days via a `lookback_days` parameter (it was silently dropping all past milestones)
- New smoke tests in `scripts/test.sh`

Problem

Natural-language questions like "when is code freeze for AcmeProduct 1.0?" required 3-4 round trips:

- `release_risk_summary` silently returned nothing (the milestone was in the past)
- falling back to schema discovery via `sqlite_master`

Now it's one call: `get_release_schedule(product="acme", version="1.0", milestone="code freeze")`

Test plan

- `scripts/test.sh` passes (16/16, includes 4 new tool smoke tests)
- `get_release_schedule()` with no filters returns all milestones
- `get_release_schedule(product="Acme", version="1.0", milestone="code freeze")` returns the correct date
- `get_release_schedule(product="nonexistent")` returns an error with an available-releases hint
- `release_risk_summary()` now includes milestones from the past 30 days
- Server starts cleanly (`uv run mcp_server.py`)
- `ruff check` and `ruff format --check` pass

🤖 Generated with Claude Code