
[codex] Add non-inline PR comments MCP tool#63

Open
petems wants to merge 4 commits into master from petems/fetch-pr-comments-tool

Conversation

@petems
Owner

@petems petems commented Mar 4, 2026

This PR adds a new MCP tool, fetch_pr_non_inline_comments, to fetch pull request conversation comments from the Issues API (/issues/{pull_number}/comments) alongside existing inline review comment support.
It introduces validated Pydantic models and shared fetch-tool argument handling, plus markdown/JSON output paths and updated tool registration and docs.
The implementation includes comprehensive tests for model validation, tool listing/dispatch, output formatting, and argument passthrough, and the full quality pipeline passed (uv run ruff format ., uv run ruff check --fix ., uv run mypy ., make compile-check, uv run pytest).
This closes the gap where only inline review comments were available to MCP clients.
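The distinction between the two tools lives entirely in the endpoint: inline review comments come from the Pulls API, while conversation comments come from the Issues API. A minimal sketch of that URL mapping (the helper name and regex here are illustrative, not the PR's actual implementation):

```python
import re

def issue_comments_endpoint(pr_url: str, api_host: str = "api.github.com") -> str:
    """Map a PR URL to the Issues API comments endpoint, which serves the
    PR's conversation (non-inline) comments."""
    m = re.match(r"https://github\.com/([^/]+)/([^/]+)/pull/(\d+)$", pr_url)
    if m is None:
        raise ValueError(f"not a GitHub PR URL: {pr_url}")
    owner, repo, number = m.groups()
    return f"https://{api_host}/repos/{owner}/{repo}/issues/{number}/comments"
```

Inline review comments would use the same owner/repo/number but the `/pulls/{number}/comments` path instead.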

Summary by CodeRabbit

  • New Features

    • Added a public tool to fetch non-inline pull-request comments with pagination, retry controls, and selectable Markdown/JSON outputs.
  • Documentation

    • Added full parameter specs, return semantics, and Markdown/JSON examples for the new tool; reordered tool listing.
  • Models

    • Introduced public data types and argument schemas for non-inline comments and fetch options.
  • Tests

    • Added tests covering non-inline fetching, output validation, numeric overrides, and combined outputs.
  • Bug Fixes

    • Tightened pagination to respect max_comments precisely (partial final-page consumption).
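The pagination fix above amounts to checking the cap before each append rather than after each page, so a final page can be consumed partially. A simplified, synchronous sketch (the real code paginates asynchronously with retries):

```python
def ingest_pages(pages, max_comments: int) -> list:
    """Accumulate comments across pages, stopping exactly at max_comments.
    Checking before each append lets the final page be partially consumed."""
    all_comments: list = []
    for page in pages:
        for comment in page:
            if len(all_comments) >= max_comments:
                return all_comments  # stop mid-page and skip further paging
            all_comments.append(comment)
    return all_comments
```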

@coderabbitai
Contributor

coderabbitai Bot commented Mar 4, 2026

📝 Walkthrough

Walkthrough

Adds a new fetch_pr_non_inline_comments tool and models (NonInlineCommentModel, FetchPRNonInlineCommentsArgs), a generic REST pagination/retry fetcher, non-inline Markdown/JSON rendering, shared tool-run helpers and server wiring, documentation updates, and tests covering the new behavior and pagination adjustments.

Changes

Cohort / File(s) Summary
Documentation
README.md, docs/reference/tools.md
Added public documentation for fetch_pr_non_inline_comments (params: pr_url, output, per_page, max_pages, max_comments, max_retries), examples, and reordered tool list to include the new entry.
Data Models
src/mcp_github_pr_review/models.py
Added NonInlineCommentModel and FetchPRNonInlineCommentsArgs; introduced _BaseFetchPRCommentsArgs and made FetchPRReviewCommentsArgs inherit from it.
Server Implementation
src/mcp_github_pr_review/server.py
Introduced _fetch_pr_comments_rest_generic, fetch_pr_non_inline_comments_rest, fetch_pr_non_inline_comments (async), generate_non_inline_markdown, _fence_for, _validate_fetch_tool_args, and _run_fetch_tool; wired and registered fetch_pr_non_inline_comments in tool list; updated imports, exports, logging, and error messages.
Tests
tests/test_mcp_server_tools.py, tests/test_models.py, tests/test_pagination_limits.py
Added tests for non-inline models, args, markdown generation, tool registration, invalid output handling, numeric overrides, combined outputs; adjusted pagination test to assert exact cap at max_comments.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Server
    participant GitHub
    participant Renderer

    Client->>Server: fetch_pr_non_inline_comments(pr_url, output, per_page, max_pages, ...)
    Server->>Server: _validate_fetch_tool_args(args)
    Server->>Server: resolve PR URL → owner/repo/pull_number
    loop fetch pages (up to max_pages / max_comments)
        Server->>GitHub: GET /repos/{owner}/{repo}/issues/{pull_number}/comments?per_page=...
        GitHub-->>Server: comments batch + headers
        Server->>Server: convert items via NonInlineCommentModel/from_rest
    end
    alt output = "markdown"
        Server->>Renderer: generate_non_inline_markdown(comments)
        Renderer-->>Server: markdown text
    else output = "json"
        Server->>Server: serialize comments to JSON
    else output = "both"
        Server->>Renderer: generate_non_inline_markdown(comments)
        Renderer-->>Server: markdown text
        Server->>Server: serialize comments to JSON
    end
    Server-->>Client: Return rendered markdown and/or JSON
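The alt branches at the end of the diagram are a three-way dispatch on the output argument. A sketch of that dispatch (the renderer below is a stand-in, not the actual generate_non_inline_markdown):

```python
import json

def render_output(comments: list[dict], output: str) -> str:
    """Dispatch on the requested output mode: markdown, json, or both."""
    def to_markdown(cs: list[dict]) -> str:
        return "\n\n".join(f"**{c['user']}** commented:\n{c['body']}" for c in cs)

    if output == "markdown":
        return to_markdown(comments)
    if output == "json":
        return json.dumps(comments, indent=2)
    if output == "both":
        return to_markdown(comments) + "\n\n" + json.dumps(comments, indent=2)
    raise ValueError(f"invalid output mode: {output}")
```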

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I hop through diffs with a curious cheer,
I gather non-inline whispers far and near,
Pages curl and fences form in line,
I stitch JSON and Markdown, neat and fine,
A hoppity note: comments fetched on time 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 48.89%, which is below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title clearly and concisely describes the primary change: adding a new MCP tool for fetching non-inline PR comments, which aligns with the substantial changes across documentation, models, server implementation, and tests.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
  • 🧪 Generate unit tests (beta)
      • Create PR with unit tests
      • Post copyable unit tests in a comment
      • Commit unit tests in branch petems/fetch-pr-comments-tool

Comment @coderabbitai help to get the list of available commands and usage tips.

@petems petems marked this pull request as ready for review March 4, 2026 10:58
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
src/mcp_github_pr_review/models.py (1)

269-300: Consider extracting shared validator and fields to reduce duplication.

FetchPRNonInlineCommentsArgs is nearly identical to FetchPRReviewCommentsArgs (lines 228-267), including the duplicated _reject_bool_and_float validator. This violates DRY.

Consider extracting a base class or mixin:

♻️ Proposed refactor to reduce duplication
+class _BaseFetchCommentsArgs(BaseModel):
+    """Shared arguments for fetch comment tools."""
+
+    model_config = ConfigDict(
+        extra="forbid",
+        validate_assignment=True,
+    )
+
+    pr_url: str | None = None
+    output: Literal["markdown", "json", "both"] = "markdown"
+    per_page: int | None = Field(default=None, ge=1, le=100)
+    max_pages: int | None = Field(default=None, ge=1, le=200)
+    max_comments: int | None = Field(default=None, ge=100, le=100000)
+    max_retries: int | None = Field(default=None, ge=0, le=10)
+    owner: str | None = None
+    repo: str | None = None
+    branch: str | None = None
+    select_strategy: Literal["branch", "latest", "first", "error"] = "branch"
+
+    @field_validator(
+        "per_page", "max_pages", "max_comments", "max_retries", mode="before"
+    )
+    @classmethod
+    def _reject_bool_and_float(cls, v: Any) -> Any:
+        if v is None:
+            return v
+        if isinstance(v, bool):
+            raise ValueError("Invalid type: expected integer")
+        if isinstance(v, float):
+            raise ValueError("Invalid type: expected integer")
+        return v
+
+
+class FetchPRReviewCommentsArgs(_BaseFetchCommentsArgs):
+    """Arguments for the fetch_pr_review_comments MCP tool."""
+    pass
+
+
+class FetchPRNonInlineCommentsArgs(_BaseFetchCommentsArgs):
+    """Arguments for the fetch_pr_non_inline_comments MCP tool."""
+    pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/models.py` around lines 269 - 300, Extract the
duplicated numeric-field validator and shared fields into a new base Pydantic
model (e.g., class FetchPRCommentsBase or a mixin) that defines model_config,
the `@field_validator` method _reject_bool_and_float, and the common fields:
output, per_page, max_pages, max_comments, max_retries, owner, repo, branch,
select_strategy; then change FetchPRReviewCommentsArgs and
FetchPRNonInlineCommentsArgs to inherit from that base and remove their local
copies of those fields and the validator so only PR-specific fields remain in
each subclass.
src/mcp_github_pr_review/server.py (1)

949-1124: Significant code duplication with fetch_pr_comments.

fetch_pr_non_inline_comments_rest duplicates ~150 lines from fetch_pr_comments (lines 734-946). The only differences are:

  1. API endpoint (/issues/{pull_number}/comments vs /pulls/{pull_number}/comments)
  2. Model conversion (NonInlineCommentModel.from_rest vs ReviewCommentModel.from_rest)
  3. Log messages

Consider extracting shared pagination/retry logic into a generic helper.

♻️ Refactor approach to reduce duplication
async def _fetch_pr_comments_rest_generic(
    owner: str,
    repo: str,
    pull_number: int,
    *,
    endpoint_template: str,  # e.g., "/issues/{pull}/comments" or "/pulls/{pull}/comments"
    model_from_rest: Callable[[dict], Any],
    context_name: str,
    host: str = "github.com",
    per_page: int | None = None,
    max_pages: int | None = None,
    max_comments: int | None = None,
    max_retries: int | None = None,
) -> list[CommentResult] | None:
    # Shared implementation with parameterized endpoint and model conversion
    ...

Then both fetch_pr_comments and fetch_pr_non_inline_comments_rest become thin wrappers.
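Stripped of retries and pagination, the wrapper pattern the comment proposes looks like this (synchronous and heavily simplified; the model converters and the injected fetch_json callable are illustrative, not the repository's real signatures):

```python
from typing import Any, Callable

def fetch_generic(
    owner: str,
    repo: str,
    pull_number: int,
    *,
    endpoint_template: str,
    model_from_rest: Callable[[dict], Any],
    fetch_json: Callable[[str], list[dict]],
) -> list[Any]:
    """Shared fetch path: build the URL, fetch, convert each item."""
    path = endpoint_template.format(pull=pull_number)
    url = f"https://api.github.com/repos/{owner}/{repo}{path}"
    return [model_from_rest(item) for item in fetch_json(url)]

def fetch_inline(owner: str, repo: str, n: int, fetch_json) -> list[Any]:
    """Thin wrapper: inline review comments from the Pulls API."""
    return fetch_generic(owner, repo, n,
                         endpoint_template="/pulls/{pull}/comments",
                         model_from_rest=lambda d: ("inline", d["id"]),
                         fetch_json=fetch_json)

def fetch_non_inline(owner: str, repo: str, n: int, fetch_json) -> list[Any]:
    """Thin wrapper: conversation comments from the Issues API."""
    return fetch_generic(owner, repo, n,
                         endpoint_template="/issues/{pull}/comments",
                         model_from_rest=lambda d: ("non_inline", d["id"]),
                         fetch_json=fetch_json)
```

Each wrapper only chooses the endpoint template, the model converter, and (in the real code) the logging context; everything else lives once in the generic helper.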

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/server.py` around lines 949 - 1124, Both
fetch_pr_non_inline_comments_rest and fetch_pr_comments duplicate the same
pagination/retry/logging logic; extract that shared logic into a parameterized
helper (e.g. _fetch_pr_comments_rest_generic) that accepts endpoint_template
("/issues/{pull}/comments" or "/pulls/{pull}/comments"), model_from_rest
(NonInlineCommentModel.from_rest or ReviewCommentModel.from_rest), and
context_name for logs, while reusing RateLimitHandler, _retry_http_request,
timeout setup and safety limits; then make fetch_pr_non_inline_comments_rest and
fetch_pr_comments thin wrappers that build the base URL, pass the proper
model_from_rest and endpoint_template to the new helper and return its result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/mcp_github_pr_review/models.py`:
- Around line 269-300: Extract the duplicated numeric-field validator and shared
fields into a new base Pydantic model (e.g., class FetchPRCommentsBase or a
mixin) that defines model_config, the `@field_validator` method
_reject_bool_and_float, and the common fields: output, per_page, max_pages,
max_comments, max_retries, owner, repo, branch, select_strategy; then change
FetchPRReviewCommentsArgs and FetchPRNonInlineCommentsArgs to inherit from that
base and remove their local copies of those fields and the validator so only
PR-specific fields remain in each subclass.

In `@src/mcp_github_pr_review/server.py`:
- Around line 949-1124: Both fetch_pr_non_inline_comments_rest and
fetch_pr_comments duplicate the same pagination/retry/logging logic; extract
that shared logic into a parameterized helper (e.g.
_fetch_pr_comments_rest_generic) that accepts endpoint_template
("/issues/{pull}/comments" or "/pulls/{pull}/comments"), model_from_rest
(NonInlineCommentModel.from_rest or ReviewCommentModel.from_rest), and
context_name for logs, while reusing RateLimitHandler, _retry_http_request,
timeout setup and safety limits; then make fetch_pr_non_inline_comments_rest and
fetch_pr_comments thin wrappers that build the base URL, pass the proper
model_from_rest and endpoint_template to the new helper and return its result.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0b6d7445-15d2-4250-a06e-7ba022c3fc7a

📥 Commits

Reviewing files that changed from the base of the PR and between 524f5bf and da5cd9a.

📒 Files selected for processing (6)
  • README.md
  • docs/reference/tools.md
  • src/mcp_github_pr_review/models.py
  • src/mcp_github_pr_review/server.py
  • tests/test_mcp_server_tools.py
  • tests/test_models.py

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/mcp_github_pr_review/server.py (1)

873-923: ⚠️ Potential issue | 🟠 Major

Do not fail the fetch after a successful retry from 5xx.

A transient 5xx sets had_server_error = True, and that flag is never cleared. If a retry succeeds, the function still returns None, dropping valid results.

🔧 Proposed fix
@@
-            used_token_fallback = False
-            had_server_error = False
+            used_token_fallback = False
             rate_limit_handler = RateLimitHandler(context_name)
@@
                 async def handle_rest_status(
                     resp: httpx.Response, _attempt: int
                 ) -> str | None:
-                    nonlocal used_token_fallback, had_server_error
-
-                    if 500 <= resp.status_code < 600:
-                        had_server_error = True
+                    nonlocal used_token_fallback
@@
                     return await rate_limit_handler.handle_rate_limit(resp)
@@
-                if had_server_error:
-                    return None
-
                 page_comments = response.json()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/server.py` around lines 873 - 923, The code sets
had_server_error=True inside handle_rest_status on any 5xx, but never clears it,
so a later successful retry is still treated as a failure; after the successful
call to _retry_http_request (the line assigning response from
_retry_http_request in the calling scope), reset had_server_error=False (or only
set had_server_error when a request finally fails) so that a successful response
from make_rest_request/_retry_http_request isn't discarded; update references in
handle_rest_status, the had_server_error variable, and the block immediately
after the await _retry_http_request to implement this fix.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/mcp_github_pr_review/server.py`:
- Around line 930-942: The loop currently appends an entire page (page_comments)
via comment_from_rest into all_comments and only checks max_comments_v after the
page, allowing the result to exceed the cap; update the ingestion to enforce
max_comments_v per-comment by checking len(all_comments) before each append
inside the for comment in page_comments loop (use a break out of the loop when
the cap is reached and stop further paging), referencing the variables
page_comments, all_comments, page_count, max_pages_v, and max_comments_v and
preserving the existing debug emit_debug_page_count logic.

---

Outside diff comments:
In `@src/mcp_github_pr_review/server.py`:
- Around line 873-923: The code sets had_server_error=True inside
handle_rest_status on any 5xx, but never clears it, so a later successful retry
is still treated as a failure; after the successful call to _retry_http_request
(the line assigning response from _retry_http_request in the calling scope),
reset had_server_error=False (or only set had_server_error when a request
finally fails) so that a successful response from
make_rest_request/_retry_http_request isn't discarded; update references in
handle_rest_status, the had_server_error variable, and the block immediately
after the await _retry_http_request to implement this fix.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ba0b2d7a-d89d-407f-b72e-7d8d0e701f70

📥 Commits

Reviewing files that changed from the base of the PR and between da5cd9a and bce10ea.

📒 Files selected for processing (2)
  • src/mcp_github_pr_review/models.py
  • src/mcp_github_pr_review/server.py

Comment thread src/mcp_github_pr_review/server.py
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
src/mcp_github_pr_review/server.py (1)

936-942: Use structured logging instead of debug print in pagination flow.

This debug output should go through logger.debug so tests/ops can capture it consistently.

♻️ Proposed refactor
-                if emit_debug_page_count:
-                    print(
-                        "DEBUG: page_count="
-                        f"{page_count}, MAX_PAGES={max_pages_v}, "
-                        f"comments_len={len(all_comments)}",
-                        file=sys.stderr,
-                    )
+                if emit_debug_page_count:
+                    logger.debug(
+                        "Pagination progress",
+                        extra={
+                            "page_count": page_count,
+                            "max_pages": max_pages_v,
+                            "comments_len": len(all_comments),
+                        },
+                    )
As per coding guidelines: Use `caplog` for testing logging behavior; use proper logging instead of print statements
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/server.py` around lines 936 - 942, Replace the debug
print in the pagination branch that checks emit_debug_page_count with a
structured logger.debug call: instead of printing the concatenated string using
print(..., file=sys.stderr) refer to the existing logger and call logger.debug
with the same contextual fields (page_count, max_pages_v,
comments_len=len(all_comments)) so the message is captured by test/ops logging
(use logger.debug("page_count=%s MAX_PAGES=%s comments_len=%s", page_count,
max_pages_v, len(all_comments)) or equivalent structured formatting). Ensure you
update the code path where emit_debug_page_count is used and remove the
print/sys.stderr usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/mcp_github_pr_review/server.py`:
- Around line 865-866: The current logic marks had_server_error = True on any
intermediate 5xx and then later returns None even if a subsequent retry
succeeded; change the flow so that transient 5xx do not mark the entire
operation as terminal: only set had_server_error (or return None) after
exhausting retries and receiving a final 5xx/failure. Concretely, in the
request/retry loop around RateLimitHandler and the had_server_error flag, stop
setting had_server_error on intermediate 5xx responses — instead record the
occurrence (e.g., last_server_error = response) but continue retries, and after
the loop decide to set had_server_error/return None only if no successful
response was ever received. Update any branches that currently return None (the
block around the final return) to consult whether a success occurred before
returning None.

In `@tests/test_pagination_limits.py`:
- Around line 80-84: Rewrite the two assertions to Ruff's preferred assert style
by putting the assertion expression and the message separated by a comma, and
wrap the message in parentheses for a multiline-friendly form; specifically
replace the current block with assertions like assert len(comments) == 100,
("Should cap returned comments at max_comments") and assert
len(mock_http_client.get_calls) == 2, ("Should not fetch a third page once limit
reached"), referencing the variables comments and mock_http_client.get_calls (no
extra parentheses around the expression itself).

---

Nitpick comments:
In `@src/mcp_github_pr_review/server.py`:
- Around line 936-942: Replace the debug print in the pagination branch that
checks emit_debug_page_count with a structured logger.debug call: instead of
printing the concatenated string using print(..., file=sys.stderr) refer to the
existing logger and call logger.debug with the same contextual fields
(page_count, max_pages_v, comments_len=len(all_comments)) so the message is
captured by test/ops logging (use logger.debug("page_count=%s MAX_PAGES=%s
comments_len=%s", page_count, max_pages_v, len(all_comments)) or equivalent
structured formatting). Ensure you update the code path where
emit_debug_page_count is used and remove the print/sys.stderr usage.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7d6b0bed-96fc-4a64-9233-fcd319849e21

📥 Commits

Reviewing files that changed from the base of the PR and between bce10ea and aec9183.

📒 Files selected for processing (2)
  • src/mcp_github_pr_review/server.py
  • tests/test_pagination_limits.py

Comment on lines 865 to 866
 had_server_error = False
-rate_limit_handler = RateLimitHandler("fetch_pr_comments")
+rate_limit_handler = RateLimitHandler(context_name)
Contributor


⚠️ Potential issue | 🔴 Critical

Transient 5xx retries are incorrectly treated as terminal failures.

Line 878 marks had_server_error = True on any intermediate 5xx. Even if a later retry succeeds, Line 921 still returns None, dropping valid results.

🔧 Proposed fix
-            had_server_error = False
             rate_limit_handler = RateLimitHandler(context_name)
             while url:
@@
-                    if 500 <= resp.status_code < 600:
-                        had_server_error = True
-
                     if (
                         resp.status_code == 401
                         and token
@@
-                if had_server_error:
-                    return None
-
                 page_comments = response.json()

Also applies to: 878-880, 921-923

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/server.py` around lines 865 - 866, The current logic
marks had_server_error = True on any intermediate 5xx and then later returns
None even if a subsequent retry succeeded; change the flow so that transient 5xx
do not mark the entire operation as terminal: only set had_server_error (or
return None) after exhausting retries and receiving a final 5xx/failure.
Concretely, in the request/retry loop around RateLimitHandler and the
had_server_error flag, stop setting had_server_error on intermediate 5xx
responses — instead record the occurrence (e.g., last_server_error = response)
but continue retries, and after the loop decide to set had_server_error/return
None only if no successful response was ever received. Update any branches that
currently return None (the block around the final return) to consult whether a
success occurred before returning None.

Comment on lines +80 to +84
    # Page 1: 60, Page 2: only 40 ingested -> total 100, then stop
    assert len(comments) == 100, "Should cap returned comments at max_comments"
    assert (
        len(mock_http_client.get_calls) == 2
    ), "Should not fetch a third page once limit reached"
Contributor


⚠️ Potential issue | 🟡 Minor

Ruff format check is failing on this assertion block.

CI already reports formatting failure; this block should be rewritten in Ruff’s preferred multiline assert style.

💡 Proposed formatting fix
-    assert (
-        len(mock_http_client.get_calls) == 2
-    ), "Should not fetch a third page once limit reached"
+    assert len(mock_http_client.get_calls) == 2, (
+        "Should not fetch a third page once limit reached"
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_pagination_limits.py` around lines 80 - 84, Rewrite the two
assertions to Ruff's preferred assert style by putting the assertion expression
and the message separated by a comma, and wrap the message in parentheses for a
multiline-friendly form; specifically replace the current block with assertions
like assert len(comments) == 100, ("Should cap returned comments at
max_comments") and assert len(mock_http_client.get_calls) == 2, ("Should not
fetch a third page once limit reached"), referencing the variables comments and
mock_http_client.get_calls (no extra parentheses around the expression itself).

Preserve both the PR's fetch_pr_non_inline_comments_rest function and
master's resolve_pr_review_thread function. Merge docstring references
and keep both new tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/mcp_github_pr_review/server.py (1)

873-874: ⚠️ Potential issue | 🟠 Major

Transient 5xx retries still treated as terminal failures.

The had_server_error = True flag is set on any intermediate 5xx response (line 886-887), but the retry helper _retry_http_request may succeed on a subsequent attempt. However, line 929-930 checks if had_server_error: return None after receiving a successful response, discarding valid results.

This was flagged in a previous review and appears unresolved. Consider only setting the flag (or returning None) after exhausting retries without success:

🔧 Proposed fix
                 async def handle_rest_status(
                     resp: httpx.Response, _attempt: int
                 ) -> str | None:
-                    nonlocal used_token_fallback, had_server_error
-
-                    if 500 <= resp.status_code < 600:
-                        had_server_error = True
+                    nonlocal used_token_fallback

                     if (
                         resp.status_code == 401

And remove the had_server_error check after receiving a successful response:

-                if had_server_error:
-                    return None
-
                 page_comments = response.json()

The _retry_http_request helper already handles 5xx retries with exponential backoff and will raise HTTPStatusError if retries are exhausted, which is caught on lines 924-927.

Also applies to: 886-887, 929-930
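The intended semantics — a transient 5xx followed by a successful retry must yield the success — can be sketched with a toy retry loop (the function name and the send callable are hypothetical stand-ins for _retry_http_request and the HTTP call):

```python
import time

def retry_request(send, max_retries: int = 3, backoff: float = 0.5):
    """Retry on 5xx; only fail once retries are exhausted, so an
    intermediate server error never discards a later success."""
    last_status = None
    for attempt in range(max_retries + 1):
        status, body = send(attempt)
        if 500 <= status < 600:
            last_status = status
            if attempt < max_retries:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
                continue
            raise RuntimeError(
                f"server error {last_status} after {max_retries} retries"
            )
        return status, body  # any non-5xx response wins, even after a 5xx
```

Failure is signalled only by the raise in the exhausted-retries branch; no flag survives past a successful response, which is exactly the property the had_server_error approach breaks.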

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp_github_pr_review/server.py` around lines 873 - 874, The current logic
sets had_server_error = True on any intermediate 5xx and then discards a later
successful response; instead only mark/return a terminal server error after
retries are exhausted. Update the code around RateLimitHandler and
_retry_http_request so you do NOT set had_server_error when receiving an
intermediate 5xx inside the retry helper; remove the post-success check "if
had_server_error: return None" so a successful response from _retry_http_request
is honored, and set had_server_error (or return None) only in the
except/HTTPStatusError branch where _retry_http_request has failed after all
retries.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/mcp_github_pr_review/server.py`:
- Around line 873-874: The current logic sets had_server_error = True on any
intermediate 5xx and then discards a later successful response; instead only
mark/return a terminal server error after retries are exhausted. Update the code
around RateLimitHandler and _retry_http_request so you do NOT set
had_server_error when receiving an intermediate 5xx inside the retry helper;
remove the post-success check "if had_server_error: return None" so a successful
response from _retry_http_request is honored, and set had_server_error (or
return None) only in the except/HTTPStatusError branch where _retry_http_request
has failed after all retries.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7cd1c1cc-6595-442f-96e3-cc3d0798271d

📥 Commits

Reviewing files that changed from the base of the PR and between aec9183 and 5bf9e57.

📒 Files selected for processing (6)
  • README.md
  • docs/reference/tools.md
  • src/mcp_github_pr_review/models.py
  • src/mcp_github_pr_review/server.py
  • tests/test_mcp_server_tools.py
  • tests/test_models.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • README.md

