🛡️ Sentinel: [CRITICAL] Fix SSRF vulnerability in JobParser#227

Open
anchapin wants to merge 1 commit into main from sentinel/fix-ssrf-job-parser-14763629633683345882

Conversation

@anchapin
Owner

@anchapin anchapin commented Apr 3, 2026

🚨 Severity: CRITICAL
💡 Vulnerability: The JobParser's parse_from_url method previously took user-supplied URLs and fetched them directly with requests.get(). This is a classic Server-Side Request Forgery (SSRF) vulnerability. It allowed arbitrary requests to be made from the server, potentially giving attackers access to internal networks, cloud metadata APIs, or other internal services.
🎯 Impact: An attacker could force the server to execute arbitrary GET requests.
🔧 Fix: Implemented a new _fetch_url_safe method that performs DNS resolution via socket.getaddrinfo, rejects any resolved IP belonging to internal/restricted ranges (is_private, is_loopback, is_link_local, is_reserved, is_multicast, is_unspecified), and follows redirects manually (allow_redirects=False) so that each redirect target is validated with the same checks.
✅ Verification: Ran pytest tests/test_job_parser_integration.py which passes all checks correctly without regressing on existing valid URL fetches.


PR created automatically by Jules for task 14763629633683345882 started by @anchapin

Summary by Sourcery

Harden JobParser URL fetching to prevent SSRF when parsing jobs from remote URLs.

Bug Fixes:

  • Prevent Server-Side Request Forgery by validating URL schemes, DNS-resolved IPs, and redirect targets before performing HTTP requests in JobParser.parse_from_url.

Enhancements:

  • Introduce a dedicated SSRFError exception for clearer error handling of unsafe or invalid URLs.
  • Improve error reporting for URL fetch failures by unifying network and SSRF-related errors into a single RuntimeError message.
  • Add an explicit dependency check for the requests library before attempting network operations in JobParser.
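Because both network and SSRF failures are unified into RuntimeError, caller-side handling stays uniform. A minimal sketch of a hypothetical caller (the `try_parse` helper is illustrative, not part of the PR):

```python
def try_parse(parser, url):
    """Return parsed job details, or None if the fetch failed or was blocked.

    Assumes parse_from_url wraps both network errors and SSRF rejections
    in RuntimeError, as described above.
    """
    try:
        return parser.parse_from_url(url)
    except RuntimeError as e:
        print(f"Skipping {url}: {e}")
        return None
```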

- Implemented `_fetch_url_safe` to validate IP addresses before fetching URLs
- Blocked private, loopback, link-local, reserved, and unspecified IPs
- Safely followed redirects to prevent bypasses
- Handled `requests` as an optional dependency securely

Co-authored-by: anchapin <6326294+anchapin@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@sourcery-ai

sourcery-ai bot commented Apr 3, 2026

Reviewer's Guide

Refactors JobParser URL fetching to route all network requests through a new SSRF-hardened helper that validates schemes, resolves and filters IPs, and manually validates redirects, while simplifying error handling and making the requests dependency check explicit.

Updated class diagram for JobParser SSRF hardening

classDiagram
    class SSRFError{
        <<exception>>
    }
    SSRFError --|> ValueError

    class JobParser{
        +parse_from_url(url str) JobDetails
        -_fetch_url_safe(url str) Any
        -_parse_html(html str) JobDetails
    }

File-Level Changes

Change Details Files
Route JobParser.parse_from_url network fetching through a new SSRF-safe helper and tighten error handling.
  • Replaced direct calls to requests.get() in parse_from_url with a call to the new _fetch_url_safe helper.
  • Moved the requests dependency check to occur before the try block and changed ImportError handling into a NotImplementedError when requests is missing.
  • Broadened the exception handler to catch both requests.RequestException and SSRFError and rewrap them as RuntimeError for callers.
cli/integrations/job_parser.py
Introduce explicit SSRF protection and redirect validation for URL fetching.
  • Added a custom SSRFError exception used to distinguish SSRF-related failures.
  • Implemented _fetch_url_safe which validates URL scheme and hostname, performs DNS resolution via socket.getaddrinfo, rejects private/loopback/link-local/reserved/multicast/unspecified IPs, and sets a User-Agent header.
  • Implemented manual redirect following (up to a maximum number of redirects) with allow_redirects=False, validating each redirect URL and its resolved IPs using the same SSRF checks, and raising SSRFError on invalid targets or excessive redirects.
cli/integrations/job_parser.py
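The manual redirect loop described above can be modeled with the fetcher and validator injected as callables, which keeps the SSRF policy applied on every hop. This is an illustrative sketch under assumed names, not the code in cli/integrations/job_parser.py:

```python
from urllib.parse import urljoin

MAX_REDIRECTS = 5  # illustrative limit


def follow_redirects_safely(url, get, validate, max_redirects=MAX_REDIRECTS):
    """Follow redirects by hand, validating each hop before fetching.

    `get(url)` returns (status_code, headers, body) and must be called with
    redirects disabled (e.g. requests.get(..., allow_redirects=False));
    `validate(url)` raises on unsafe targets.
    """
    for _ in range(max_redirects + 1):
        validate(url)                        # SSRF check on every hop
        status, headers, body = get(url)
        location = headers.get("Location")
        # Only genuine 3xx redirects with a Location header are followed;
        # 304 Not Modified is not a redirect.
        if 300 <= status < 400 and status != 304 and location:
            url = urljoin(url, location)     # normalize relative Location
            continue
        return url, status, body
    raise ValueError(f"too many redirects (>{max_redirects})")
```

Injecting `get` also makes the loop testable with a stub fetcher, without touching the network.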

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • The IP validation logic for the initial URL and for redirect targets is nearly identical; consider extracting a shared helper (e.g. _validate_resolved_ips(hostname, context_label)) to reduce duplication and keep the SSRF policy consistent in one place.
  • Redirect handling currently relies on response.is_redirect and a Location header; you may want to explicitly check 3xx status codes and normalize/validate the redirect URL once to avoid edge cases (e.g., 304 or malformed Location headers) and make the redirect logic easier to follow.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The IP validation logic for the initial URL and for redirect targets is nearly identical; consider extracting a shared helper (e.g. `_validate_resolved_ips(hostname, context_label)`) to reduce duplication and keep the SSRF policy consistent in one place.
- Redirect handling currently relies on `response.is_redirect` and a `Location` header; you may want to explicitly check `3xx` status codes and normalize/validate the redirect URL once to avoid edge cases (e.g., 304 or malformed `Location` headers) and make the redirect logic easier to follow.

## Individual Comments

### Comment 1
<location path="cli/integrations/job_parser.py" line_range="249" />
<code_context>
-                "URL fetching requires 'requests' library. Install with: pip install requests"
-            )
-        except requests.RequestException as e:
+        except (requests.RequestException, SSRFError) as e:
             raise RuntimeError(f"Failed to fetch URL: {e}")

</code_context>
<issue_to_address>
**issue (bug_risk):** Referencing requests.RequestException when requests can be None may cause import-time failures.

Since `requests` may be set to `None` on import failure, `requests.RequestException` in the `except` tuple is evaluated at function definition time and will raise `AttributeError` before the runtime `requests is None` check can run. Consider importing `RequestException` into a separate name (with a fallback) or restructuring the `except` so it doesn’t dereference `requests` when it may be `None`.
</issue_to_address>
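One way to address this, sketched under the assumption that `requests` stays an optional dependency, is to bind the exception class to a real name at import time with a fallback, so the `except` tuple never dereferences a `None` module:

```python
# Bind RequestException even when requests is absent, so that
# `except (RequestException, SSRFError)` is always valid to evaluate.
try:
    import requests
    from requests import RequestException
except ImportError:
    requests = None

    class RequestException(Exception):
        """Fallback so except clauses stay valid without requests installed."""


class SSRFError(ValueError):
    """Raised when a URL targets a restricted network location."""


def fetch(url):
    # Explicit dependency check before any network operation.
    if requests is None:
        raise NotImplementedError(
            "URL fetching requires 'requests' library. "
            "Install with: pip install requests"
        )
    try:
        return requests.get(url, timeout=10)
    except (RequestException, SSRFError) as e:
        raise RuntimeError(f"Failed to fetch URL: {e}")
```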

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

