[Feat][Router] Add per-model request latency histogram #940

Merged

ruizhang0101 merged 2 commits into vllm-project:main from banlor:feat/per-model-latency-histogram on May 6, 2026

Conversation

@banlor (Contributor) commented May 6, 2026

Follow-up to #813. Per-model latency histogram observed at the router, with a status label so errors don't pollute the success-path tail.

Named vllm:request_latency_seconds (not e2e_*) on purpose - the engine already exposes vllm:e2e_request_latency_seconds and clobbering it would break existing dashboards. The router-side version also picks up the router-to-engine hop, so the two are complementary.

Out of scope, still on #699: per-model request count + resource indicators. Will follow up.
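
For illustration, a minimal sketch of how such a metric could be defined with prometheus_client; the metric name and the server/model/status labels come from this PR, while the variable name and bucket boundaries are assumptions, not the actual code:

```python
# Sketch only: name and labels per the PR description; buckets are an assumption.
from prometheus_client import Histogram

ROUTER_REQUEST_LATENCY = Histogram(
    "vllm:request_latency_seconds",
    "Request latency observed at the router, in seconds.",
    labelnames=["server", "model", "status"],
    buckets=[0.1, 0.5, 1.0, 2.5, 5.0, 10.0, 30.0, 60.0, 120.0],
)
```

Keeping status as a label means success-path percentiles can be queried separately from error-path ones, which is the point of the design above.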


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when doing git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure it meets the following criteria. This helps us maintain code quality and keeps the review process efficient.

PR Title and Classification

Please prefix the PR title appropriately to indicate the type of change, using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces a new Prometheus histogram metric, request_latency_seconds, to track router-level request latency across different servers and models. The changes include the metric definition, its integration into the request processing flow for both successful and failed requests, and corresponding unit tests. However, a significant issue was identified where latency could be double-counted if an exception occurs during post-processing after a 'success' observation has already been recorded. Furthermore, the current implementation labels HTTP error responses (4xx/5xx) as 'success' if no Python exception is raised, which contradicts the goal of the metric. It is recommended to refactor the observation logic to ensure it executes exactly once per request and accurately reflects the response status.

Review thread on src/vllm_router/services/request_service/request.py (outdated)
vllm:request_latency_seconds with server/model/status labels. observed
once per request in finally, so http 4xx/5xx and exceptions both land
as status=error.

Refs vllm-project#699.

Signed-off-by: Mikhail Basov <Michael.S.Sinclair@protonmail.com>
@banlor force-pushed the feat/per-model-latency-histogram branch from af8d476 to f32e9c9 on May 6, 2026 at 11:29
@banlor (Contributor, Author) commented May 6, 2026

Fixed in the latest push: observation moved to a single point in finally, and HTTP 4xx/5xx now tag as status=error.
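
Roughly, that pattern could look like the sketch below, reusing the ROUTER_REQUEST_LATENCY histogram from the earlier sketch. The handler names here are hypothetical; only the single finally-block observation and the status labeling come from the PR discussion:

```python
import time

async def proxy_request(server: str, model: str, send_upstream):
    # Hypothetical wrapper illustrating one observation point per request.
    start = time.monotonic()
    status = "error"
    try:
        response = await send_upstream()
        # HTTP 4xx/5xx are tagged as errors even though no exception is raised.
        if response.status_code < 400:
            status = "success"
        return response
    finally:
        # Exactly one observation per request: success, HTTP error, and
        # exception paths all flow through here, so nothing is double-counted.
        ROUTER_REQUEST_LATENCY.labels(
            server=server, model=model, status=status
        ).observe(time.monotonic() - start)
```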

@ruizhang0101 (Collaborator) left a comment

LGTM

@ruizhang0101 merged commit 67307bd into vllm-project:main on May 6, 2026. 15 checks passed.
