
[Feat][Router] Add Priority routing #852

Open · ikaadil wants to merge 9 commits into vllm-project:main from ikaadil:feat/priority-routing

Conversation

@ikaadil
Contributor

@ikaadil ikaadil commented Feb 26, 2026

Adds a new priority routing option that routes requests to the backend with the lowest QPS (least loaded).

Addresses the Q3 roadmap #640

Changes

  • PriorityRouter: New router that uses existing _qps_routing() to select the least-loaded endpoint.
  • CLI / config: --routing-logic priority and priority in dynamic config and Helm.
  • Docs: Router README, Helm README, and semantic-router tutorial updated.
  • Tests: Unit tests for PriorityRouter (lowest QPS, no stats, equal QPS).
  • Edge case: PriorityRouter raises ValueError when given no endpoints.

Usage

--routing-logic priority
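
The selection step this option performs can be sketched as follows. This is a minimal illustration, not the PR's actual code: `EndpointInfo`, `RequestStats`, and `route_least_loaded` are hypothetical names standing in for the real vllm_router types and the `_qps_routing()` helper.

```python
from dataclasses import dataclass


# Illustrative stand-ins for the router's endpoint and stats types.
@dataclass
class EndpointInfo:
    url: str


@dataclass
class RequestStats:
    qps: float


def route_least_loaded(endpoints, request_stats):
    """Pick the endpoint with the lowest QPS; endpoints with no stats count as 0 QPS."""
    if not endpoints:
        raise ValueError("No available endpoints for priority routing")
    return min(
        endpoints,
        key=lambda e: request_stats.get(e.url, RequestStats(qps=0.0)).qps,
    ).url


endpoints = [EndpointInfo("http://a"), EndpointInfo("http://b")]
stats = {"http://a": RequestStats(qps=1.0), "http://b": RequestStats(qps=5.0)}
print(route_least_loaded(endpoints, stats))  # http://a
```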

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commits by using -s when running git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please try to classify PRs for easy understanding of the type of changes. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

- Implemented PriorityRouter to route requests to the endpoint with the lowest QPS (least loaded).
- Updated routing logic options to include 'priority', 'kvaware', 'prefixaware', and 'disaggregated_prefill'.
- Enhanced README files to reflect new routing options and their descriptions.
- Added unit tests for the PriorityRouter to ensure correct functionality.

This change enhances the routing capabilities of the system, allowing for more efficient load balancing based on endpoint performance.

Signed-off-by: Ifta Khairul Alam Adil <ikaadil007@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @ikaadil, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the router's load balancing capabilities by implementing a new 'priority' routing logic. This logic intelligently distributes requests to the least-loaded backend servers based on their current Queries Per Second (QPS), aiming to optimize resource utilization and improve overall system responsiveness. The change includes the core implementation of the new router, its integration into the existing routing framework, and comprehensive updates to documentation and configuration options to make it accessible and understandable.

Highlights

  • New Routing Logic: Priority Routing: A new PriorityRouter has been introduced, which routes incoming requests to the backend endpoint with the lowest Queries Per Second (QPS), effectively directing traffic to the least loaded server. This enhances load balancing by prioritizing available capacity.
  • Configuration and Documentation Updates: The new 'priority' routing logic has been integrated into the router's configuration options, command-line arguments, and dynamic configuration. Documentation across the Helm chart README, router README, and a tutorial has been updated to reflect this new capability.
  • Unit Tests for Priority Router: Dedicated unit tests have been added for the PriorityRouter to ensure its correct functionality, covering scenarios like routing to the lowest QPS endpoint, handling endpoints with no existing stats, and consistent behavior when QPS values are equal.
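
The three test scenarios above can be sketched against a minimal stand-in for the selection logic. This is an illustration only, not the PR's actual test file; `pick` and its zero-QPS defaulting are assumptions rather than the real PriorityRouter API.

```python
def pick(endpoints, qps):
    """Least-loaded pick: endpoints absent from the qps map count as 0.0."""
    if not endpoints:
        raise ValueError("no available endpoints")
    return min(endpoints, key=lambda e: qps.get(e, 0.0))


# Scenario 1: the lowest-QPS endpoint wins.
assert pick(["http://a", "http://b"], {"http://a": 1.0, "http://b": 5.0}) == "http://a"

# Scenario 2: an endpoint with no existing stats defaults to 0 QPS and is preferred.
assert pick(["http://a", "http://b"], {"http://a": 2.0}) == "http://b"

# Scenario 3: equal QPS resolves deterministically to the first endpoint (min keeps the first of tied keys).
assert pick(["http://a", "http://b"], {"http://a": 3.0, "http://b": 3.0}) == "http://a"
```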


Changelog
  • helm/README.md
    • Expanded the list of supported routing logic options for routerSpec.routingLogic.
  • src/tests/test_priority_router.py
    • Introduced unit tests for the PriorityRouter to validate its load-balancing behavior.
  • src/vllm_router/README.md
    • Updated the list of supported routing algorithms to include priority routing.
    • Extended the --routing-logic command-line option with new routing types.
    • Modified the dynamic configuration routing_logic field to support additional routing strategies.
  • src/vllm_router/parsers/parser.py
    • Added 'priority' as a valid choice for the --routing-logic command-line argument.
  • src/vllm_router/routers/routing_logic.py
    • Defined the PriorityRouter class for least-loaded routing.
    • Extended the RoutingLogic enum to include the new 'priority' option.
    • Integrated PriorityRouter into the router's initialization, retrieval, and cleanup mechanisms.
  • tutorials/24-semantic-router-integration.md
    • Updated the tutorial to reflect the addition of priority (least-loaded) routing.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a new priority-based routing logic that directs requests to the least loaded endpoint based on its QPS. The changes are well-structured, including the core routing logic, corresponding tests, and updates to documentation. I've identified a potential issue where an empty list of endpoints could cause a runtime error, and I've also suggested a couple of minor improvements for the new test file to enhance its robustness and readability.

Comment thread src/vllm_router/routers/routing_logic.py Outdated
Comment thread src/tests/test_priority_router.py Outdated
Comment thread src/tests/test_priority_router.py Outdated
ikaadil and others added 3 commits February 26, 2026 13:00
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Ifta khairul Alam Adil <25082512+ikaadil@users.noreply.github.com>
Updated the PriorityRouter to raise a ValueError when no available endpoints are found after attempting to route based on QPS. This change improves error handling and ensures that routing failures are explicitly communicated.

Signed-off-by: Ifta Khairul Alam Adil <ikaadil007@gmail.com>
Modified the test case to assert that the routed URL is specifically http://a instead of checking for multiple valid URLs. This change ensures that the routing logic is tested for a precise outcome, improving the accuracy of the test.

Signed-off-by: Ifta Khairul Alam Adil <ikaadil007@gmail.com>
Collaborator

@ruizhang0101 ruizhang0101 left a comment


Hi, thanks for the PR. I think QPS routing is semantically different from priority routing. For priority routing, could you elaborate on the use case, given that vLLM itself has a priority mechanism?

@ikaadil
Contributor Author

ikaadil commented Feb 27, 2026

Hi, thanks for the PR. I think QPS routing is semantically different from priority routing. For priority routing, could you elaborate on the use case, given that vLLM itself has a priority mechanism?

Thanks for the review. Here’s a short clarification on semantics and use case.

1. QPS vs “priority” (semantics)
I agree that what this implements is QPS-based / least-loaded routing (choose the backend with lowest QPS), not “priority” in the usual sense of (a) request priority (high vs low) or (b) endpoint tier (prefer A then overflow to B). So the current name priority is a bit overloaded. I’m happy to rename this option to something like least_loaded or load_based so it’s clear we mean “route to the least-loaded backend” and avoid confusion with vLLM’s request-level priority. If you have a preferred name, I can align with that.

2. Use case vs vLLM’s priority

  • vLLM’s priority lives inside a single backend: it decides which request runs next (FCFS, or priority scheduling when enabled), preemption, batching. It does not choose which backend gets the request.
  • This option lives at the router: when there are multiple vLLM backends (e.g. multiple pods), it chooses which backend gets the request. Right now it does that by picking the backend with lowest QPS (least loaded), so we’re doing load-aware placement across instances.

So they’re complementary: the router decides which instance gets the request (this PR: least-loaded by QPS); vLLM then decides order and priority of requests on that instance. The use case is multi-backend deployments where you want to avoid sending more traffic to already-busy instances (e.g. reduce queue depth and latency) instead of blindly round-robinning.

I can add this distinction and use case to the PR description and docs, and rename the option to least_loaded (or your preferred name) if that works for you.
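
The two complementary layers described above can be sketched as a toy simulation. All names here are illustrative; no real vLLM or vllm_router APIs are used, and vLLM's actual priority scheduling is far richer than this heap.

```python
import heapq


def pick_backend(qps):
    """Router layer: least-loaded placement across instances (lowest QPS wins)."""
    return min(qps, key=qps.get)


class ToyScheduler:
    """Instance layer: a toy stand-in for per-request priority ordering on one backend."""

    def __init__(self):
        self._heap = []
        self._seq = 0

    def add(self, request_id, priority=0):
        # Lower priority value runs first; the sequence number breaks ties FCFS.
        heapq.heappush(self._heap, (priority, self._seq, request_id))
        self._seq += 1

    def next(self):
        return heapq.heappop(self._heap)[2]


qps = {"http://a": 1.0, "http://b": 5.0}
backend = pick_backend(qps)      # router picks the least-loaded instance
sched = ToyScheduler()           # that instance then orders its own requests
sched.add("req-low", priority=10)
sched.add("req-high", priority=0)
print(backend, sched.next())     # http://a req-high
```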

@codesun8

Hello, I think there is redundancy between this logic and the session handling. During session routing, even if no session ID is specified, the request still goes through the _qps_routing method and is routed based on QPS.

@codesun8

I believe a routing strategy based on request count is missing. If an abnormal request occurs, it can easily cause a pod to become blocked.

@nejch
Contributor

nejch commented Mar 1, 2026

I think there's value in having something like this (essentially just an explicit QPS routing without session keys), simply because this is already the behavior of session-based routing when no session headers are used. I didn't actually know session routing falls back to this. But maybe there's a smarter way to ensure pods don't get blocked, not sure.

I believe there is a lack of a routing strategy based on request count. If an abnormal request occurs, it can easily cause the pod to become blocked.

@codesun8 this is already the case for session-based routing when going with the QPS fallback, no?
