
[codex] personalize ranking and expose candidate sources#9

Closed
Felix3322 wants to merge 1 commit into scukeqi:main from Felix3322:codex/personalize-ranking-pr

Conversation

@Felix3322

Summary

This PR extracts the local commit `feat(llm): personalize ranking and expose candidate sources` into its own standalone branch and ports it cleanly onto main.

It improves candidate ranking and source handling for the LLM path by introducing a reusable user-preference hint derived from context history, then threading that hint through the OpenAI-compatible, HF constraint, and llama.cpp providers.

What changes

1. Personalization hint from recent history

  • extend context history with a lightweight preference-hint mechanism
  • use recent committed text to summarize user-preferred wording / phrasing patterns
  • pass this preference hint into candidate prediction rather than relying on raw context only
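The mechanism described above could look roughly like the following sketch. The names (`CommitHistory`, `PreferenceHint`, the cap of 8 entries) are illustrative assumptions, not the actual identifiers in this PR:

```cpp
#include <deque>
#include <string>

// Hypothetical sketch of a lightweight preference-hint mechanism:
// keep a bounded window of recently committed text and collapse it
// into a single hint string that providers can embed in a prompt.
class CommitHistory {
 public:
  void Push(const std::string& committed) {
    entries_.push_back(committed);
    if (entries_.size() > kMaxEntries) entries_.pop_front();  // drop oldest
  }

  // Summarize recent commits into one compact hint. A real version
  // might dedupe or extract phrasing patterns; joining is the
  // simplest placeholder policy.
  std::string PreferenceHint() const {
    std::string hint;
    for (const auto& e : entries_) {
      if (!hint.empty()) hint += " / ";
      hint += e;
    }
    return hint;
  }

 private:
  static constexpr std::size_t kMaxEntries = 8;  // assumed window size
  std::deque<std::string> entries_;
};
```

The key property is that the hint is derived from committed text only, so it reflects what the user actually accepted rather than raw context.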

2. Provider-side ranking improvements

  • update LLMProvider interface to accept an optional preference_hint
  • propagate the hint through:
    • OpenAICompatibleProvider
    • HFConstraintProvider
    • LlamaCppProvider
  • adjust prompts / request construction so the model can prefer candidates closer to the user's recent style when appropriate, without overriding current context/input constraints
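An interface extension of this shape keeps existing call sites working by making the hint optional. This is a minimal sketch under assumed names (`Predict`, `Candidate`); the real `LLMProvider` signature in the codebase may differ:

```cpp
#include <string>
#include <vector>

struct Candidate {
  std::string text;
};

// Sketch: the provider interface accepts an optional preference_hint.
// An empty string means "no hint", so callers that predate the change
// need no modification.
class LLMProvider {
 public:
  virtual ~LLMProvider() = default;
  virtual std::vector<Candidate> Predict(const std::string& context,
                                         const std::string& preference_hint = "") = 0;
};

class OpenAICompatibleProvider : public LLMProvider {
 public:
  std::vector<Candidate> Predict(const std::string& context,
                                 const std::string& preference_hint = "") override {
    // Fold the hint into the prompt only when present, so it biases
    // ranking without overriding the context/input constraints.
    std::string prompt = "Context: " + context;
    if (!preference_hint.empty()) {
      prompt += "\nUser style hint: " + preference_hint;
    }
    // A real implementation would send `prompt` to the backend and
    // parse candidates; here the prompt is echoed for illustration.
    return {Candidate{prompt}};
  }
};
```

`HFConstraintProvider` and `LlamaCppProvider` would follow the same pattern, each folding the hint into its own request construction.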

3. Candidate source visibility and UI ordering

  • update RimeWithWeasel candidate handling logic so LLM and native candidates are merged and displayed in a more intentional order
  • preserve / expose source-aware behavior when selecting or highlighting candidates
  • improve candidate display consistency in LLM prediction mode
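One way to make the merge order intentional and source-aware is to tag each candidate with its origin. Everything below (`CandidateSource`, `MergeCandidates`, the LLM-first dedup policy) is an assumed illustration, not the actual RimeWithWeasel code:

```cpp
#include <string>
#include <vector>

// Illustrative source tag so the UI can order and highlight
// candidates by origin.
enum class CandidateSource { kLlm, kNative };

struct SourcedCandidate {
  std::string text;
  CandidateSource source;
};

// Sketch of a merge policy: LLM candidates first, then native
// candidates, skipping native entries that duplicate an LLM one.
std::vector<SourcedCandidate> MergeCandidates(
    const std::vector<std::string>& llm,
    const std::vector<std::string>& native) {
  std::vector<SourcedCandidate> merged;
  for (const auto& t : llm) {
    merged.push_back({t, CandidateSource::kLlm});
  }
  for (const auto& t : native) {
    bool duplicate = false;
    for (const auto& m : merged) {
      if (m.text == t) { duplicate = true; break; }
    }
    if (!duplicate) merged.push_back({t, CandidateSource::kNative});
  }
  return merged;
}
```

Keeping the source tag on each merged entry is what allows selection and highlighting to stay source-aware after the lists are interleaved.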

Why

Before this change, candidate ranking primarily depended on the immediate context and backend output order. That made the system less adaptive to the user's own recent wording patterns and also made mixed-source candidate display less expressive.

This PR moves toward more personalized ranking while keeping the integration local to the existing LLM prediction flow.

Validation

Built in a clean worktree (to avoid mixing in unrelated local modifications) using real MSVC tooling.

Validated with:

  • get-rime.ps1 -use dev
  • MSVC x64 build of these targets:
    • WeaselIPC
    • WeaselUI
    • RimeWithWeasel
    • WeaselIPCServer
    • WeaselServer

Build completed successfully with warnings only.

Notes

This PR is intentionally separate from the OpenAI extra-request-params work in PR #7.

@Felix3322
Author

Superseded by #10.

PR #10 contains the current combined local working state and should be used as the active review target going forward. Closing this older split PR to avoid fragmented review.

@Felix3322
Author

Closed as superseded by #10.

@Felix3322 Felix3322 closed this Mar 19, 2026
