
Fix cache .set() kwargs usage#725

Open
RohanExploit wants to merge 1 commit into main from fix-cache-set-kwargs-17410130062223211849

Conversation

Owner

@RohanExploit RohanExploit commented May 3, 2026

Fix ThreadSafeCache.set calls across the codebase to correctly use explicit keyword arguments (data= and key=), preventing potential bugs and ensuring correct cache population.


PR created automatically by Jules for task 17410130062223211849 started by @RohanExploit


Summary by cubic

Fixed all ThreadSafeCache.set calls to use explicit data= and key= kwargs, preventing swapped arguments and broken cache entries. This stabilizes caching for recent issues, stats/leaderboard responses, and upload limits.

  • Bug Fixes
    • Standardized cache writes in backend/main_fixed.py, backend/routers/utility.py, and backend/utils.py.
    • Updated unit tests to use the new signature and verify expiration/eviction behavior.

Written for commit a156c23. Summary will update on new commits.

Summary by CodeRabbit

  • Refactor
    • Standardized internal cache function call signatures across multiple modules for improved code consistency and maintainability.

A previous update changed the `ThreadSafeCache.set` signature to `def set(self, data: Any, key: str = "default")` to fix bugs. Call sites that still passed positional arguments in the old order could therefore send the `key` as `data` (and vice versa), or raise type errors.

This commit refactors all cache `.set` usage to pass explicit `data=...` and `key=...` keyword arguments, avoiding those bugs and keeping call sites robust to future signature changes.
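The pattern this PR enforces can be sketched as follows. The `ThreadSafeCache` body here is a hypothetical stand-in; only the `set` signature follows the description above.

```python
import threading
from typing import Any

# Minimal stand-in for the real ThreadSafeCache; only the set() signature
# is taken from the PR description -- the body is illustrative.
class ThreadSafeCache:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._store: dict = {}

    def set(self, data: Any, key: str = "default") -> None:
        with self._lock:
            self._store[key] = data

    def get(self, key: str = "default") -> Any:
        with self._lock:
            return self._store.get(key)

cache = ThreadSafeCache()

# Risky: positional args depend on remembering that data comes first,
# so calls written for the old argument order silently swap data and key.
cache.set({"issues": []}, "recent")

# Preferred (what this PR enforces): explicit keywords are unambiguous.
cache.set(data={"issues": []}, key="recent")
```

Either call works with the current signature, but the keyword form stays correct even if the parameter order changes again.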
Copilot AI review requested due to automatic review settings May 3, 2026 11:16
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@netlify

netlify Bot commented May 3, 2026

Deploy Preview for fixmybharat canceled.

  • 🔨 Latest commit: a156c23
  • 🔍 Latest deploy log: https://app.netlify.com/projects/fixmybharat/deploys/69f72e924041630008edc69c

@github-actions

github-actions Bot commented May 3, 2026

🙏 Thank you for your contribution, @RohanExploit!

PR Details:

Quality Checklist:
Please ensure your PR meets the following criteria:

  • Code follows the project's style guidelines
  • Self-review of code completed
  • Code is commented where necessary
  • Documentation updated (if applicable)
  • No new warnings generated
  • Tests added/updated (if applicable)
  • All tests passing locally
  • No breaking changes to existing functionality

Review Process:

  1. Automated checks will run on your code
  2. A maintainer will review your changes
  3. Address any requested changes promptly
  4. Once approved, your PR will be merged! 🎉

Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken.

@coderabbitai

coderabbitai Bot commented May 3, 2026

📝 Walkthrough

Walkthrough

This pull request standardizes the cache API call signature across the backend codebase, changing cache.set() invocations from positional arguments to keyword arguments (data=, key=) in utility functions, routes, and tests.

Changes

Cache API Signature Standardization

Layer / File(s) Summary
Core Cache Calls
backend/main_fixed.py, backend/routers/utility.py, backend/utils.py
Cache set calls updated from positional arguments to keyword arguments: set(data, key) → set(data=data, key=key) across issue creation, response caching, and upload limit tracking.
Tests
backend/tests/test_cache_unit.py
Cache API calls in unit tests updated to use keyword arguments across test_cache_set_get, test_cache_expiration, test_cache_lru_eviction, test_cache_cleanup_logic, and test_cache_ordered_cleanup.
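A hedged sketch of what the updated unit tests might look like. The TTL mechanism and test bodies here are assumptions for illustration; only the keyword-argument `set(data=..., key=...)` call shape follows the PR description.

```python
import time
from typing import Any

# Stand-in cache with the signature described in the PR; the real
# ThreadSafeCache lives in the backend package and may differ internally.
class ThreadSafeCache:
    def __init__(self, ttl_seconds: float = 60.0) -> None:
        self._ttl = ttl_seconds
        self._store: dict = {}  # key -> (data, timestamp)

    def set(self, data: Any, key: str = "default") -> None:
        self._store[key] = (data, time.time())

    def get(self, key: str = "default") -> Any:
        entry = self._store.get(key)
        if entry is None:
            return None
        data, ts = entry
        if time.time() - ts > self._ttl:
            del self._store[key]  # expired entries are evicted on read
            return None
        return data

def test_cache_set_get():
    cache = ThreadSafeCache()
    cache.set(data={"v": 1}, key="a")   # keyword args, per this PR
    assert cache.get(key="a") == {"v": 1}

def test_cache_expiration():
    cache = ThreadSafeCache(ttl_seconds=0.01)
    cache.set(data="x", key="a")
    time.sleep(0.05)                    # wait past the TTL
    assert cache.get(key="a") is None
```

The point of the change is mechanical: every `set(value, name)` call becomes `set(data=value, key=name)` so the tests exercise the same signature production code uses.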

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

🐰 With keywords so clear, our caches now shine,
No more mystery in args—each position's defined,
From routers to utils, the pattern stands bright,
Standardized calls make the code feel so right!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 27.27%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Title check — ✅ Passed: The title clearly and specifically describes the main change: standardizing ThreadSafeCache.set() calls to use keyword arguments.
  • Description check — ✅ Passed: The description covers the purpose, impact, and scope, but lacks some template sections such as the Type of Change checkbox and explicit Testing Done verification.
  • Linked Issues check — ✅ Passed: Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: Skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

Copilot AI left a comment


Pull request overview

This PR standardizes a few ThreadSafeCache.set() call sites to use explicit keyword arguments (data= / key=) to align with the cache API and reduce ambiguity.

Changes:

  • Update user_upload_cache.set(...) in upload-limit logic to pass data= and key= explicitly.
  • Update recent_issues_cache.set(...) in utility router endpoints to pass data= and key= explicitly.
  • Update unit tests to use keyword arguments when calling ThreadSafeCache.set(...).

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

File Description
backend/utils.py Uses explicit data=/key= when persisting upload timestamps into user_upload_cache.
backend/tests/test_cache_unit.py Updates cache unit tests to call set() with explicit keyword args.
backend/routers/utility.py Uses explicit data=/key= for caching serialized JSON responses.
backend/main_fixed.py Uses explicit data= for default-key cache writes in recent-issues caching paths.


Suggested change in backend/routers/utility.py:

  data = response.model_dump(mode='json')
  json_data = json.dumps(data)
- recent_issues_cache.set(json_data, "stats")
+ recent_issues_cache.set(data=json_data, key="stats")

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/utils.py (1)

49-64: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Keyword-arg update is correct; pre-existing TOCTOU in the rate-limit check.

The get → filter → check → append → set sequence is not atomic. Two concurrent requests for the same identifier can both read the same current_uploads, both pass the limit check, both append, and both write back — effectively doubling the allowed burst. This was present before this PR, but worth addressing since the write side is the changed line.

A simple fix is to move the full compound operation inside the cache lock, or use an atomic increment structure (e.g., a threading.Lock per identifier).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/utils.py` around lines 49 - 64, The rate-limit check suffers a TOCTOU
race: the sequence user_upload_cache.get(identifier) → filter →
len(recent_uploads) check → append → user_upload_cache.set(...) must be
performed atomically; wrap the whole sequence (get, filter old timestamps,
compare against limit, append now, set) inside a per-identifier lock (e.g., a
threading.Lock stored in a locks dict keyed by identifier or using any existing
cache lock API) so concurrent requests for the same identifier cannot
interleave, and ensure you release the lock after updating; modify the code
around user_upload_cache.get/set and recent_uploads to acquire the lock before
reading and only perform the check+append+set while holding the lock.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 72970858-9f99-4607-8a1d-6bcfa2384297

📥 Commits

Reviewing files that changed from the base of the PR and between f837f7b and a156c23.

📒 Files selected for processing (4)
  • backend/main_fixed.py
  • backend/routers/utility.py
  • backend/tests/test_cache_unit.py
  • backend/utils.py

Contributor

@cubic-dev-ai cubic-dev-ai Bot left a comment


No issues found across 4 files

