Improving lookup for specification pre-loading at startup #174
Conversation
📝 Walkthrough

The PR changes the verifier cache lookup to build a per-function index once.
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@main.py`:
- Around lines 540-547: The loop over cached_vresults[function] contains a
redundant condition comparing function != vresult.get_function(); since the
entries are already indexed by function, drop that check and keep only the
status-based filtering (vresult.status in statuses) and the priority selection
that updates highest_priority_vresult by comparing vresult_priority indices.
- Line 513: Change the parameter type of cached_vresults on
_get_cached_vresult_with_status from defaultdict[...] to a broader type (e.g.,
dict[CFunction, list[VerificationResult]] or Mapping[CFunction,
list[VerificationResult]]) and stop relying on indexing; use
cached_vresults.get(function, []) when reading entries. Update the function
signature to the chosen broader type and adjust any internal access to use .get
so the helper no longer depends on defaultdict's default-factory behavior,
making it testable with plain dicts or mappings.
In `@util/cache_util.py`:
- Around lines 28-30: Replace the fragile subscript lookup vresult_cache[vinput]
with a safe get to avoid KeyError races: iterate over vresult_cache.iterkeys()
as before, but call vresult_cache.get(vinput) with a walrus/truthy check to skip
missing entries; still append to function_to_vresults using
vresult.get_function(), so behavior stays identical when the value exists.
- Around lines 22-23: In the docstring describing the
defaultdict[CFunction, list[VerificationResult]] return type, the sentence
"Default value for keys is an empty list" is missing a trailing period; add it
so the punctuation matches the rest of the docstring.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2ae394db-2cb4-49f6-9b20-25dabafd2e1e
📒 Files selected for processing (3)
- main.py
- util/__init__.py
- util/cache_util.py
Due to DiskCache limitations, the lookup for setting previously-verified specifications from the verification result cache took O(n * m), where `n` is the number of functions in the graph and `m` is the number of cached specifications (failing and passing). This took considerable time even with `n = 36` and `m = 1617`. It now takes O(m) + O(n).
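The complexity change can be illustrated with a minimal sketch, deliberately unrelated to the project's actual types: replace a per-function scan of all cached results with a single indexing pass.

```python
from collections import defaultdict, namedtuple

Result = namedtuple("Result", ["function", "status"])


# Old: for each of the n functions, scan all m cached results -- O(n * m).
def lookup_all_scan(functions, cached_results):
    return {f: [r for r in cached_results if r.function == f] for f in functions}


# New: index the m results once -- O(m) -- then each of the n lookups
# is an average O(1) dict access, for O(m) + O(n) overall.
def lookup_all_indexed(functions, cached_results):
    index = defaultdict(list)
    for r in cached_results:
        index[r.function].append(r)
    return {f: index.get(f, []) for f in functions}
```

With `n = 36` and `m = 1617` this cuts roughly 58,000 comparisons down to about 1,650 operations.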