
fix: address cache consistency issues between L1 and L2 #126

Merged
poyrazK merged 7 commits into main from fix/api-http-security-issues on May 2, 2026

Conversation

Owner

poyrazK commented Apr 30, 2026

Summary

Fixes five cache consistency issues between the L1 (in-memory) and L2 (Redis) cache layers of the DNS server.

Issues fixed

  • #106: L1 cache returns a direct reference to a mutable byte slice
  • #104: cache Invalidate only publishes, never deletes from Redis L2
  • #115: L1 cache TTL on an L2 hit ignores the remaining Redis TTL
  • #107: TOCTOU race between the L1/L2 cache check and population
  • #105: zone updates flush the L1 cache but not the L2 Redis cache

Changes

  • cache.go: Get() now returns a copy of the cached data, not the internal reference — prevents mutation corruption when handlePacket() rewrites the DNS transaction ID
  • redis.go: Invalidate() now deletes the key from Redis before publishing the invalidation event
  • redis.go: Added RemainingTTL() method to query Redis TTL for L1 population
  • server.go: Added sharded per-key locking (256 shards) around L1→L2 lookup-and-populate to close the TOCTOU race window
  • server.go: Zone updates via RFC 2136 now call Redis.Invalidate() to flush L2 and publish to all nodes

Test plan

  • go build ./... passes
  • go test ./internal/dns/server/... -race passes

- cache.go: return copy of byte slice from Get() to prevent mutation corruption (#106)
- redis.go: Invalidate() now deletes key from Redis before publishing (#104)
- redis.go: add RemainingTTL() method to get Redis TTL for L1 population (#115)
- server.go: add sharded per-key locking to close TOCTOU race window (#107)
- server.go: zone updates call Redis.Invalidate() to flush L2 and publish to all nodes (#105)
@coderabbitai

coderabbitai Bot commented Apr 30, 2026

Warning

Rate limit exceeded

@poyrazK has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 24 minutes and 30 seconds before requesting another review.




📥 Commits

Reviewing files that changed from the base of the PR and between 4eaa38a and 86f37f0.

📒 Files selected for processing (6)
  • internal/dns/server/cache.go
  • internal/dns/server/cache_test.go
  • internal/dns/server/redis.go
  • internal/dns/server/redis_test.go
  • internal/dns/server/server.go
  • internal/dns/server/server_internal_test.go

poyrazK added 6 commits April 30, 2026 16:43
- cache.go: replace append([]byte(nil), ...) idiom with explicit make/copy
- server.go: add tryLock spin loop before blocking on cache locks
- Lock was held during potentially slow L3 (DB) resolution, serializing all requests sharing the same lock shard
- Restructure so lock is released before L3 resolution begins; use fromL2 flag to return early if L2 hit
- Remove tryLock spin loop (blocked indefinitely under contention without runtime spinning)
- Remove runtime import since spin loop was the only use
Owner Author

@poyrazK poyrazK left a comment

It's okay to merge

@poyrazK poyrazK merged commit e140584 into main May 2, 2026
7 checks passed