Resolved bugs in streaks, Added Blog and Achievement section #190
Arpit-Mahajan09 wants to merge 2 commits into OpenLake:main from
Conversation
Walkthrough

This PR introduces achievement tracking with tier-based progression badges, adds community blogs functionality, implements a streak context for cross-platform streak management, creates new UI components (Progress, Textarea), and integrates these features with new routes, navigation items, and enhanced heatmap streak visualization.

Changes
Sequence Diagrams

```mermaid
sequenceDiagram
    participant User
    participant Achievements as Achievements Component
    participant Backend as Backend API
    participant Card as AchievementCard
    User->>Achievements: View achievements page
    activate Achievements
    Achievements->>Backend: Fetch user stats (all platforms)
    Backend-->>Achievements: stats object
    Achievements->>Backend: Fetch unlocked achievements
    Backend-->>Achievements: unlocked tiers list
    Note over Achievements: Evaluate each tier vs stats
    alt Tier requirement met & not unlocked
        Achievements->>Backend: POST unlock request
        Backend-->>Achievements: success
    end
    Achievements->>Card: Render with currentValue & unlockedTiers
    Card-->>User: Display tier badges & progress
    deactivate Achievements
```
```mermaid
sequenceDiagram
    participant User
    participant HomePage as HomePage Component
    participant Fetcher as PlatformStreakFetcher
    participant ExtAPI as External API<br/>(LeetCode/AtCoder)
    participant StreakCtx as StreakContext
    participant NavMenu as NavMenu
    User->>HomePage: Load home page
    activate HomePage
    HomePage->>Fetcher: Render for each platform
    activate Fetcher
    Fetcher->>ExtAPI: Fetch user submissions
    ExtAPI-->>Fetcher: submission data
    Note over Fetcher: Calculate consecutive-day streak
    Fetcher->>StreakCtx: updateStreak(platform, streak)
    deactivate Fetcher
    User->>NavMenu: View navbar
    activate NavMenu
    NavMenu->>StreakCtx: useStreak() hook
    StreakCtx-->>NavMenu: global streak value
    NavMenu-->>User: Display flame badge with streak
    deactivate NavMenu
    deactivate HomePage
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: 1 passed, 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
@check-spelling-bot Report

🔴 Please review. See the 📂 files view, the 📜 action log, or 📝 job summary for details.

Unrecognized words (610)

Some files were automatically ignored 🙈. These sample patterns would exclude them; you should consider excluding directory paths and adding them to the exclusion list. File matching is via Perl regular expressions. To check these files, more of their words need to be in the dictionary than not.

To accept these unrecognized words as correct and update file exclusions, you could run the following commands in a clone of the git@github.com:Arpit-Mahajan09/Leaderboard-Pro.git repository:

```shell
curl -s -S -L 'https://raw.githubusercontent.com/check-spelling/check-spelling/main/apply.pl' |
perl - 'https://github.com/OpenLake/Leaderboard-Pro/actions/runs/22368724931/attempts/1' &&
git commit -m 'Update check-spelling metadata'
```

Available 📚 dictionaries could cover words not in the 📘 dictionary. Consider adding them (in `with:`):

```yaml
extra_dictionaries: |
  cspell:django/dict/django.txt
  cspell:software-terms/dict/softwareTerms.txt
  cspell:python/src/common/extra.txt
  cspell:npm/dict/npm.txt
  cspell:html/dict/html.txt
```

To stop checking additional dictionaries, add:

```yaml
check_extra_dictionaries: ""
```

Warnings
| Count | |
|---|---|
| 15 | |
| 1 | |
| 5 | |
| 10 | |
See the advice below if you see a bunch of garbage.

If it relates to a well-formed pattern:
- See if there's a pattern that would match it.
- If not, try writing one and adding it to the patterns.txt file.
- Patterns are Perl 5 Regular Expressions - you can test yours before committing to verify it will match your lines.
- Note that patterns can't match multiline strings.

If it relates to a binary-ish string:
- Please add a file path to the excludes.txt file instead of just accepting the garbage.
- File paths are Perl 5 Regular Expressions - you can test yours before committing to verify it will match your files.
- ^ refers to the file's path from the root of the repository, so ^README\.md$ would exclude README.md (on whichever branch you're using).
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
app/src/components/HomePage.jsx (1)
100-106: ⚠️ Potential issue | 🟡 Minor

Stale `contributions` prop passed to `Heatmap` - no longer accepted.

`Heatmap`'s updated signature is `({ platform, username })`; the `contributions` prop (line 103) is silently ignored.

♻️ Proposed cleanup

```diff
 <Heatmap
   platform={info.platform}
-  contributions={info.platform === 'github' ? (usernames.github?.contributions || 0) : 0}
   username={info.username}
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/HomePage.jsx` around lines 100 - 106, The Heatmap component no longer accepts a contributions prop, so remove the stale contributions prop passed in the HomePage JSX (the Heatmap usage where platform={info.platform}, contributions={...}, username={info.username}); update the component invocation to only pass the supported props (platform and username) so it matches Heatmap's new signature ({ platform, username }) and eliminate the ignored contributions expression.

app/src/components/Heatmap.jsx (2)
142-188: ⚠️ Potential issue | 🔴 Critical

GitHub streak is never computed - `calculateStreak` is not called in `processGitHubData`.

`processGitHubData` calls `setHeatmapData(heatmapArray)` but never calls `calculateStreak(heatmapArray)`. As a result:

- The `currentStreak` state stays at 0 for GitHub, so the flame badge always shows 0.
- `updateStreak` is never called for `'github'`, so `StreakContext` never receives a non-zero GitHub streak and `globalStreak` in `NavMenu` is permanently unaffected by GitHub activity.

🐛 Proposed fix

```diff
   setHeatmapData(heatmapArray);
+  calculateStreak(heatmapArray);
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/Heatmap.jsx` around lines 142 - 188, processGitHubData builds heatmapArray but never computes or publishes the GitHub streak; after constructing heatmapArray (before or after setHeatmapData) call calculateStreak(heatmapArray) to get the streak value, then call updateStreak('github', streak) and setCurrentStreak(streak) (or the component state setter used for currentStreak) so the flame badge and StreakContext receive the computed GitHub streak; reference the processGitHubData function, calculateStreak, updateStreak and the currentStreak state setter when making the change.
43: ⚠️ Potential issue | 🟡 Minor

Remove debug `console.log` statements.

Lines 43 (`"GitHub API Response:"`) and 186 (`"Processed heatmap data:"`) are debug artifacts that will pollute the browser console in production.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/Heatmap.jsx` at line 43, Remove the debug console.log calls in the Heatmap.jsx component—specifically the console.log("GitHub API Response:", data) in the GitHub fetch handler and the console.log("Processed heatmap data:", ...) in the heatmap processing routine (e.g., inside functions like fetchGitHubData and processHeatmapData); either delete them or replace them with a development-only conditional (process.env.NODE_ENV === 'development') or a proper logger.debug call so they do not emit in production.

app/src/components/NavMenu.jsx (1)
34-64: ⚠️ Potential issue | 🟡 Minor

Flame badge condition `globalStreak >= 0` is always true - badge shows 🔥 0 on every page load before streaks resolve.

`Math.max(streaks.codeforces || 0, streaks.github || 0)` is always ≥ 0, so the badge is unconditionally rendered. The StreakContext initializes both codeforces and github to 0, causing the badge to display immediately with a zero value before fetch requests complete. If the intent is to show it only once a real streak has been established, the guard should be `> 0`. If showing at zero is intentional, add a loading state to StreakContext so the badge doesn't display until data has been fetched or marked as ready.

Additionally, `globalStreak` is derived only from `codeforces` and `github` streaks. `PlatformStreakFetcher` also supports `leetcode` and `atcoder`, and their streaks are stored in the context via `updateStreak()` but are not included in the `globalStreak` calculation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/NavMenu.jsx` around lines 34 - 64, The Flame badge is rendered unconditionally because globalStreak = Math.max(streaks.codeforces || 0, streaks.github || 0) is always >= 0; update the code in NavMenu.jsx to include all platforms and avoid showing zero by either computing globalStreak = Math.max(streaks.codeforces || 0, streaks.github || 0, streaks.leetcode || 0, streaks.atcoder || 0) and changing the render guard to globalStreak > 0, or add a loading/readiness flag in StreakContext (e.g., streaks.isLoaded set by PlatformStreakFetcher via updateStreak) and render the badge only when streaks.isLoaded && globalStreak > 0. Ensure you update the StreakContext init and PlatformStreakFetcher to set the loading flag when fetches complete if you choose the loading-state approach.
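The first option above (include every platform and guard on a positive value) can be sketched as a small helper. The function name and the plain-object shape of `streaks` are assumptions for illustration, not the PR's code.

```javascript
// Hypothetical helper (names are assumptions): derive the global streak from
// every platform stored in StreakContext, not just codeforces and github.
function computeGlobalStreak(streaks) {
  const values = Object.values(streaks || {}).map((v) => v || 0);
  return values.length ? Math.max(...values) : 0;
}
```

With a helper like this, the badge render guard becomes `computeGlobalStreak(streaks) > 0`, so 🔥 0 never flashes before the fetches resolve.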
🧹 Nitpick comments (2)
app/src/components/Heatmap.jsx (1)
358-382: `TooltipProvider` is instantiated once per heatmap cell - move it outside the map.

Rendering `<TooltipProvider>` inside nested `.map()` calls creates 84+ provider instances per heatmap render. `TooltipProvider` should wrap the entire heatmap grid once.

♻️ Proposed refactor

```diff
-  return (
-    <div className="mt-3">
+  return (
+    <TooltipProvider delayDuration={100}>
+      <div className="mt-3">
       ...
       <div className="flex gap-[2px]">
         {heatmapData.map((week, weekIndex) => (
           <div key={weekIndex} className="flex flex-col gap-[2px]">
             {week.map((day, dayIndex) => {
               ...
               return (
-                <TooltipProvider key={...} delayDuration={100}>
                   <Tooltip>
                     <TooltipTrigger asChild>
                       <div ... />
                     </TooltipTrigger>
                     <TooltipContent><p>{titleText}</p></TooltipContent>
                   </Tooltip>
-                </TooltipProvider>
               )
             })}
           </div>
         ))}
       </div>
       ...
       </div>
+    </TooltipProvider>
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/Heatmap.jsx` around lines 358 - 382, The TooltipProvider is being created for every cell inside heatmapData.map; instead, move the TooltipProvider to wrap the entire grid (the outer element that maps weeks and days) so only one provider instance exists. Update Heatmap.jsx by removing TooltipProvider from inside the nested heatmapData.map loop and wrap the parent JSX that renders the weeks/days with a single <TooltipProvider> that contains the existing <Tooltip>, <TooltipTrigger>, and <TooltipContent> usage for each cell (keep per-cell keys on the cell wrapper or Tooltip/TooltipTrigger, e.g., using `${weekIndex}-${dayIndex}`) so the tooltip components still function without creating 84+ providers. Ensure getColorIntensity, getTooltipText, and streakCells logic remain unchanged and that Tooltip, TooltipTrigger, and TooltipContent remain as children of the single TooltipProvider.

app/src/components/PlatformStreakFetcher.jsx (1)
12-73: No fetch timeout - third-party API calls hang indefinitely on slow/unresponsive endpoints.

Both `alfa-leetcode-api.onrender.com` (a community Render.com deployment that cold-starts) and `kenkoooo.com` are external services with no SLA. Without a timeout, a slow response blocks the `useEffect` indefinitely and `updateStreak` is never called, leaving the context stale.

♻️ Suggested pattern using `AbortController`

```javascript
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 10000); // 10s
try {
  const response = await fetch(url, { signal: controller.signal });
  ...
} finally {
  clearTimeout(timeoutId);
}
```

Also applies to: 76-130
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/src/components/PlatformStreakFetcher.jsx` around lines 12 - 73, The fetchLeetcodeStreak function currently calls fetch without a timeout causing indefinite hangs; wrap the fetch in an AbortController with a timeout (e.g., 10s), pass controller.signal to fetch, clear the timeout in a finally block, and catch AbortError to call updateStreak(platform, 0) and log appropriately; apply the same AbortController/timeout pattern to the other platform fetchers referenced in the file (the block around lines 76-130) so all external requests use a bounded timeout.
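The AbortController pattern above can be packaged as a reusable wrapper. This is a sketch under the comment's 10-second suggestion; the injectable `fetchImpl` parameter is an assumption added so the helper is testable, not part of the PR.

```javascript
// Sketch only: wrap a fetch-like function with an AbortController timeout.
// `fetchImpl` defaults to the global fetch (Node 18+/browsers).
async function fetchWithTimeout(url, { timeoutMs = 10000, fetchImpl = fetch, ...opts } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Resolves or rejects before the timer fires, or aborts with AbortError.
    return await fetchImpl(url, { ...opts, signal: controller.signal });
  } finally {
    clearTimeout(timer); // always cancel the pending abort
  }
}
```

Callers would catch `AbortError` and fall back to `updateStreak(platform, 0)` as the prompt below suggests.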
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/test_cc.py`:
- Around line 17-18: The module-level calls to find_codechef_streak("tourist")
and find_codechef_streak("arpit-mahajan09") cause network I/O at import time;
move these calls into a runtime guard by wrapping them in an if __name__ ==
"__main__": block (i.e., place the two find_codechef_streak(...) invocations
inside that guard) so importing api.test_cc no longer triggers requests.get()
side effects.
- Around line 4-7: The function find_codechef_streak currently calls
requests.get without a timeout and without checking the response, and there are
network calls executed at import time; update find_codechef_streak to call
requests.get(url, timeout=...) and immediately call response.raise_for_status(),
wrap the request/parse in a try/except catching requests.RequestException (and
optionally logging or returning None/raising a clear error), and remove any
module-level invocation of find_codechef_streak so calls happen only inside a
test or an if __name__ == "__main__": block or from the test harness; use the
function name find_codechef_streak to locate where to add timeout,
raise_for_status(), and the try/except, and move any direct calls out of module
scope.
In `@app/src/components/Achievements.jsx`:
- Around line 8-10: The code incorrectly destructures globalStreak from
useStreak() even though the hook only provides streaks and updateStreak; fix by
computing globalStreak locally in Achievements.jsx from the returned streaks
(e.g., derive the max or sum depending on intended metric) instead of expecting
it from useStreak(), update any references that use globalStreak to use the
locally computed value, and keep useStreak() usage as const { streaks,
updateStreak } = useStreak() while adding a small helper or inline computation
(e.g., computeGlobalStreak(streaks) or const globalStreak = ...) to produce the
correct streak-based achievement values.
In `@app/src/components/Blogs.jsx`:
- Around line 86-96: The PUT request body in Blogs.jsx is missing the required
username field; update the JSON payload in the fetch call that currently sends
title, likes, and dislikes to also include username: currentUsername (use the
existing currentUsername variable), so the backend view that expects
request.data["username"] can find the user; locate the fetch block around the
PUT to /discussionpost/ and add username: currentUsername to the object being
JSON.stringify'd.
In `@app/src/components/Heatmap.jsx`:
- Around line 216-240: calculateStreak only skips a single trailing zero so
padded future zeros from processCodeforcesData cause real streaks to be cut;
change calculateStreak to first walk backward over all trailing zero-level cells
(the future padding) to find the last candidate index, then from that index
iterate backwards counting consecutive cells with level > 0 and adding their
indices to newStreakCells before calling setCurrentStreak and setStreakCells
(keep updateStreak(platform, streak) behavior). If you need the original "skip
only today" semantics instead, replace the blanket trailing-zero skip with a
date-aware check that only skips cells whose date >= today.
- Around line 360-382: Rendering computes flatIndex as weekIndex * 7 + dayIndex
which assumes every week has 7 days and causes misaligned streak highlighting;
change the flatIndex calculation in Heatmap.jsx (inside the weeks.map / week.map
render) to compute the cumulative offset using actual week lengths, e.g. offset
= weeks.slice(0, weekIndex).reduce((sum, w) => sum + w.length, 0) and then
flatIndex = offset + dayIndex so it matches the indices used by calculateStreak
and the streakCells Set.
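The cumulative-offset fix described above is small enough to isolate. Here `weeks` mirrors the heatmapData shape (an array of weeks, each an array of day cells), and the function name is illustrative:

```javascript
// Compute a cell's flat index from the actual lengths of the preceding
// weeks, so partial first/last weeks no longer misalign streak highlighting.
function flatIndexOf(weeks, weekIndex, dayIndex) {
  const offset = weeks
    .slice(0, weekIndex)
    .reduce((sum, week) => sum + week.length, 0);
  return offset + dayIndex;
}
```

For a first week of 3 days, `weekIndex * 7 + dayIndex` would report 7 for the first cell of week 1, while the actual flat position is 3.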
In `@app/src/components/HomePage.jsx`:
- Around line 140-146: The call to PlatformStreakFetcher with
platform="codechef" in HomePage.jsx is dead because the PlatformStreakFetcher
component only handles "leetcode" and "atcoder"; either implement CodeChef
support inside PlatformStreakFetcher (add a branch in its useEffect to fetch
CodeChef streaks and update the same state/dispatch used for other platforms) or
remove the PlatformStreakFetcher invocation from HomePage.jsx until support is
added; locate PlatformStreakFetcher (the function/component) and update its
platform handling logic to include "codechef" (fetcher function, parsing, and
state update) or delete the HomePage.jsx line that renders
<PlatformStreakFetcher platform="codechef" … /> to avoid misleading dead code.
In `@app/src/components/PlatformStreakFetcher.jsx`:
- Around line 76-129: fetchAtcoderStreak is requesting the user's entire AtCoder
history via from_second=0 which can return MBs of data; change the fetch to set
from_second to roughly 90 days ago (compute fromSecond using Date.now()/1000
minus 90*24*60*60) so the request only returns recent submissions, keep the rest
of the logic (parsing epoch_second, creating daysWithSubmissions, computing
streak and calling updateStreak) unchanged and ensure you use the computed
fromSecond in the URL string for the fetch call.
- Around line 40-64: The loop in PlatformStreakFetcher.jsx uses raw epoch
timestamps (keys) and a fuzzy threshold (expectedDiff <= 1.5), which breaks when
multiple submissions occur the same day; normalize each timestamp to UTC/local
midnight, deduplicate to uniqueDays, sort descending (e.g., uniqueDays), then
iterate uniqueDays instead of keys and compute integer day differences using
MS_PER_DAY (24*60*60*1000) so you count same-day once and only treat consecutive
days when (prevDay - day) / MS_PER_DAY === 1; remove the 1.5 threshold and use
exact integer day comparisons (reference variables/functions: keys, streak,
expectedDiff, prevTimestamp — add helper toDay/uniqueDays).
- Line 7: The effect in PlatformStreakFetcher is triggering an infinite refetch
because updateStreak from StreakProvider is recreated every render; fix by
making updateStreak a stable callback in StreakContext (wrap the updateStreak
function in useCallback inside the StreakProvider so its reference is stable) or
alternatively remove updateStreak from the useEffect deps in
PlatformStreakFetcher and add an ESLint disable comment with a short
justification; target the updateStreak definition in
StreakContext/StreakProvider and the useEffect in PlatformStreakFetcher when
applying the change.
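The day-normalization and 90-day-window suggestions above can be sketched together. Function names are illustrative, and this streak ends at the most recent submission day; a date-aware "is it today" check would be layered on top.

```javascript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Bucket epoch-second timestamps (AtCoder's epoch_second) into UTC day
// numbers, deduplicate, and count only exact one-day gaps as consecutive.
function streakFromEpochSeconds(epochSeconds) {
  const uniqueDays = [...new Set(
    epochSeconds.map((s) => Math.floor((s * 1000) / MS_PER_DAY)),
  )].sort((a, b) => b - a); // newest day first
  let streak = uniqueDays.length ? 1 : 0;
  for (let i = 1; i < uniqueDays.length; i++) {
    if (uniqueDays[i - 1] - uniqueDays[i] !== 1) break;
    streak += 1;
  }
  return streak;
}

// The suggested 90-day request window, as an epoch second for from_second:
const fromSecondFor = (nowMs) => Math.floor(nowMs / 1000) - 90 * 24 * 60 * 60;
```

Same-day duplicate submissions collapse into one day, and the exact `=== 1` comparison replaces the fuzzy `<= 1.5` threshold flagged above.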
In `@app/src/components/ui/progress.jsx`:
- Around line 1-2: Import path for the Progress component is incorrect and the
`@radix-ui/react-progress` dependency is missing; add "@radix-ui/react-progress":
"^1.1.1" to package.json dependencies and update the import in
app/src/components/ui/progress.jsx to import Progress as ProgressPrimitive from
"@radix-ui/react-progress" (so the file uses the same ProgressPrimitive
identifier as the rest of the code).
- Around line 6-22: The Progress component currently extracts value and ignores
indicatorClassName; forward the value prop into ProgressPrimitive.Root (so Radix
can set aria-valuenow and render determinate progress) and accept an
indicatorClassName prop in the Progress signature, then merge that into the
ProgressPrimitive.Indicator className (preserving the existing "bg-primary
h-full w-full flex-1 transition-all" classes via cn). Update the function
signature (Progress) to include indicatorClassName and forward value to
ProgressPrimitive.Root while keeping className forwarded to the root so
AchievementCard's tier colors apply correctly.
In `@app/src/utils/achievements.js`:
- Around line 49-100: The cf_gladiator achievement description promises
"Participate in rated contests" but its evaluate function
(cf_gladiator.evaluate) currently returns stats?.codeforces?.total_solved, so it
unlocks by problems solved instead of contests; fix by either updating the
description to reference "total solved" or (preferred) change
cf_gladiator.evaluate to compute contest count from available contest history
fields such as stats?.codeforces?.rating_updates?.length or
stats?.codechef?.contests?.length (falling back to 0), and use that count to
compare against the tiers so the metric matches the description.
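The "evaluate each tier vs stats" step from the walkthrough diagram reduces to a threshold comparison. The tier shape (`name`/`threshold`) here is an assumed illustration, not the PR's actual data model:

```javascript
// Given the current metric value and ordered tier thresholds, return the
// names of tiers the user has unlocked.
function unlockedTierNames(currentValue, tiers) {
  return tiers
    .filter((tier) => currentValue >= tier.threshold)
    .map((tier) => tier.name);
}
```

Per the cf_gladiator comment, the important part is that `currentValue` is the metric the description promises (contest count), not a different stat.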
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)

- api/test_cc.py
- app/src/App.jsx
- app/src/Context/StreakContext.jsx
- app/src/components/AchievementCard.jsx
- app/src/components/Achievements.jsx
- app/src/components/Blogs.jsx
- app/src/components/Heatmap.jsx
- app/src/components/HomePage.jsx
- app/src/components/NavMenu.jsx
- app/src/components/Navbar.jsx
- app/src/components/PlatformStreakFetcher.jsx
- app/src/components/ui/progress.jsx
- app/src/components/ui/textarea.jsx
- app/src/index.jsx
- app/src/utils/achievements.js
```python
def find_codechef_streak(username):
    url = f"https://www.codechef.com/users/{username}"
    page = requests.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
```
🧩 Analysis chain

🏁 Scripts executed (repository: OpenLake/Leaderboard-Pro):

```shell
# Check if file exists and examine the specific lines
ls -la api/test_cc.py 2>/dev/null && cat -n api/test_cc.py || echo "File not found"

# Search for all requests.get calls in the api directory
rg -n "requests\.get\(" api -g "*.py" -A 2

# Also check for timeout usage in general to understand codebase patterns
rg -n "timeout=" api -g "*.py" | head -20
```
Add timeout and error handling for the external request, and move execution from module scope.
Line 6 uses requests.get without a timeout or status handling; this can hang test runs indefinitely and parse error pages as valid data. Additionally, lines 17–18 execute network calls at module import time, which will block during import.
Add timeout, raise_for_status(), and move the function calls into a proper test or main block:
🛠️ Proposed fix

```diff
-    page = requests.get(url)
-    soup = BeautifulSoup(page.text, "html.parser")
+    page = requests.get(url, timeout=10)
+    page.raise_for_status()
+    soup = BeautifulSoup(page.text, "html.parser")
```

And wrap the function calls:

```diff
-find_codechef_streak("tourist")
-find_codechef_streak("arpit-mahajan09")
+if __name__ == "__main__":
+    find_codechef_streak("tourist")
+    find_codechef_streak("arpit-mahajan09")
```

🧰 Tools
🪛 Ruff (0.15.2)
[error] 6-6: Probable use of requests call without timeout
(S113)
```python
find_codechef_streak("tourist")
find_codechef_streak("arpit-mahajan09")
```
🧩 Analysis chain

🏁 Scripts executed (repository: OpenLake/Leaderboard-Pro):

```shell
# Find and read the test_cc.py file to see the actual code
fd -t f "test_cc.py" -x cat -n

# Also search for the find_codechef_streak function definition to understand what it does
rg -n "def find_codechef_streak" -A 10
```
Remove network calls from module level.
Lines 17-18 execute requests.get() at import time via find_codechef_streak(), creating side effects and flaky failures. Wrap in a if __name__ == "__main__" guard:
Proposed fix

```diff
-find_codechef_streak("tourist")
-find_codechef_streak("arpit-mahajan09")
+if __name__ == "__main__":
+    find_codechef_streak("tourist")
+    find_codechef_streak("arpit-mahajan09")
```
```javascript
const { user } = useAuth();
const { globalStreak } = useStreak();
const [stats, setStats] = useState(null);
```
useStreak() doesn’t expose globalStreak — compute it from streaks.
Line 9 destructures globalStreak, but the context only provides { streaks, updateStreak }. This makes streak-based achievements evaluate to 0. Compute it locally or export globalStreak from the provider.
🛠️ Proposed fix

```diff
-  const { globalStreak } = useStreak();
+  const { streaks } = useStreak();
+  const globalStreak = Math.max(0, ...(streaks ? Object.values(streaks) : []));
```
+ const globalStreak = Math.max(0, ...(streaks ? Object.values(streaks) : []));📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
const { user } = useAuth();
const { streaks } = useStreak();
const globalStreak = Math.max(0, ...(streaks ? Object.values(streaks) : []));
const [stats, setStats] = useState(null);
```
```javascript
const response = await fetch(`${BACKEND}/discussionpost/`, {
  method: "PUT",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    title: post.title,
    likes: updatedLikes,
    dislikes: updatedDislikes,
  }),
});
```
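Per the aggregated Blogs.jsx comment earlier (the backend view reads `request.data["username"]`), the PUT body needs a `username` field. This is a sketch with an assumed helper name; `currentUsername` is presumed to be in scope in the component.

```javascript
// Build the PUT body for /discussionpost/, including the username the
// backend expects alongside the title and the updated vote counts.
function buildVotePayload(post, updatedLikes, updatedDislikes, currentUsername) {
  return {
    title: post.title,
    likes: updatedLikes,
    dislikes: updatedDislikes,
    username: currentUsername,
  };
}
```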
There was a problem hiding this comment.
🧩 Analysis chain
Repository: OpenLake/Leaderboard-Pro

🏁 Script executed:

```shell
# Check if Blogs.jsx file exists and has the problematic code
cat -n app/src/components/Blogs.jsx | sed -n '80,100p'
```

Length of output: 783

🏁 Script executed:

```shell
# Search for other API calls in Blogs.jsx to see how posts are identified elsewhere
rg "fetch.*discussionpost|post\\.id|post\\.title" app/src/components/Blogs.jsx -n
```

Length of output: 337

🏁 Script executed:

```shell
# Look for all discussionpost endpoint implementations in the entire codebase
rg "discussionpost" -t py -A 10 -B 2
```

Length of output: 1728

🏁 Script executed:

```shell
# Find the DiscussionPostManage view implementation
rg "class DiscussionPostManage" -A 50 -B 2 -t py
```

Length of output: 3238

🏁 Script executed:

```shell
# Check the DiscussionPost model definition
rg "class DiscussionPost" -A 30 -B 2 -t py
```

Length of output: 5637

🏁 Script executed:

```shell
# Look at the views.py file to understand how POST/PUT is handled
find . -name "views.py" -path "*/leaderboard/*" -exec cat -n {} \;
```

Length of output: 46373

🏁 Script executed:

```shell
# Check what fields the frontend is actually sending in the PUT request
cat -n app/src/components/Blogs.jsx | sed -n '85,100p'
```

Length of output: 581

🏁 Script executed:

```shell
# Verify if post.username is available in the frontend post object
rg "setBlogs|post\\.username|post\\[" app/src/components/Blogs.jsx -B 3 -A 3 | head -40
```

Length of output: 788

🏁 Script executed:

```shell
# Check if username is available in the post object or elsewhere
rg "username" app/src/components/Blogs.jsx -B 2 -A 2
```

Length of output: 787

🏁 Script executed:

```shell
# Check the GET endpoint to see what fields are returned in the post object
rg "fetchBlogs|fetch.*discussionpost" app/src/components/Blogs.jsx -A 5 -B 2
```

Length of output: 1522

🏁 Script executed:

```shell
# Check the full context of the like/dislike handler to see what post object contains
rg -A 30 "handleLikeDislike|handleInteraction" app/src/components/Blogs.jsx | head -50
```

Length of output: 50

🏁 Script executed:

```shell
# Find the function that contains the like/dislike update code
rg -B 20 "updatedLikes|updatedDislikes" app/src/components/Blogs.jsx | head -60
```

Length of output: 1246

🏁 Script executed:

```shell
# Check if currentUsername is available and used anywhere else
rg "currentUsername" app/src/components/Blogs.jsx
```

Length of output: 166

🏁 Script executed:

```shell
# Verify the exact error path in backend when username is missing
cat -n api/leaderboard/views.py | sed -n '542,562p'
```

Length of output: 1033
Add missing username field to the like/dislike update request.
The PUT request at lines 91-95 is missing the username field required by the backend endpoint. The backend looks up posts using the combination of user and title (line 551 in api/leaderboard/views.py) and will fail with a KeyError when attempting to access request.data["username"]. Update the request body to include username: currentUsername.
Note: The title field is already guaranteed to be unique by the database constraint (unique=True in the model), so identification by title is stable. However, always include the required fields for your backend contract.
🛠️ Fix for the request body
```diff
  body: JSON.stringify({
+   username: currentUsername,
    title: post.title,
    likes: updatedLikes,
    dislikes: updatedDislikes,
  }),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/Blogs.jsx` around lines 86 - 96, The PUT request body in
Blogs.jsx is missing the required username field; update the JSON payload in the
fetch call that currently sends title, likes, and dislikes to also include
username: currentUsername (use the existing currentUsername variable), so the
backend view that expects request.data["username"] can find the user; locate the
fetch block around the PUT to /discussionpost/ and add username: currentUsername
to the object being JSON.stringify'd.
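As a sketch, the corrected payload can be isolated in a pure builder function (the helper name `buildLikeDislikeBody` is hypothetical; `currentUsername`, `post`, and the counters come from the surrounding component):

```javascript
// Hypothetical helper: builds the PUT body the backend contract expects.
// DiscussionPostManage reads request.data["username"], so the field must
// always accompany title/likes/dislikes.
function buildLikeDislikeBody(currentUsername, post, updatedLikes, updatedDislikes) {
  return JSON.stringify({
    username: currentUsername,
    title: post.title,
    likes: updatedLikes,
    dislikes: updatedDislikes,
  });
}
```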
```js
const calculateStreak = (data) => {
  let streak = 0;
  const flatData = data.flat();
  const newStreakCells = new Set();

  for (let i = flatData.length - 1; i >= 0; i--) {
    // If the contribution level is greater than 0, increment streak
    const level = typeof flatData[i] === 'object' ? flatData[i].level : flatData[i];
    if (level > 0) {
      streak++;
      newStreakCells.add(i);
    } else {
      if (i === flatData.length - 1) {
        continue;
      } else {
        break;
      }
    }
  }
  setCurrentStreak(streak);
  setStreakCells(newStreakCells);
  if (updateStreak && platform) {
    updateStreak(platform, streak);
  }
};
```
calculateStreak skips only one trailing zero, but Codeforces data can have up to six future-padded zeros at the tail.
In processCodeforcesData (lines 127–130), up to 6 future-date cells (level: 0) are appended to complete the last partial week. calculateStreak only skips the very last element when its level is 0 (i === flatData.length - 1 → continue). The next element with level: 0 immediately hits break, so any streak from yesterday or earlier is reported as 0.
🐛 Proposed fix — skip all trailing zeros before the first active day
```diff
-      for (let i = flatData.length - 1; i >= 0; i--) {
-        const level = typeof flatData[i] === 'object' ? flatData[i].level : flatData[i];
-        if (level > 0) {
-          streak++;
-          newStreakCells.add(i);
-        } else {
-          if (i === flatData.length - 1) {
-            continue;
-          } else {
-            break;
-          }
-        }
-      }
+      // Skip all trailing zeros (future/empty days) before counting the streak
+      let i = flatData.length - 1;
+      while (i >= 0) {
+        const level = typeof flatData[i] === 'object' ? flatData[i].level : flatData[i];
+        if (level === 0) { i--; continue; } // skip trailing empties
+        break;
+      }
+      // Now count contiguous non-zero days
+      for (; i >= 0; i--) {
+        const level = typeof flatData[i] === 'object' ? flatData[i].level : flatData[i];
+        if (level > 0) {
+          streak++;
+          newStreakCells.add(i);
+        } else {
+          break;
+        }
+      }
```

Note: The original intent appears to allow "today with no activity yet" as a valid streak day (by skipping one zero). If that semantic is required, replace the blanket trailing-zero skip with a date-aware check that only skips cells whose date ≥ today.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/Heatmap.jsx` around lines 216 - 240, calculateStreak only
skips a single trailing zero so padded future zeros from processCodeforcesData
cause real streaks to be cut; change calculateStreak to first walk backward over
all trailing zero-level cells (the future padding) to find the last candidate
index, then from that index iterate backwards counting consecutive cells with
level > 0 and adding their indices to newStreakCells before calling
setCurrentStreak and setStreakCells (keep updateStreak(platform, streak)
behavior). If you need the original "skip only today" semantics instead, replace
the blanket trailing-zero skip with a date-aware check that only skips cells
whose date >= today.
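The proposed fix boils down to this pure function (extracted here for illustration; in Heatmap.jsx the result would also feed setCurrentStreak, setStreakCells, and updateStreak):

```javascript
// Sketch of the fixed streak logic: skip ALL trailing zero-level cells
// (future padding) before counting consecutive active days. Cells may be
// plain numbers or { level } objects, as in Heatmap.jsx.
function streakFromCells(flatData) {
  const levelOf = (cell) =>
    typeof cell === "object" && cell !== null ? cell.level : cell;
  let i = flatData.length - 1;
  // Skip trailing empties (e.g. up to 6 future-padded Codeforces cells).
  while (i >= 0 && levelOf(flatData[i]) === 0) i--;
  // Count contiguous non-zero days backwards from the last active cell.
  let streak = 0;
  for (; i >= 0 && levelOf(flatData[i]) > 0; i--) streak++;
  return streak;
}
```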
```js
for (let i = 0; i < keys.length; i++) {
  const timestamp = parseInt(keys[i], 10) * 1000;
  const subdate = new Date(timestamp);

  // Just simple daily streak counting logic
  // In a real scenario, you'd accurately calculate contiguous days from today or yesterday
  if (streak === 0) {
    const dayDiff = Math.floor((now - subdate) / (1000 * 60 * 60 * 24));
    if (dayDiff <= 1) { // 0 or 1 day ago
      streak = 1;
    } else {
      break;
    }
  } else {
    // difference from previous key
    const prevTimestamp = parseInt(keys[i-1], 10) * 1000;
    const expectedDiff = (prevTimestamp - timestamp) / (1000 * 60 * 60 * 24);
    // Due to DST / timezone shifts, it might be roughly 1. 0.9 to 1.1
    if (expectedDiff <= 1.5) {
      streak++;
    } else {
      break;
    }
  }
}
```
LeetCode streak timestamps are not normalized to UTC midnight, making day-diff calculations timezone-dependent and unreliable.
LeetCode submissionCalendar keys are epoch-second timestamps that may represent arbitrary submission times (not UTC midnight). Computing (prevTimestamp - timestamp) / (1000 * 60 * 60 * 24) between two same-day submission timestamps yields a value < 1, which would falsely break the streak loop.
The fix is to normalize each timestamp to its local/UTC midnight before comparing:

```js
const toDay = (epochMs) => {
  const d = new Date(epochMs);
  d.setHours(0, 0, 0, 0);
  return d.getTime();
};
const uniqueDays = [...new Set(keys.map(k => toDay(parseInt(k, 10) * 1000)))]
  .sort((a, b) => b - a);
// Then run the streak loop over uniqueDays
```

The expectedDiff <= 1.5 threshold on line 58 was intended to paper over this but is too coarse: it allows a 36-hour gap to count as consecutive.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/PlatformStreakFetcher.jsx` around lines 40 - 64, The loop
in PlatformStreakFetcher.jsx uses raw epoch timestamps (keys) and a fuzzy
threshold (expectedDiff <= 1.5), which breaks when multiple submissions occur
the same day; normalize each timestamp to UTC/local midnight, deduplicate to
uniqueDays, sort descending (e.g., uniqueDays), then iterate uniqueDays instead
of keys and compute integer day differences using MS_PER_DAY (24*60*60*1000) so
you count same-day once and only treat consecutive days when (prevDay - day) /
MS_PER_DAY === 1; remove the 1.5 threshold and use exact integer day comparisons
(reference variables/functions: keys, streak, expectedDiff, prevTimestamp — add
helper toDay/uniqueDays).
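A self-contained sketch of that approach follows (the helper names toDay and streakFromEpochSeconds are illustrative; a production version would also check that the newest day is today or yesterday before counting, as the original loop does):

```javascript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Normalize an epoch-millisecond timestamp to local midnight.
function toDay(epochMs) {
  const d = new Date(epochMs);
  d.setHours(0, 0, 0, 0);
  return d.getTime();
}

// Count consecutive submission days from epoch-second calendar keys,
// collapsing same-day submissions and using exact integer day differences.
function streakFromEpochSeconds(keys) {
  const uniqueDays = [...new Set(keys.map((k) => toDay(parseInt(k, 10) * 1000)))]
    .sort((a, b) => b - a); // newest first
  if (uniqueDays.length === 0) return 0;
  let streak = 1;
  for (let i = 1; i < uniqueDays.length; i++) {
    // Math.round absorbs DST shifts of +/- 1 hour.
    if (Math.round((uniqueDays[i - 1] - uniqueDays[i]) / MS_PER_DAY) === 1) {
      streak++;
    } else {
      break;
    }
  }
  return streak;
}
```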
```js
const fetchAtcoderStreak = async () => {
  try {
    const response = await fetch(`https://kenkoooo.com/atcoder/atcoder-api/v3/user/submissions?user=${username}&from_second=0`);
    const submissions = await response.json();

    if (submissions && submissions.length > 0) {
      // Submissions has epoch_second
      const daysWithSubmissions = new Set();

      submissions.forEach(sub => {
        // Kenkoooo AtCoder Problems API usually includes a result/verdict. Optional but good.
        // Usually AC means accepted. We'll just count any submission for activity,
        // or check sub.result === "AC". We will count any submission to match Codeforces logic.
        const subdate = new Date(sub.epoch_second * 1000);
        subdate.setHours(0, 0, 0, 0);
        daysWithSubmissions.add(subdate.getTime());
      });

      // Sort descending
      const sortedDays = Array.from(daysWithSubmissions).sort((a, b) => b - a);

      let streak = 0;
      const now = new Date();
      now.setHours(0, 0, 0, 0);

      for (let i = 0; i < sortedDays.length; i++) {
        const dayTime = sortedDays[i];

        if (streak === 0) {
          const dayDiff = Math.floor((now.getTime() - dayTime) / (1000 * 60 * 60 * 24));
          if (dayDiff <= 1) {
            streak = 1;
          } else {
            break;
          }
        } else {
          const prevTime = sortedDays[i-1];
          const expectedDiff = (prevTime - dayTime) / (1000 * 60 * 60 * 24);
          if (expectedDiff <= 1.5) {
            streak++;
          } else {
            break;
          }
        }
      }

      updateStreak(platform, streak);
    } else {
      updateStreak(platform, 0);
    }
  } catch (error) {
    console.error("Error fetching AtCoder streak:", error);
    updateStreak(platform, 0);
  }
```
AtCoder: from_second=0 fetches the user's entire submission history with no limit — can be MB of data for active users.
The endpoint kenkoooo.com/atcoder-api/v3/user/submissions?user=...&from_second=0 returns all submissions from the Unix epoch. A user with years of AtCoder history can accumulate thousands of submissions. Only the most recent ~90 days of data are needed to compute a streak.
Consider using a from_second value relative to ~90 days ago:

```js
const fromSecond = Math.floor(Date.now() / 1000) - 90 * 24 * 60 * 60;
const response = await fetch(
  `https://kenkoooo.com/atcoder/atcoder-api/v3/user/submissions?user=${username}&from_second=${fromSecond}`
);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/PlatformStreakFetcher.jsx` around lines 76 - 129,
fetchAtcoderStreak is requesting the user's entire AtCoder history via
from_second=0 which can return MBs of data; change the fetch to set from_second
to roughly 90 days ago (compute fromSecond using Date.now()/1000 minus
90*24*60*60) so the request only returns recent submissions, keep the rest of
the logic (parsing epoch_second, creating daysWithSubmissions, computing streak
and calling updateStreak) unchanged and ensure you use the computed fromSecond
in the URL string for the fetch call.
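A minimal sketch of the windowed request (WINDOW_DAYS = 90 is the assumed look-back, and the URL builder is illustrative, mirroring the fetch in fetchAtcoderStreak):

```javascript
const WINDOW_DAYS = 90; // assumed window, comfortably larger than any displayed streak

// Epoch-second cutoff for "roughly 90 days ago".
function fromSecondForWindow(nowMs = Date.now()) {
  return Math.floor(nowMs / 1000) - WINDOW_DAYS * 24 * 60 * 60;
}

// Illustrative URL builder mirroring the fetch in fetchAtcoderStreak.
function atcoderSubmissionsUrl(username, nowMs = Date.now()) {
  return `https://kenkoooo.com/atcoder/atcoder-api/v3/user/submissions?user=${username}&from_second=${fromSecondForWindow(nowMs)}`;
}
```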
```js
import * as React from "react"
import { Progress as ProgressPrimitive } from "radix-ui"
```
🧩 Analysis chain
Repository: OpenLake/Leaderboard-Pro

🏁 Script executed:

```shell
#!/bin/bash
# Check declared Radix Progress dependency in package.json
echo "=== Checking package.json for Radix dependencies ==="
cat package.json | grep -A5 -B5 "@radix-ui\|radix-ui" || echo "No direct grep match, checking full content..."
echo ""
echo "=== Full package.json content ==="
cat package.json
```

Length of output: 1530

🏁 Script executed:

```shell
#!/bin/bash
# Check existing imports for radix-ui in the codebase
echo "=== Searching for radix-ui imports ==="
rg -n 'from\s+["\'](@radix-ui|radix-ui)' app/src -g '*.js' -g '*.jsx' -g '*.ts' -g '*.tsx' --no-heading
```

Length of output: 314

🏁 Script executed:

```shell
#!/bin/bash
# Search for radix-ui imports with proper escaping
echo "=== Searching for radix-ui imports ==="
rg 'from.*radix-ui' app/src -g '*.js' -g '*.jsx' -g '*.ts' -g '*.tsx'
echo ""
echo "=== Searching entire codebase for radix ==="
rg 'radix' . --type js --type jsx --type ts --type tsx 2>/dev/null | head -20
```

Length of output: 1414

🏁 Script executed:

```shell
#!/bin/bash
# Verify the progress.jsx file exists and show its content
if [ -f "app/src/components/ui/progress.jsx" ]; then
  echo "=== File exists, showing content ==="
  cat -n app/src/components/ui/progress.jsx
else
  echo "File not found at app/src/components/ui/progress.jsx"
  echo "=== Searching for progress files ==="
  find app/src -name '*progress*' -type f
fi
```

Length of output: 951
Add missing @radix-ui/react-progress dependency and fix import path.
Line 2 imports from "radix-ui", but this package is not installed and not the correct import path. All other Radix components in the codebase (sheet, switch, tooltip, popover, tabs, etc.) import from scoped packages like @radix-ui/react-<module>. The @radix-ui/react-progress package is missing from package.json dependencies and must be added. Update the import to match the established pattern.
🛠️ Required fixes
- Add to package.json dependencies: `"@radix-ui/react-progress": "^1.1.1"`
- Fix the import in progress.jsx:

```diff
-import { Progress as ProgressPrimitive } from "radix-ui"
+import * as ProgressPrimitive from "@radix-ui/react-progress"
```

📝 Committable suggestion
```js
import * as React from "react"
import * as ProgressPrimitive from "@radix-ui/react-progress"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/ui/progress.jsx` around lines 1 - 2, Import path for the
Progress component is incorrect and the `@radix-ui/react-progress` dependency is
missing; add "@radix-ui/react-progress": "^1.1.1" to package.json dependencies
and update the import in app/src/components/ui/progress.jsx to import Progress
as ProgressPrimitive from "@radix-ui/react-progress" (so the file uses the same
ProgressPrimitive identifier as the rest of the code).
```jsx
function Progress({
  className,
  value,
  ...props
}) {
  return (
    <ProgressPrimitive.Root
      data-slot="progress"
      className={cn(
        "bg-primary/20 relative h-2 w-full overflow-hidden rounded-full",
        className
      )}
      {...props}>
      <ProgressPrimitive.Indicator
        data-slot="progress-indicator"
        className="bg-primary h-full w-full flex-1 transition-all"
        style={{ transform: `translateX(-${100 - (value || 0)}%)` }} />
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "progress.jsx" | head -20
```

Length of output: 104

🏁 Script executed:

```shell
cat -n app/src/components/ui/progress.jsx
```

Length of output: 914

🏁 Script executed:

```shell
rg -n "AchievementCard" --type jsx --type js -A 5 -B 5
```

Length of output: 95

🏁 Script executed:

```shell
rg -n "indicatorClassName" --type jsx --type js
```

Length of output: 95
🌐 Web query: Radix UI Progress Root component value prop ARIA accessibility

💡 Result:

In Radix UI Progress (@radix-ui/react-progress), the Progress.Root `value` prop is what drives the ARIA "current value" for the progressbar:

- `value: number` → renders a determinate progressbar and sets `aria-valuenow` to that number (and should be kept updated as progress changes). [1][3]
- `value: null` (or omit `value`) → renders an indeterminate progressbar and must not include `aria-valuenow`. [1][3]
- `max` → sets `aria-valuemax` (Radix uses `0` as the minimum, i.e. `aria-valuemin="0"`). [1][2]
- `getValueLabel(value, max)` → customizes the accessible value text (i.e. `aria-valuetext`) rather than the default percentage-style announcement. [1]

Also ensure the progressbar has an accessible name (e.g. aria-label or aria-labelledby) on Progress.Root. [2]

Example:

```jsx
<Progress.Root
  value={progress} // number, or null/undefined for indeterminate
  max={100}
  aria-label="Upload progress"
  getValueLabel={(value, max) => `${value} of ${max} files`}
>
  <Progress.Indicator />
</Progress.Root>
```

Sources: Radix Progress docs [1], MDN progressbar role requirements [2], MDN aria-valuenow guidance (omit when indeterminate) [3].
🏁 Script executed:

```shell
rg -i "achievementcard" --type js --type ts
```

Length of output: 297

🏁 Script executed:

```shell
find . -type f -name "*achievement*" -o -name "*Achievement*" | head -20
```

Length of output: 178

🏁 Script executed:

```shell
cat -n app/src/components/AchievementCard.jsx
```

Length of output: 5023
Forward value to Root and accept indicatorClassName for indicator styling.
Lines 6-22: value is extracted but never passed to ProgressPrimitive.Root, preventing Radix UI from setting aria-valuenow and rendering a determinate progressbar for accessibility. Additionally, indicatorClassName passed by AchievementCard (line 87 in AchievementCard.jsx) is ignored, so tier colors (Bronze/Silver/Gold/Platinum) fail to apply to the progress indicator.
🛠️ Proposed fix
```diff
-function Progress({
-  className,
-  value,
-  ...props
-}) {
+function Progress({
+  className,
+  value = 0,
+  indicatorClassName,
+  ...props
+}) {
   return (
     <ProgressPrimitive.Root
       data-slot="progress"
       className={cn(
         "bg-primary/20 relative h-2 w-full overflow-hidden rounded-full",
         className
       )}
-      {...props}>
+      value={value}
+      {...props}>
       <ProgressPrimitive.Indicator
         data-slot="progress-indicator"
-        className="bg-primary h-full w-full flex-1 transition-all"
-        style={{ transform: `translateX(-${100 - (value || 0)}%)` }} />
+        className={cn("bg-primary h-full w-full flex-1 transition-all", indicatorClassName)}
+        style={{ transform: `translateX(-${100 - value}%)` }} />
     </ProgressPrimitive.Root>
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/components/ui/progress.jsx` around lines 6 - 22, The Progress
component currently extracts value and ignores indicatorClassName; forward the
value prop into ProgressPrimitive.Root (so Radix can set aria-valuenow and
render determinate progress) and accept an indicatorClassName prop in the
Progress signature, then merge that into the ProgressPrimitive.Indicator
className (preserving the existing "bg-primary h-full w-full flex-1
transition-all" classes via cn). Update the function signature (Progress) to
include indicatorClassName and forward value to ProgressPrimitive.Root while
keeping className forwarded to the root so AchievementCard's tier colors apply
correctly.
```js
cf_gladiator: {
  title: "The Gladiator",
  description: "Participate in rated contests on Codeforces.",
  icon: Sword,
  color: "text-red-500",
  bg: "bg-red-500/10",
  tiers: [
    { name: "Bronze", requirement: 5, label: "5 Contests" },
    { name: "Silver", requirement: 10, label: "10 Contests" },
    { name: "Gold", requirement: 25, label: "25 Contests" },
    { name: "Platinum", requirement: 50, label: "50 Contests" }
  ],
  evaluate: (stats) => {
    // We do not have exactly "number of contests" in user model,
    // but we have `rating_updates` list or similar if we fetch it.
    // For now, let's tie this to total_solved on Codeforces or Codechef
    return stats?.codeforces?.total_solved || 0;
  }
}, // We will skip the Gladiator or map it to total solved
grand_ascent: {
  title: "The Grand Ascent",
  description: "Reach elite competitive programming ratings.",
  icon: Trophy,
  color: "text-amber-600",
  bg: "bg-amber-600/10",
  tiers: [
    { name: "Bronze", requirement: 1200, label: "1200 Rating" },
    { name: "Silver", requirement: 1400, label: "1400 Rating" },
    { name: "Gold", requirement: 1600, label: "1600 Rating" },
    { name: "Platinum", requirement: 1900, label: "1900 Rating" }
  ],
  evaluate: (stats) => Math.max(stats?.codeforces?.rating || 0, stats?.codechef?.rating || 0)
},
agnostic: {
  title: "Platform Agnostic",
  description: "Hold an active, rated rank on multiple platforms.",
  icon: Target,
  color: "text-purple-500",
  bg: "bg-purple-500/10",
  tiers: [
    { name: "Bronze", requirement: 1, label: "1 Platform" },
    { name: "Silver", requirement: 2, label: "2 Platforms" },
    { name: "Gold", requirement: 3, label: "All 3 Platforms (CF, CC, AC)" }
  ],
  evaluate: (stats) => {
    let count = 0;
    if (stats?.codeforces?.rating > 0) count++;
    if (stats?.codechef?.rating > 0) count++;
    if (stats?.atcoder?.rating > 0) count++;
    return count;
  }
},
```
Align “The Gladiator” description with the metric used.
Lines 50-66 describe rated contests, but evaluate uses total_solved. This will unlock achievements based on solved count rather than contest participation, which is misleading. Either change the description or compute the contest count.
🛠️ Possible text-only fix
```diff
- description: "Participate in rated contests on Codeforces.",
+ description: "Solve problems on Codeforces (proxy for contest participation).",
```

📝 Committable suggestion
```js
cf_gladiator: {
  title: "The Gladiator",
  description: "Solve problems on Codeforces (proxy for contest participation).",
  icon: Sword,
  color: "text-red-500",
  bg: "bg-red-500/10",
  tiers: [
    { name: "Bronze", requirement: 5, label: "5 Contests" },
    { name: "Silver", requirement: 10, label: "10 Contests" },
    { name: "Gold", requirement: 25, label: "25 Contests" },
    { name: "Platinum", requirement: 50, label: "50 Contests" }
  ],
  evaluate: (stats) => {
    // We do not have exactly "number of contests" in user model,
    // but we have `rating_updates` list or similar if we fetch it.
    // For now, let's tie this to total_solved on Codeforces or Codechef
    return stats?.codeforces?.total_solved || 0;
  }
}, // We will skip the Gladiator or map it to total solved
grand_ascent: {
  title: "The Grand Ascent",
  description: "Reach elite competitive programming ratings.",
  icon: Trophy,
  color: "text-amber-600",
  bg: "bg-amber-600/10",
  tiers: [
    { name: "Bronze", requirement: 1200, label: "1200 Rating" },
    { name: "Silver", requirement: 1400, label: "1400 Rating" },
    { name: "Gold", requirement: 1600, label: "1600 Rating" },
    { name: "Platinum", requirement: 1900, label: "1900 Rating" }
  ],
  evaluate: (stats) => Math.max(stats?.codeforces?.rating || 0, stats?.codechef?.rating || 0)
},
agnostic: {
  title: "Platform Agnostic",
  description: "Hold an active, rated rank on multiple platforms.",
  icon: Target,
  color: "text-purple-500",
  bg: "bg-purple-500/10",
  tiers: [
    { name: "Bronze", requirement: 1, label: "1 Platform" },
    { name: "Silver", requirement: 2, label: "2 Platforms" },
    { name: "Gold", requirement: 3, label: "All 3 Platforms (CF, CC, AC)" }
  ],
  evaluate: (stats) => {
    let count = 0;
    if (stats?.codeforces?.rating > 0) count++;
    if (stats?.codechef?.rating > 0) count++;
    if (stats?.atcoder?.rating > 0) count++;
    return count;
  }
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/src/utils/achievements.js` around lines 49 - 100, The cf_gladiator
achievement description promises "Participate in rated contests" but its
evaluate function (cf_gladiator.evaluate) currently returns
stats?.codeforces?.total_solved, so it unlocks by problems solved instead of
contests; fix by either updating the description to reference "total solved" or
(preferred) change cf_gladiator.evaluate to compute contest count from available
contest history fields such as stats?.codeforces?.rating_updates?.length or
stats?.codechef?.contests?.length (falling back to 0), and use that count to
compare against the tiers so the metric matches the description.
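If the contest-history route is taken, the evaluate function might look like this sketch (the rating_updates field name is an assumption from the code comment above; the current user model may not expose it yet):

```javascript
// Hypothetical evaluate() matching the "rated contests" description.
// Assumes stats.codeforces.rating_updates is an array with one entry per
// rated contest; falls back to 0 when the field is absent.
function evaluateGladiator(stats) {
  return stats?.codeforces?.rating_updates?.length ?? 0;
}
```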
Blog System Updates
Frontend UI (Blogs.jsx): Built a robust React component for the /blogs route that fetches, displays, and formats discussion posts. Includes a dedicated "Create Blog" form to POST new content cleanly.
Like/Dislike Interactions: Wired up the UI so users can actively click to Like or Dislike posts, which automatically sends PUT requests to the backend.
Fixed:
DiscussionPostManage PUT and DELETE backend methods. Resolved a foreign-key referencing issue, ensuring any authenticated user can successfully increment/decrement likes on a post.
Navigation Integration:
Navbar.jsx, App.jsx: Brought the Blog system out of hiding by adding a BookOpen icon to the Sidebar and establishing the protected /blogs React Router path.
Achievements Engine (Gamification)
Database Models (models.py): Designed and added the AchievementUnlock Django model to securely track which users unlocked which achievement (slug, tier, earned_at).
API Endpoints (urls.py, views.py, serializers.py): Implemented REST framework APIs (GET /achievements/ and POST /achievements/unlock/) enabling the frontend to securely calculate and log newly reached milestones.
Logic & Evaluation (achievements.js): Built an extensive achievement dictionary defining the thresholds, logic, and evaluation functions for 4 core categories matching our database metrics:
Open Source & GitHub (The Contributor, Star Magnet, The Architect)
Competitive Programming (The Gladiator, The Grand Ascent, Platform Agnostic)
LeetCode & DSA (The Grindset, Hardboiled, Medium Master)
Community & Streaks (Unstoppable Force)
Dynamic UI (AchievementCard.jsx, Achievements.jsx): Built a shiny ShadCN-powered card interface utilizing lucide-react icons. Features responsive progression bars and glowing badges that dynamically change styling based on the highest tier reached (Bronze, Silver, Gold, Platinum), alongside visually formatted "Earned on" dates.
Cross-Platform Streak Fetchers: Added PlatformStreakFetcher.jsx to dynamically fetch unified daily streaks from LeetCode (alfa-leetcode-api) and AtCoder (kenkoooo) without needing a Heatmap component, enabling the Unstoppable Force achievement. The Global Streak badge is always displayed in the NavMenu, even at 0.
Migration Fixer: Removed a volatile datetime.now() execution from codeforcesUser.last_activity's default field that effectively caused an infinite Django migration loop, allowing migrations to complete cleanly.