Add dashboard channel trend ranking chart #3395
MisonL wants to merge 4 commits into QuantumNous:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews, or the checkboxes below for quick actions:
Walkthrough
Adds channel-level hourly quota aggregation (model + tests), controller endpoints and date-range validation for channel quota dates, router routes, and frontend support: a new "Channel Trend Ranking" tab, data loader, chart spec, and client-side aggregation/visualization logic.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant Controller
    participant Model
    participant Database
    User->>Frontend: select "Channel Trend Ranking" tab
    Frontend->>Frontend: handleChartTabChange('5') / loadChannelTrendChart()
    Frontend->>Controller: GET /api/data/channels or /api/data/self/channels (start,end,username)
    Controller->>Controller: parseDashboardRange (validate start/end, enforce max span)
    Controller->>Model: GetAllChannelQuotaData / GetChannelQuotaDataByUserId / GetChannelQuotaDataByUsername
    Model->>Database: query logs WHERE type=Consume AND time BETWEEN start..end GROUP BY channel_id, hour_bucket
    Database-->>Model: aggregated rows
    Model->>Model: attachChannelNames (cache -> channels table)
    Model-->>Controller: channel quota data JSON
    Controller-->>Frontend: { success:true/false, message, data }
    Frontend->>Frontend: updateChannelTrendData(data) -> spec_channel_rank_trend
    Frontend-->>User: render stacked bar chart
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (3)
model/usedata.go (1)
180-191: Optional: query only missing channel IDs after cache hits.
Right now the DB lookup can re-fetch IDs already resolved from the memory cache. Restricting the `IN` list to unresolved IDs will reduce unnecessary DB load on hot dashboards.
♻️ Suggested optimization
```diff
- if len(channelNames) < len(ids) {
+ if len(channelNames) < len(ids) {
+     missingIDs := make([]int, 0, len(ids)-len(channelNames))
+     for _, id := range ids {
+         if _, ok := channelNames[id]; !ok {
+             missingIDs = append(missingIDs, id)
+         }
+     }
+     if len(missingIDs) == 0 {
+         goto assign_names
+     }
      type channelRow struct {
          Id   int    `gorm:"column:id"`
          Name string `gorm:"column:name"`
      }
      rows := make([]channelRow, 0, len(ids))
-     if err := DB.Table("channels").Select("id, name").Where("id IN ?", ids).Find(&rows).Error; err != nil {
+     if err := DB.Table("channels").Select("id, name").Where("id IN ?", missingIDs).Find(&rows).Error; err != nil {
          return err
      }
      for _, row := range rows {
          channelNames[row.Id] = row.Name
      }
  }
+assign_names:
  for _, item := range channelData {
```
🤖 Prompt for AI Agents
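The suggested diff reaches for `goto`; the same cache-miss filtering can also be isolated in a small helper, which keeps the control flow linear. This is only a sketch: the `ids` slice and `channelNames` map shapes are assumed from the diff, not verified against the repository.

```go
package main

import "fmt"

// missingIDs returns the channel IDs not yet resolved in the in-memory cache,
// so the follow-up SQL IN clause only touches unresolved channels.
func missingIDs(ids []int, channelNames map[int]string) []int {
	missing := make([]int, 0, len(ids))
	for _, id := range ids {
		if _, ok := channelNames[id]; !ok {
			missing = append(missing, id)
		}
	}
	return missing
}

func main() {
	cache := map[int]string{1: "openai", 3: "anthropic"}
	fmt.Println(missingIDs([]int{1, 2, 3, 4}, cache)) // [2 4]
}
```

The caller would then skip the DB round-trip entirely when the returned slice is empty, with no label or `goto` needed.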
Verify each finding against the current code and only fix it if needed. In `@model/usedata.go` around lines 180-191: The DB query fetches all IDs in `ids` even if some names are already present in `channelNames`; modify the logic in the block that defines `channelRow` and performs `DB.Table("channels").Select(...).Where("id IN ?", ids).Find(&rows)` to instead build a slice of only missing IDs (e.g., compute `missingIDs` by iterating `ids` and checking `channelNames[id]` presence) and use that slice in the WHERE IN query so only unresolved channel IDs are fetched, then map the results back into `channelNames`.
web/src/hooks/dashboard/useDashboardCharts.jsx (1)
498-523: Please confirm `channel_name` is unique before grouping on it.
Lines 499-501 use the display name as the series key, so duplicate channel names would be merged into one trend/rank entry. If names are not globally unique, group by `channel_id` and keep the name only as the label.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web/src/hooks/dashboard/useDashboardCharts.jsx` around lines 498-523: The code currently uses `channel_name` as the series key (in normalizedData → model_name), which can merge distinct channels that share a display name; change the grouping key to `channel_id` (use `channel_id` as the unique identifier when aggregating in aggregateDataByTimeAndModel and when building channelTotals via updateMapValue) and keep `channel_name` only as a label for display (e.g., attach the name as a separate property on normalizedData items). Ensure topChannels/topChannelSet and the filteredData logic use the `channel_id` key while any UI rendering uses the `channel_name` label.
model/usedata_channel_test.go (1)
10-112: Please add a non-SQLite check for this aggregation path.
The current test locks in one backend, but the new hour-bucketing logic is exactly the kind of query that tends to diverge across MySQL/PostgreSQL. A small dialect-matrix test or targeted verification would make this a lot safer before release.
As per coding guidelines "All database code MUST be fully compatible with SQLite, MySQL >= 5.7.8, and PostgreSQL >= 9.6 simultaneously."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@model/usedata_channel_test.go` around lines 10 - 112, The test currently only verifies aggregation against SQLite; add a non-SQLite branch to exercise the same aggregation path on other dialects by detecting the active dialect (e.g., inspect DB.Dialector.Name() or your project's common.DB/dialect helper) inside TestGetAllChannelQuotaData and, when the dialect is not "sqlite", run the same setup and assertions (or duplicate assertions) against GetAllChannelQuotaData and GetChannelQuotaDataByUserId to validate hour-bucketing on MySQL/Postgres; reference the existing test function TestGetAllChannelQuotaData and the functions GetAllChannelQuotaData/GetChannelQuotaDataByUserId when adding the conditional non-sqlite verification.
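One cheap way to anchor a dialect-matrix test is to pin the intended bucketing in portable Go and assert each backend's SQL output against it. A minimal sketch; the 3600-second bucket size is assumed from the hourly aggregation described in the walkthrough, and `hourBucket` is a hypothetical helper, not a function from the repository:

```go
package main

import "fmt"

// hourBucket truncates a unix timestamp to the start of its hour. A test
// running against SQLite, MySQL, and PostgreSQL can assert that the SQL
// GROUP BY expression yields exactly these buckets on every backend.
func hourBucket(ts int64) int64 {
	return ts - ts%3600
}

func main() {
	fmt.Println(hourBucket(7322)) // 02:02:02 UTC -> 7200 (start of the hour)
	fmt.Println(hourBucket(7200)) // already on the boundary -> 7200
}
```

With this as the reference, a dialect divergence shows up as a concrete bucket mismatch rather than a silently different aggregation.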
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@controller/usedata.go`:
- Around line 77-84: The code ignores strconv.ParseInt errors for
start_timestamp and end_timestamp causing invalid inputs (e.g. "foo") to default
to 0 and bypass the 30-day guard; update the handler in usedata.go to check the
errors returned from strconv.ParseInt when parsing c.Query("start_timestamp")
and c.Query("end_timestamp") (the startTimestamp and endTimestamp variables) and
immediately return a JSON error (same shape: success:false and message
describing malformed timestamp) with http.StatusBadRequest if either parse
fails, before applying the endTimestamp-startTimestamp > 2592000 check.
---
Nitpick comments:
In `@model/usedata_channel_test.go`:
- Around line 10-112: The test currently only verifies aggregation against
SQLite; add a non-SQLite branch to exercise the same aggregation path on other
dialects by detecting the active dialect (e.g., inspect DB.Dialector.Name() or
your project's common.DB/dialect helper) inside TestGetAllChannelQuotaData and,
when the dialect is not "sqlite", run the same setup and assertions (or
duplicate assertions) against GetAllChannelQuotaData and
GetChannelQuotaDataByUserId to validate hour-bucketing on MySQL/Postgres;
reference the existing test function TestGetAllChannelQuotaData and the
functions GetAllChannelQuotaData/GetChannelQuotaDataByUserId when adding the
conditional non-sqlite verification.
In `@model/usedata.go`:
- Around line 180-191: The DB query fetches all IDs in ids even if some names
are already present in channelNames; modify the logic in the block that defines
channelRow and performs DB.Table("channels").Select(...).Where("id IN ?",
ids).Find(&rows) to instead build a slice of only missing IDs (e.g., compute
missingIDs by iterating ids and checking channelNames[id] presence) and use that
slice in the WHERE IN query so only unresolved channel IDs are fetched and then
map results back into channelNames.
In `@web/src/hooks/dashboard/useDashboardCharts.jsx`:
- Around line 498-523: The code currently uses channel_name as the series key
(in normalizedData → model_name) which can merge distinct channels that share a
display name; change the grouping key to channel_id (use channel_id as the
unique identifier when aggregating in aggregateDataByTimeAndModel and when
building channelTotals via updateMapValue) and keep channel_name only as a label
for display (e.g., attach name as a separate property on normalizedData items).
Ensure topChannels/topChannelSet and the filteredData logic use the channel_id
key while any UI rendering uses the channel_name label.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 54d518a9-2330-4692-b881-7e68811c9315
📒 Files selected for processing (15)
controller/usedata.go, model/usedata.go, model/usedata_channel_test.go, router/api-router.go, web/src/components/dashboard/ChartsPanel.jsx, web/src/components/dashboard/index.jsx, web/src/hooks/dashboard/useDashboardCharts.jsx, web/src/hooks/dashboard/useDashboardData.js, web/src/i18n/locales/en.json, web/src/i18n/locales/fr.json, web/src/i18n/locales/ja.json, web/src/i18n/locales/ru.json, web/src/i18n/locales/vi.json, web/src/i18n/locales/zh-CN.json, web/src/i18n/locales/zh-TW.json
@coderabbitai review
✅ Actions performed: Review triggered.
♻️ Duplicate comments (1)
controller/usedata.go (1)
13-33: ⚠️ Potential issue | 🟡 Minor
Validate timestamp ordering in `parseDashboardRange`.
`start_timestamp` and `end_timestamp` are parse-validated, but reversed ranges are still accepted. Reject `end_timestamp < start_timestamp` early to keep range semantics consistent and avoid silent empty-result queries.
Suggested fix
```diff
 func parseDashboardRange(c *gin.Context) (int64, int64, bool) {
 	startTimestamp, err := strconv.ParseInt(c.Query("start_timestamp"), 10, 64)
 	if err != nil {
 		c.JSON(http.StatusBadRequest, gin.H{
 			"success": false,
 			"message": "start_timestamp 格式错误",
 		})
 		return 0, 0, false
 	}
 	endTimestamp, err := strconv.ParseInt(c.Query("end_timestamp"), 10, 64)
 	if err != nil {
 		c.JSON(http.StatusBadRequest, gin.H{
 			"success": false,
 			"message": "end_timestamp 格式错误",
 		})
 		return 0, 0, false
 	}
+	if endTimestamp < startTimestamp {
+		c.JSON(http.StatusBadRequest, gin.H{
+			"success": false,
+			"message": "时间范围不合法",
+		})
+		return 0, 0, false
+	}
 	return startTimestamp, endTimestamp, true
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/usedata.go` around lines 13 - 33, In parseDashboardRange, after parsing startTimestamp and endTimestamp, validate that endTimestamp >= startTimestamp and return a 400 response if not; update the function (parseDashboardRange) to check the ordering of the parsed startTimestamp and endTimestamp, call c.JSON with success:false and a clear message like "end_timestamp must be >= start_timestamp", and return 0, 0, false to abort processing when the range is reversed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@controller/usedata.go`:
- Around line 13-33: In parseDashboardRange, after parsing startTimestamp and
endTimestamp, validate that endTimestamp >= startTimestamp and return a 400
response if not; update the function (parseDashboardRange) to check the ordering
of the parsed startTimestamp and endTimestamp, call c.JSON with success:false
and a clear message like "end_timestamp must be >= start_timestamp", and return
0, 0, false to abort processing when the range is reversed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6e59c322-179f-4336-880c-fcdaaf9485c3
📒 Files selected for processing (4)
controller/usedata.go, model/usedata.go, model/usedata_channel_test.go, web/src/hooks/dashboard/useDashboardCharts.jsx
✅ Files skipped from review due to trivial changes (1)
- model/usedata_channel_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- model/usedata.go
- web/src/hooks/dashboard/useDashboardCharts.jsx
🧹 Nitpick comments (2)
controller/usedata.go (2)
91-97: Consider a safety rail on the admin channel aggregation endpoint.
Line 97 delegates to `model.GetAllChannelQuotaData`, which reads from `logs` rather than the pre-aggregated quota table. The self variant already enforces a 30-day cap at lines 116-121, so this admin path is now the only unbounded log-backed dashboard query. A generous server-side max span or a pre-aggregated path would reduce the blast radius of large full-history requests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/usedata.go` around lines 91 - 97, GetAllChannelQuotaDates currently calls model.GetAllChannelQuotaData which scans raw logs with no time-span limit; add a server-side safety rail by enforcing a maximum allowed range (e.g., 30 or 90 days) on startTimestamp/endTimestamp before calling GetAllChannelQuotaData, or route admin requests to the pre-aggregated quota table method instead; modify the GetAllChannelQuotaDates handler to validate and clamp the requested range and return a 400 or adjusted range if the requested span exceeds the configured MAX_QUOTA_SPAN, referencing the parseDashboardRange, GetAllChannelQuotaData, and any pre-aggregated quota retrieval function to locate where to implement the change.
63-76: Fold the 30-day self-view guard into shared validation too.
Lines 70-75 and 116-121 now duplicate the same limit and response body. Pulling that into `parseDashboardRange` (or a tiny follow-up validator) will keep the two self endpoints from drifting on future message/status/max-span changes.
Also applies to: 109-121
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/usedata.go` around lines 63 - 76, The 30-day span check is duplicated in GetUserQuotaDates and the other self endpoint; move that validation into parseDashboardRange (or create a small shared validateDashboardRange function) so both callers get the same behavior and response format; update parseDashboardRange to return the same JSON response (success:false/message:"时间跨度不能超过 1 个月") when the span > 2592000 and remove the duplicate check from GetUserQuotaDates and the other handler so only the shared validator enforces the limit.
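The shared guard the comment asks for can be sketched as a pure function, so ordering and span checks live in one place. This is a sketch under assumptions: the function name, the 2592000-second limit, and the idea that the real handlers translate the error into the `{success:false, message:...}` JSON shape all come from this review thread, not from verified repository code.

```go
package main

import (
	"errors"
	"fmt"
)

// maxDashboardSpan is the 30-day cap (2592000 seconds) quoted in the review.
const maxDashboardSpan int64 = 30 * 24 * 3600

// validateDashboardRange checks ordering and maximum span in one place so the
// two self endpoints cannot drift apart on future message or limit changes.
func validateDashboardRange(start, end, maxSpan int64) error {
	if end < start {
		return errors.New("end_timestamp must be >= start_timestamp")
	}
	if end-start > maxSpan {
		return errors.New("time span exceeds the allowed maximum")
	}
	return nil
}

func main() {
	fmt.Println(validateDashboardRange(0, maxDashboardSpan, maxDashboardSpan))   // exactly at the limit: <nil>
	fmt.Println(validateDashboardRange(0, maxDashboardSpan+1, maxDashboardSpan)) // over the limit: error
}
```

Each handler would then call this once after parsing and map a non-nil error to its 400 response, passing a larger `maxSpan` for the admin endpoint if a different cap is chosen there.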
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@controller/usedata.go`:
- Around line 91-97: GetAllChannelQuotaDates currently calls
model.GetAllChannelQuotaData which scans raw logs with no time-span limit; add a
server-side safety rail by enforcing a maximum allowed range (e.g., 30 or 90
days) on startTimestamp/endTimestamp before calling GetAllChannelQuotaData, or
route admin requests to the pre-aggregated quota table method instead; modify
the GetAllChannelQuotaDates handler to validate and clamp the requested range
and return a 400 or adjusted range if the requested span exceeds the configured
MAX_QUOTA_SPAN, referencing the parseDashboardRange, GetAllChannelQuotaData, and
any pre-aggregated quota retrieval function to locate where to implement the
change.
- Around line 63-76: The 30-day span check is duplicated in GetUserQuotaDates
and the other self endpoint; move that validation into parseDashboardRange (or
create a small shared validateDashboardRange function) so both callers get the
same behavior and response format; update parseDashboardRange to return the same
JSON response (success:false/message:"时间跨度不能超过 1 个月") when the span > 2592000
and remove the duplicate check from GetUserQuotaDates and the other handler so
only the shared validator enforces the limit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7a84b2e3-643e-4ddd-a381-5efb6c542fd7
📒 Files selected for processing (1)
controller/usedata.go
Addressed the remaining dashboard range review comments in cccf06e. Changes:
Validation:
@coderabbitai review
✅ Actions performed: Review triggered.
🧹 Nitpick comments (1)
controller/usedata_test.go (1)
37-88: Consider adding positive test cases for valid ranges.
The tests thoroughly validate rejection of oversized ranges. Consider adding complementary tests for:
- Ranges at exactly the limit (e.g., `end_timestamp=2592000` for 1-month endpoints)
- Valid ranges below the limit
This would confirm both sides of the boundary condition and guard against accidental off-by-one changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controller/usedata_test.go` around lines 37 - 88, Add positive boundary tests that assert valid ranges are accepted: create tests like TestGetUserQuotaDatesAcceptsOneMonthRange and TestGetUserChannelQuotaDatesAcceptsOneMonthRange that call newDashboardRangeTestContext with endpoints "/api/data/self?start_timestamp=0&end_timestamp=2592000" and "/api/data/self/channels?start_timestamp=0&end_timestamp=2592000", set ctx.Set("id", 1), invoke GetUserQuotaDates/GetUserChannelQuotaDates, decode via decodeDashboardRangeResponse and assert response.Success is true (and response.Message is empty or the expected success message). Also add TestGetAllChannelQuotaDatesAcceptsThreeMonthRange using "/api/data/channels?start_timestamp=0&end_timestamp=7776000", call GetAllChannelQuotaDates, decode response and assert success; optionally add one more below-limit case (e.g., end_timestamp=2591999) to each to verify non-edge valid ranges are accepted.
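The boundary matrix above can be expressed as a table-driven check over a pure guard predicate. This is only a sketch of the cases to cover: the real tests would go through the HTTP handlers via the repo's `newDashboardRangeTestContext`/`decodeDashboardRangeResponse` helpers named in the comment, which are assumed here and not reimplemented.

```go
package main

import "fmt"

// withinSpan mirrors the handlers' guard: a range is accepted when it is
// ordered and no longer than maxSpan seconds. It exists only to enumerate
// the boundary cases the review asks the HTTP-level tests to cover.
func withinSpan(start, end, maxSpan int64) bool {
	return end >= start && end-start <= maxSpan
}

func main() {
	const month, quarter int64 = 2592000, 7776000
	cases := []struct {
		name            string
		start, end, max int64
		want            bool
	}{
		{"exactly one month", 0, month, month, true},
		{"just below the limit", 0, month - 1, month, true},
		{"just above the limit", 0, month + 1, month, false},
		{"three-month admin boundary", 0, quarter, quarter, true},
	}
	for _, c := range cases {
		fmt.Printf("%s: ok=%v\n", c.name, withinSpan(c.start, c.end, c.max) == c.want)
	}
}
```

Covering both sides of each boundary like this guards the handlers against off-by-one regressions when the limits change.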
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@controller/usedata_test.go`:
- Around line 37-88: Add positive boundary tests that assert valid ranges are
accepted: create tests like TestGetUserQuotaDatesAcceptsOneMonthRange and
TestGetUserChannelQuotaDatesAcceptsOneMonthRange that call
newDashboardRangeTestContext with endpoints
"/api/data/self?start_timestamp=0&end_timestamp=2592000" and
"/api/data/self/channels?start_timestamp=0&end_timestamp=2592000", set
ctx.Set("id", 1), invoke GetUserQuotaDates/GetUserChannelQuotaDates, decode via
decodeDashboardRangeResponse and assert response.Success is true (and
response.Message is empty or the expected success message). Also add
TestGetAllChannelQuotaDatesAcceptsThreeMonthRange using
"/api/data/channels?start_timestamp=0&end_timestamp=7776000", call
GetAllChannelQuotaDates, decode response and assert success; optionally add one
more below-limit case (e.g., end_timestamp=2591999) to each to verify non-edge
valid ranges are accepted.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7658028b-ca02-43b6-ba12-10e86ea4d44b
📒 Files selected for processing (2)
controller/usedata.go, controller/usedata_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- controller/usedata.go
f8d9df6 to
bf58931
Compare
📝 Description
🚀 Type of change
🔗 Related Issue
✅ Pre-submission Checklist
📸 Proof of Work
```shell
go test ./controller ./model -count=1
PATH=/Volumes/Work/code/new-api/web/node_modules/.bin:$PATH eslint web/src/components/dashboard/ChartsPanel.jsx web/src/components/dashboard/index.jsx web/src/hooks/dashboard/useDashboardData.js web/src/hooks/dashboard/useDashboardCharts.jsx
PATH=/Volumes/Work/code/new-api/web/node_modules/.bin:$PATH vite build
```