Add dashboard channel trend ranking chart #3395

Open

MisonL wants to merge 4 commits into QuantumNous:main from MisonL:feat/dashboard-channel-trend-ranking

Conversation

@MisonL MisonL commented Mar 22, 2026

📝 Description

  • Add a trend ranking chart aggregated by channel to the dashboard.
  • Add the corresponding channel-dimension aggregation endpoints on the backend, with time-span limits scoped by user / admin role.
  • The frontend requests channel trend data only while the target chart tab is active, avoiding pointless requests.
  • Add time-range ordering and span validation to reject invalid query parameters.

🚀 Type of change

  • 🐛 Bug fix
  • ✨ New feature
  • ⚡ Performance optimization / refactor
  • 📝 Documentation

🔗 Related Issue

✅ Checklist

  • Human review: I wrote this description myself and stripped redundant raw AI output.
  • Deep understanding: I fully understand how these changes work and their potential impact.
  • Focused scope: This PR contains no code changes unrelated to the current task.
  • Local validation: Tests or manual verification have been run and passed locally.
  • Security & compliance: The code contains no sensitive credentials and follows the project's coding conventions.

📸 Proof of Work

  • go test ./controller ./model -count=1
  • PATH=/Volumes/Work/code/new-api/web/node_modules/.bin:$PATH eslint web/src/components/dashboard/ChartsPanel.jsx web/src/components/dashboard/index.jsx web/src/hooks/dashboard/useDashboardData.js web/src/hooks/dashboard/useDashboardCharts.jsx
  • PATH=/Volumes/Work/code/new-api/web/node_modules/.bin:$PATH vite build

coderabbitai bot commented Mar 22, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Adds channel-level hourly quota aggregation (model + tests), controller endpoints and date-range validation for channel quota dates, router routes, and frontend support: a new "Channel Trend Ranking" tab, data loader, chart spec, and client-side aggregation/visualization logic.

Changes

  • Controllers — controller/usedata.go, controller/usedata_test.go: Added parseDashboardRange helper and max-span constants; added GetAllChannelQuotaDates and GetUserChannelQuotaDates; updated GetAllQuotaDates/GetUserQuotaDates to use the shared validator and return 400 on invalid ranges; added controller tests for range enforcement.
  • Models & Tests — model/usedata.go, model/usedata_channel_test.go: New ChannelQuotaData type; DB-specific hour-bucket expression; aggregation query and helpers (getChannelQuotaData, attachChannelNames); exported APIs GetAllChannelQuotaData, GetChannelQuotaDataByUserId, GetChannelQuotaDataByUsername; added test TestGetAllChannelQuotaData.
  • Router — router/api-router.go: Registered new routes: GET /api/data/channels (admin) and GET /api/data/self/channels (user).
  • Frontend, Components — web/src/components/dashboard/ChartsPanel.jsx, web/src/components/dashboard/index.jsx: Added tab key '5' ("渠道趋势排行" / Channel Trend Ranking); renamed the tab handler prop to handleChartTabChange; wired tab switching, refresh, and search to load channel trend data and pass spec_channel_rank_trend into ChartsPanel.
  • Frontend, Hooks / Data — web/src/hooks/dashboard/useDashboardCharts.jsx, web/src/hooks/dashboard/useDashboardData.js: New chart spec spec_channel_rank_trend; updateChannelTrendData for normalizing/aggregating top-N channels with color generation; added loadChannelQuotaData to call the new endpoints and return sorted channel data (inserts a placeholder when empty).
  • i18n — web/src/i18n/locales/{en,fr,ja,ru,vi,zh-CN,zh-TW}.json: Added translation key "渠道趋势排行" (Channel Trend Ranking) across multiple locales.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Frontend
    participant Controller
    participant Model
    participant Database

    User->>Frontend: select "Channel Trend Ranking" tab
    Frontend->>Frontend: handleChartTabChange('5') / loadChannelTrendChart()
    Frontend->>Controller: GET /api/data/channels or /api/data/self/channels (start,end,username)
    Controller->>Controller: parseDashboardRange (validate start/end, enforce max span)
    Controller->>Model: GetAllChannelQuotaData / GetChannelQuotaDataByUserId / GetChannelQuotaDataByUsername
    Model->>Database: query logs WHERE type=Consume AND time BETWEEN start..end GROUP BY channel_id, hour_bucket
    Database-->>Model: aggregated rows
    Model->>Model: attachChannelNames (cache -> channels table)
    Model-->>Controller: channel quota data JSON
    Controller-->>Frontend: { success:true/false, message, data }
    Frontend->>Frontend: updateChannelTrendData(data) -> spec_channel_rank_trend
    Frontend-->>User: render stacked bar chart

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through logs at dawn's first light,
Counting channels, hour by hour, sprite by sprite.
Colors bloom for the top ten's run,
Bars stack and sing beneath the sun.
Dashboards hum — the rabbit's work is done! 🎨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 47.62%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: The title 'Add dashboard channel trend ranking chart' directly and clearly summarizes the main change: introducing a new dashboard feature for displaying channel trend ranking visualizations.
  • Description Check — ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



MisonL commented Mar 22, 2026

@coderabbitai review


coderabbitai bot commented Mar 22, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
model/usedata.go (1)

180-191: Optional: query only missing channel IDs after cache hits.

Right now the DB lookup can re-fetch IDs already resolved from the in-memory cache. Restricting the IN list to unresolved IDs reduces unnecessary DB load on hot dashboards.

♻️ Suggested optimization
-	if len(channelNames) < len(ids) {
+	if len(channelNames) < len(ids) {
+		missingIDs := make([]int, 0, len(ids)-len(channelNames))
+		for _, id := range ids {
+			if _, ok := channelNames[id]; !ok {
+				missingIDs = append(missingIDs, id)
+			}
+		}
+		if len(missingIDs) == 0 {
+			goto assign_names
+		}
 		type channelRow struct {
 			Id   int    `gorm:"column:id"`
 			Name string `gorm:"column:name"`
 		}
 		rows := make([]channelRow, 0, len(ids))
-		if err := DB.Table("channels").Select("id, name").Where("id IN ?", ids).Find(&rows).Error; err != nil {
+		if err := DB.Table("channels").Select("id, name").Where("id IN ?", missingIDs).Find(&rows).Error; err != nil {
 			return err
 		}
 		for _, row := range rows {
 			channelNames[row.Id] = row.Name
 		}
 	}
 
+assign_names:
 	for _, item := range channelData {
web/src/hooks/dashboard/useDashboardCharts.jsx (1)

498-523: Please confirm channel_name is unique before grouping on it.

Lines 499-501 use the display name as the series key, so duplicate channel names would be merged into one trend/rank entry. If names are not globally unique, group by channel_id and keep the name only as the label.

model/usedata_channel_test.go (1)

10-112: Please add a non-SQLite check for this aggregation path.

This locks in one backend, but the new hour-bucketing logic is exactly the kind of query that tends to diverge across MySQL/PostgreSQL. A small dialect-matrix test or targeted verification would make this a lot safer before release.

As per coding guidelines "All database code MUST be fully compatible with SQLite, MySQL >= 5.7.8, and PostgreSQL >= 9.6 simultaneously."


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 54d518a9-2330-4692-b881-7e68811c9315

📥 Commits

Reviewing files that changed from the base of the PR and between deff59a and 072bf1b.

📒 Files selected for processing (15)
  • controller/usedata.go
  • model/usedata.go
  • model/usedata_channel_test.go
  • router/api-router.go
  • web/src/components/dashboard/ChartsPanel.jsx
  • web/src/components/dashboard/index.jsx
  • web/src/hooks/dashboard/useDashboardCharts.jsx
  • web/src/hooks/dashboard/useDashboardData.js
  • web/src/i18n/locales/en.json
  • web/src/i18n/locales/fr.json
  • web/src/i18n/locales/ja.json
  • web/src/i18n/locales/ru.json
  • web/src/i18n/locales/vi.json
  • web/src/i18n/locales/zh-CN.json
  • web/src/i18n/locales/zh-TW.json


MisonL commented Mar 23, 2026

@coderabbitai review


coderabbitai bot commented Mar 23, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment

♻️ Duplicate comments (1)
controller/usedata.go (1)

13-33: ⚠️ Potential issue | 🟡 Minor

Validate timestamp ordering in parseDashboardRange.

start_timestamp and end_timestamp are parse-validated, but reversed ranges are still accepted. Reject end_timestamp < start_timestamp early to keep range semantics consistent and avoid silent empty-result queries.

Suggested fix
 func parseDashboardRange(c *gin.Context) (int64, int64, bool) {
 	startTimestamp, err := strconv.ParseInt(c.Query("start_timestamp"), 10, 64)
 	if err != nil {
 		c.JSON(http.StatusBadRequest, gin.H{
 			"success": false,
 			"message": "start_timestamp 格式错误",
 		})
 		return 0, 0, false
 	}
 
 	endTimestamp, err := strconv.ParseInt(c.Query("end_timestamp"), 10, 64)
 	if err != nil {
 		c.JSON(http.StatusBadRequest, gin.H{
 			"success": false,
 			"message": "end_timestamp 格式错误",
 		})
 		return 0, 0, false
 	}
+	if endTimestamp < startTimestamp {
+		c.JSON(http.StatusBadRequest, gin.H{
+			"success": false,
+			"message": "时间范围不合法",
+		})
+		return 0, 0, false
+	}
 
 	return startTimestamp, endTimestamp, true
 }

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6e59c322-179f-4336-880c-fcdaaf9485c3

📥 Commits

Reviewing files that changed from the base of the PR and between 072bf1b and f362ee5.

📒 Files selected for processing (4)
  • controller/usedata.go
  • model/usedata.go
  • model/usedata_channel_test.go
  • web/src/hooks/dashboard/useDashboardCharts.jsx
✅ Files skipped from review due to trivial changes (1)
  • model/usedata_channel_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • model/usedata.go
  • web/src/hooks/dashboard/useDashboardCharts.jsx

coderabbitai bot left a comment

🧹 Nitpick comments (2)
controller/usedata.go (2)

91-97: Consider a safety rail on the admin channel aggregation endpoint.

Line 97 delegates to model.GetAllChannelQuotaData, which reads from logs rather than the pre-aggregated quota table. The self variant already needs a 30-day cap at Lines 116-121, so this admin path is now the only unbounded log-backed dashboard query. A generous server-side max span or a pre-aggregated path would reduce the blast radius of large full-history requests.


63-76: Fold the 30-day self-view guard into shared validation too.

Lines 70-75 and 116-121 now duplicate the same limit and response body. Pulling that into parseDashboardRange (or a tiny follow-up validator) will keep the two self endpoints from drifting on future message/status/max-span changes.

Also applies to: 109-121


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7a84b2e3-643e-4ddd-a381-5efb6c542fd7

📥 Commits

Reviewing files that changed from the base of the PR and between f362ee5 and 5b7c17b.

📒 Files selected for processing (1)
  • controller/usedata.go


MisonL commented Mar 24, 2026

Addressed the remaining dashboard range review comments in cccf06e.

Changes:

  • moved dashboard range span enforcement into shared parsing logic to keep the self model/channel endpoints aligned
  • added a 3-month server-side safety rail for the admin channel trend endpoint to avoid unbounded raw-log scans
  • added controller tests for the self 1-month limit and the admin channel 3-month limit

Validation:

  • go test ./controller ./model
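The shared range enforcement described above can be sketched as a plain validator function. The name validateDashboardRange and the exact error strings are assumptions for illustration; the limits (1 month for self endpoints, 3 months for the admin channel endpoint) come from this thread:

```go
package main

import (
	"errors"
	"fmt"
)

const (
	maxSelfSpan  int64 = 30 * 24 * 3600 // 2592000s: 1-month cap for self endpoints
	maxAdminSpan int64 = 90 * 24 * 3600 // 7776000s: 3-month cap for the admin channel endpoint
)

// validateDashboardRange centralizes the two checks every dashboard range
// endpoint needs: timestamp ordering and a per-endpoint maximum span.
func validateDashboardRange(start, end, maxSpan int64) error {
	if end < start {
		return errors.New("end_timestamp must be >= start_timestamp")
	}
	if end-start > maxSpan {
		return errors.New("time span exceeds the allowed maximum")
	}
	return nil
}

func main() {
	fmt.Println(validateDashboardRange(0, maxSelfSpan, maxSelfSpan))   // exactly at the limit: <nil>
	fmt.Println(validateDashboardRange(0, maxSelfSpan+1, maxSelfSpan)) // over the limit: error
	fmt.Println(validateDashboardRange(100, 0, maxAdminSpan))          // reversed range: error
}
```

A single validator like this keeps the self and admin handlers from drifting apart on message text, status code, or the span limit itself.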


MisonL commented Mar 24, 2026

@coderabbitai review


coderabbitai bot commented Mar 24, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment


🧹 Nitpick comments (1)
controller/usedata_test.go (1)

37-88: Consider adding positive test cases for valid ranges.

The tests thoroughly validate rejection of oversized ranges. Consider adding complementary tests for:

  • Ranges at exactly the limit (e.g., end_timestamp=2592000 for 1-month endpoints)
  • Valid ranges below the limit

This would confirm both sides of the boundary condition and guard against accidental off-by-one changes.
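A table-driven boundary check along these lines could look like the sketch below. The 2592000-second (1-month) and 7776000-second (3-month) limits come from this thread; the withinSpan helper is hypothetical, standing in for the controller's shared validator:

```go
package main

import "fmt"

// withinSpan reports whether the range [start, end] is ordered and fits
// inside maxSpan seconds. Hypothetical stand-in for the real validator.
func withinSpan(start, end, maxSpan int64) bool {
	return end >= start && end-start <= maxSpan
}

func main() {
	cases := []struct {
		name            string
		start, end, max int64
		want            bool
	}{
		{"self range at exactly one month", 0, 2592000, 2592000, true},
		{"self range one second over", 0, 2592001, 2592000, false},
		{"admin range at exactly three months", 0, 7776000, 7776000, true},
		{"admin range one second over", 0, 7776001, 7776000, false},
	}
	for _, c := range cases {
		got := withinSpan(c.start, c.end, c.max)
		fmt.Printf("%s: got=%v want=%v\n", c.name, got, c.want)
	}
}
```

Testing both sides of each boundary is what catches an accidental switch between > and >= in a later refactor.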


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7658028b-ca02-43b6-ba12-10e86ea4d44b

📥 Commits

Reviewing files that changed from the base of the PR and between 5b7c17b and cccf06e.

📒 Files selected for processing (2)
  • controller/usedata.go
  • controller/usedata_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • controller/usedata.go

MisonL force-pushed the feat/dashboard-channel-trend-ranking branch from f8d9df6 to bf58931 on March 26, 2026 at 09:00.