Add growth rate and VLF count columns (#567) #625
Conversation
Update Lite/Database/DuckDbInitializer.cs Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…udioESSilva/PerformanceMonitor into feature/GrowthRates-VLFCount
Resolves XAML conflict by taking dev's FinOpsContent.xaml and adding the two new Database Sizes columns (Auto Growth, VLF Count) from ClaudioESSilva's feature/GrowthRates-VLFCount branch. All other files auto-merged cleanly.

Changes from PR #608:
- Dashboard + Lite: new GrowthDisplay and VlfCountDisplay columns on Database Sizes grid
- SQL collector: collect file growth settings and VLF counts
- Upgrade script: 2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql
- Dashboard: DataGridFilterManager for Database Sizes column filtering
- DuckDB schema: new growth/VLF columns in database_size_stats

Co-Authored-By: Cláudio Silva <ClaudioESSilva@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
📝 Walkthrough

Adds percent-growth and VLF columns (is_percent_growth, growth_pct, vlf_count) across collection SQL, upgrade scripts, local DuckDB schema/migration, remote/local collection services, and dashboard/lite UI columns, and implements per-column DataGrid filtering with a new DataGridFilterManager and filter popups.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 8
🧹 Nitpick comments (2)
upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql (1)
14-23: Add SET NOCOUNT ON for migration script consistency.

Please add SET NOCOUNT ON; near the session settings so rowcount messages don't pollute upgrade logs. As per coding guidelines, "**/*.sql: ... proper use of SET NOCOUNT ON ...".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql` around lines 14 - 23, Add SET NOCOUNT ON; to the migration script session-setting block alongside the other SET ... lines (e.g., with SET ANSI_NULLS, SET ANSI_PADDING, SET ANSI_WARNINGS, etc.) so rowcount messages are suppressed during the upgrade; place the statement near the existing SET statements at the top of 03_add_growth_vlf_columns.sql to follow the coding guideline requiring proper use of SET NOCOUNT ON in *.sql migrations.

Dashboard/Services/DataGridFilterManager.cs (1)

22-25: Expose a read-only filter view.
22-25: Expose a read-only filter view.Callers can mutate
Filtersdirectly and bypassSetFilter(), which leaves the grid contents and header glyphs stale. This interface only needs lookup semantics.♻️ Suggested refactor
public interface IDataGridFilterManager { - Dictionary<string, ColumnFilterState> Filters { get; } + IReadOnlyDictionary<string, ColumnFilterState> Filters { get; } void SetFilter(ColumnFilterState filterState); void UpdateFilterButtonStyles(); } ... - public Dictionary<string, ColumnFilterState> Filters => _filters; + public IReadOnlyDictionary<string, ColumnFilterState> Filters => _filters;Also applies to: 44-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dashboard/Services/DataGridFilterManager.cs` around lines 22 - 25, The Filters property on IDataGridFilterManager exposes a mutable Dictionary which allows callers to bypass SetFilter and leave UI state inconsistent; change the signature of IDataGridFilterManager.Filters from Dictionary<string, ColumnFilterState> to IReadOnlyDictionary<string, ColumnFilterState> (or IReadOnlyCollection/ IReadOnlyList if more appropriate) so callers only get lookup semantics, and update the implementing class(es) to return a read-only wrapper or an immutable snapshot while keeping SetFilter(ColumnFilterState) as the only mutating API that updates grid contents and header glyphs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Dashboard/Controls/FinOpsContent.xaml`:
- Around line 1063-1073: Multiple column filter buttons in the
DatabaseSizesDataGrid are wired to two different click handlers
(DatabaseSizesFilter_Click and FinOpsFilter_Click) causing split filter state;
pick a single handler (preferably DatabaseSizesFilter_Click) and update all
column Button Click attributes in FinOpsContent.xaml to use that handler, then
consolidate any filter logic by moving or merging code from FinOpsFilter_Click
into DatabaseSizesFilter_Click (ensure the chosen handler has the correct
RoutedEventHandler signature and preserves any existing behavior), and remove
the now-unused handler to keep filter state unified.
In `@Dashboard/Controls/FinOpsContent.xaml.cs`:
- Around line 79-80: Initialize _dbSizesFilterMgr before any code path that can
trigger a refresh (e.g., before ServerSelector.SelectedIndex = 0) to avoid null
dereference in the refresh path; specifically, create the
DataGridFilterManager<FinOpsDatabaseSizeStats> instance (assign to
_dbSizesFilterMgr) in the constructor or as a field initializer so that any call
to the refresh/load routine (which dereferences _dbSizesFilterMgr!) from
ServerSelector.SelectedIndex or OnLoaded is safe, or alternatively guard the
refresh code that uses _dbSizesFilterMgr (the calls around lines that
dereference _dbSizesFilterMgr!) with a null-check until initialization is
guaranteed.
In `@Dashboard/Services/DatabaseService.FinOps.cs`:
- Around line 2556-2561: The GrowthDisplay getter treats percent-growth files
with a configured growth of 0 as "0%" but should show "Disabled" to match
fixed-growth behavior; update the IsPercentGrowth true branch in GrowthDisplay
to check for GrowthPct.HasValue && GrowthPct == 0 and return "Disabled" in that
case, otherwise format the percentage (e.g., $"{GrowthPct}%"); keep the null
case returning "-" and the false branch unchanged (AutoGrowthMb == 0 =>
"Disabled" else formatted MB).
In `@install/52_collect_database_size_stats.sql`:
- Around line 167-172: The vlf_count expression in the per-file statistics block
computes VLFs for the whole database using sys.dm_db_log_info(DB_ID()) and does
not filter by file_id, causing duplicate counts for each log file; update the
CASE that assigns vlf_count (the block referencing df.type = 1 and
sys.dm_db_log_info(DB_ID())) to count only rows where file_id = df.file_id (or
the file_id column from the outer query) — apply the same change to both
occurrences of this vlf_count logic in the script so each log file row stores
its own VLF count.
In `@Lite/Database/DuckDbInitializer.cs`:
- Around line 587-601: The migration block in DuckDbInitializer.cs that runs
ALTER TABLE on database_size_stats should first verify the table exists to avoid
aborting initialization; use a lightweight existence check (e.g. query
information_schema.tables or run a scalar like SELECT COUNT(*) ...) via the same
connection and ExecuteScalarAsync before calling ExecuteNonQueryAsync for the
three ALTER statements, and only run them when the table exists; keep logging
with _logger but do not rethrow when the table is absent so subsequent
table-creation logic can proceed.
In `@Lite/Services/LocalDataService.FinOps.cs`:
- Around line 2255-2260: GrowthDisplay currently treats a null AutoGrowthMb as
"Disabled" when IsPercentGrowth == false; change the IsPercentGrowth == false
branch in GrowthDisplay so it distinguishes null (unknown) from zero (disabled):
if AutoGrowthMb is null return "-" (unknown), if AutoGrowthMb == 0 return
"Disabled", otherwise return $"{AutoGrowthMb:N0} MB"; update the expression
referencing GrowthDisplay, IsPercentGrowth and AutoGrowthMb accordingly.
In `@Lite/Services/RemoteCollectorService.DatabaseSize.cs`:
- Around line 120-125: The sys.dm_db_log_info call used to populate vlf_count
(in the vlf_count CASE when mf.type = 1) must be guarded per-database to avoid
aborting collection when the caller lacks VIEW DATABASE PERFORMANCE STATE or
VIEW SERVER STATE; replace the direct unguarded call with a permission check
(e.g., HAS_PERMS_BY_NAME(DB_NAME(mf.database_id),'DATABASE','VIEW DATABASE
PERFORMANCE STATE') OR server-level check) and only call sys.dm_db_log_info when
permitted (otherwise return NULL or -1), and change the VLF counting logic to
compute per-file VLFs by filtering/grouping on file_id (use WHERE file_id =
mf.file_id or COUNT(*) FILTER/CASE by file_id) so each log file row gets its own
VLF count; apply the same guarded-permission and per-file counting fix to the
Azure branch and the other affected block around lines 179-184 where
sys.dm_db_log_info is used.
In `@upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql`:
- Around line 31-83: Wrap the ALTER TABLE statements that add is_percent_growth,
growth_pct, and vlf_count on collect.database_size_stats in a TRY/CATCH block
and run them inside an explicit transaction: begin a transaction before the
first ALTER, perform each EXISTS check and corresponding ALTER TABLE inside the
TRY, and on error catch roll back the transaction and re-raise/log the error; on
success commit the transaction. Ensure the logic still prints the existing
status messages (e.g., "is_percent_growth: added...") only after successful
ALTERs and that the TRY/CATCH references the same object checks and column names
used in the current script (collect.database_size_stats, is_percent_growth,
growth_pct, vlf_count).
---
Nitpick comments:
In `@Dashboard/Services/DataGridFilterManager.cs`:
- Around line 22-25: The Filters property on IDataGridFilterManager exposes a
mutable Dictionary which allows callers to bypass SetFilter and leave UI state
inconsistent; change the signature of IDataGridFilterManager.Filters from
Dictionary<string, ColumnFilterState> to IReadOnlyDictionary<string,
ColumnFilterState> (or IReadOnlyCollection/ IReadOnlyList if more appropriate)
so callers only get lookup semantics, and update the implementing class(es) to
return a read-only wrapper or an immutable snapshot while keeping
SetFilter(ColumnFilterState) as the only mutating API that updates grid contents
and header glyphs.
In `@upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql`:
- Around line 14-23: Add SET NOCOUNT ON; to the migration script session-setting
block alongside the other SET ... lines (e.g., with SET ANSI_NULLS, SET
ANSI_PADDING, SET ANSI_WARNINGS, etc.) so rowcount messages are suppressed
during the upgrade; place the statement near the existing SET statements at the
top of 03_add_growth_vlf_columns.sql to follow the coding guideline requiring
proper use of SET NOCOUNT ON in *.sql migrations.
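A minimal sketch of the placement this guideline asks for, assuming the script opens with a typical T-SQL session-settings block (the surrounding SET lines here are illustrative, not copied from the actual script):

```sql
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET NOCOUNT ON; /* suppress "(N rows affected)" chatter in upgrade logs */
GO
```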
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d6c888b7-c5be-49d2-a046-da581a16585b
📒 Files selected for processing (14)
- Dashboard/Controls/FinOpsContent.xaml
- Dashboard/Controls/FinOpsContent.xaml.cs
- Dashboard/Services/DataGridFilterManager.cs
- Dashboard/Services/DatabaseService.FinOps.cs
- Lite/Controls/FinOpsTab.xaml
- Lite/Database/DuckDbInitializer.cs
- Lite/Database/Schema.cs
- Lite/Services/LocalDataService.FinOps.cs
- Lite/Services/RemoteCollectorService.DatabaseSize.cs
- install/02_create_tables.sql
- install/06_ensure_collection_table.sql
- install/52_collect_database_size_stats.sql
- upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql
- upgrades/2.2.0-to-2.3.0/upgrade.txt
```xml
<Button Style="{DynamicResource ColumnFilterButtonStyle}" Tag="GrowthDisplay" Click="DatabaseSizesFilter_Click" Margin="0,0,4,0"/>
<TextBlock Text="Auto Growth" FontWeight="Bold" VerticalAlignment="Center"/>
</StackPanel>
</DataGridTextColumn.Header>
</DataGridTextColumn>
<DataGridTextColumn Binding="{Binding VlfCountDisplay}" Width="80" SortMemberPath="VlfCountSort">
    <DataGridTextColumn.Header>
        <StackPanel Orientation="Horizontal">
            <Button Style="{DynamicResource ColumnFilterButtonStyle}" Tag="VlfCountDisplay" Click="DatabaseSizesFilter_Click" Margin="0,0,4,0"/>
            <TextBlock Text="VLF Count" FontWeight="Bold" VerticalAlignment="Center"/>
        </StackPanel>
```
Unify Database Sizes column filters under one handler.
Line 1063 and Line 1071 use DatabaseSizesFilter_Click, while other DatabaseSizesDataGrid columns still use FinOpsFilter_Click. This splits state across two managers, so filters in the same grid won’t compose reliably and refresh can drop one side’s filter state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dashboard/Controls/FinOpsContent.xaml` around lines 1063 - 1073, Multiple
column filter buttons in the DatabaseSizesDataGrid are wired to two different
click handlers (DatabaseSizesFilter_Click and FinOpsFilter_Click) causing split
filter state; pick a single handler (preferably DatabaseSizesFilter_Click) and
update all column Button Click attributes in FinOpsContent.xaml to use that
handler, then consolidate any filter logic by moving or merging code from
FinOpsFilter_Click into DatabaseSizesFilter_Click (ensure the chosen handler has
the correct RoutedEventHandler signature and preserves any existing behavior),
and remove the now-unused handler to keep filter state unified.
```csharp
    _dbSizesFilterMgr = new DataGridFilterManager<FinOpsDatabaseSizeStats>(DatabaseSizesDataGrid);
}
```
Initialize _dbSizesFilterMgr before first refresh path can use it.
ServerSelector.SelectedIndex = 0 (Line 93) can trigger data loading before OnLoaded runs. Then Line 634 dereferences _dbSizesFilterMgr!, which can be null on first load.
Proposed fix

```diff
 public FinOpsContent()
 {
     InitializeComponent();
+    _dbSizesFilterMgr = new DataGridFilterManager<FinOpsDatabaseSizeStats>(DatabaseSizesDataGrid);
     Loaded += OnLoaded;
 }
@@
-    _dbSizesFilterMgr = new DataGridFilterManager<FinOpsDatabaseSizeStats>(DatabaseSizesDataGrid);
+    _dbSizesFilterMgr ??= new DataGridFilterManager<FinOpsDatabaseSizeStats>(DatabaseSizesDataGrid);
@@
-    _dbSizesFilterMgr!.UpdateData(data);
+    _dbSizesFilterMgr ??= new DataGridFilterManager<FinOpsDatabaseSizeStats>(DatabaseSizesDataGrid);
+    _dbSizesFilterMgr.UpdateData(data);
```

As per coding guidelines, "Dashboard/**/*.cs: ... Watch for: null reference risks ...".
Also applies to: 634-635
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dashboard/Controls/FinOpsContent.xaml.cs` around lines 79 - 80, Initialize
_dbSizesFilterMgr before any code path that can trigger a refresh (e.g., before
ServerSelector.SelectedIndex = 0) to avoid null dereference in the refresh path;
specifically, create the DataGridFilterManager<FinOpsDatabaseSizeStats> instance
(assign to _dbSizesFilterMgr) in the constructor or as a field initializer so
that any call to the refresh/load routine (which dereferences
_dbSizesFilterMgr!) from ServerSelector.SelectedIndex or OnLoaded is safe, or
alternatively guard the refresh code that uses _dbSizesFilterMgr (the calls
around lines that dereference _dbSizesFilterMgr!) with a null-check until
initialization is guaranteed.
```csharp
public string GrowthDisplay => IsPercentGrowth switch
{
    null => "-",
    true => GrowthPct.HasValue ? $"{GrowthPct}%" : "-",
    false => AutoGrowthMb == 0 ? "Disabled" : $"{AutoGrowthMb:N0} MB"
};
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
# Locate the file
find . -type f -name "DatabaseService.FinOps.cs" | head -5

# Get file size first
wc -l ./Dashboard/Services/DatabaseService.FinOps.cs

# Read the GrowthDisplay property and surrounding context
sed -n '2540,2580p' ./Dashboard/Services/DatabaseService.FinOps.cs

# Search for the model class definition and property declarations
rg "class.*Growth|GrowthPct|IsPercentGrowth|AutoGrowthMb" ./Dashboard/Services/DatabaseService.FinOps.cs -B 2 -A 2

# Find the SQL query that fetches growth and is_percent_growth
rg "growth|is_percent_growth" ./Dashboard/Services/DatabaseService.FinOps.cs -B 3 -A 3 | head -100

# Search for the query that selects from collect.database_size_stats or the underlying source
rg "FROM collect.database_size_stats|sys\.database_files|is_percent_growth" ./Dashboard/Services/DatabaseService.FinOps.cs -B 5 -A 20 | head -150

# Look for SQL files in the repo that might contain the query definition
find . -type f \( -name "*.sql" -o -name "*.SQL" \) | head -20

# Search for any stored procedures or SQL files in Scripts or sql directories
find . -type d \( -name Scripts -o -name sql -o -name SQL \) 2>/dev/null | head -10

# Find the file that collects database size stats
grep -l "database_size_stats\|is_percent_growth\|growth_pct" ./install/*.sql

# Check the delta framework file which likely contains collection logic
grep -l "growth" ./install/*.sql

# Read the database size stats collection script
cat ./install/52_collect_database_size_stats.sql | head -300

# Check if there's more content and look for the rest of the growth_pct logic
wc -l ./install/52_collect_database_size_stats.sql
sed -n '200,250p' ./install/52_collect_database_size_stats.sql

# Verify that GrowthPct can be 0 by checking the full property definitions one more time
sed -n '2550,2570p' ./Dashboard/Services/DatabaseService.FinOps.cs
```

Repository: erikdarlingdata/PerformanceMonitor
Render disabled percent-growth files as Disabled.

SQL Server allows growth = 0 with is_percent_growth = 1 for percent-growth files with no growth configured. The current display logic shows this as 0%, creating an inconsistency with the fixed-growth case, where AutoGrowthMb = 0 displays as Disabled.

🔧 Suggested fix

```diff
 public string GrowthDisplay => IsPercentGrowth switch
 {
     null => "-",
-    true => GrowthPct.HasValue ? $"{GrowthPct}%" : "-",
+    true => !GrowthPct.HasValue ? "-"
+        : GrowthPct.Value == 0 ? "Disabled"
+        : $"{GrowthPct.Value}%",
     false => AutoGrowthMb == 0 ? "Disabled" : $"{AutoGrowthMb:N0} MB"
 };
```

🤖 Prompt for AI Agents
};🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dashboard/Services/DatabaseService.FinOps.cs` around lines 2556 - 2561, The
GrowthDisplay getter treats percent-growth files with a configured growth of 0
as "0%" but should show "Disabled" to match fixed-growth behavior; update the
IsPercentGrowth true branch in GrowthDisplay to check for GrowthPct.HasValue &&
GrowthPct == 0 and return "Disabled" in that case, otherwise format the
percentage (e.g., $"{GrowthPct}%"); keep the null case returning "-" and the
false branch unchanged (AutoGrowthMb == 0 => "Disabled" else formatted MB).
```csharp
if (fromVersion < 22)
{
    _logger?.LogInformation("Running migration to v22: adding growth rate and VLF count columns to database_size_stats");
    try
    {
        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS is_percent_growth BOOLEAN");
        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS growth_pct INTEGER");
        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS vlf_count INTEGER");
    }
    catch (Exception ex)
    {
        _logger?.LogError(ex, "Migration to v22 failed");
        throw;
    }
}
```
Guard v22 ALTERs when database_size_stats doesn’t exist yet.
On upgrades from older schemas, Line 592 can run before database_size_stats exists. Because Line 599 rethrows, initialization aborts instead of continuing to table creation.
💡 Suggested fix

```diff
 if (fromVersion < 22)
 {
     _logger?.LogInformation("Running migration to v22: adding growth rate and VLF count columns to database_size_stats");
     try
     {
-        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS is_percent_growth BOOLEAN");
-        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS growth_pct INTEGER");
-        await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS vlf_count INTEGER");
+        using var existsCmd = connection.CreateCommand();
+        existsCmd.CommandText = @"
+SELECT COUNT(*)
+FROM information_schema.tables
+WHERE table_name = 'database_size_stats'";
+        var tableExists = Convert.ToInt32(await existsCmd.ExecuteScalarAsync()) > 0;
+
+        if (tableExists)
+        {
+            await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS is_percent_growth BOOLEAN");
+            await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS growth_pct INTEGER");
+            await ExecuteNonQueryAsync(connection, "ALTER TABLE database_size_stats ADD COLUMN IF NOT EXISTS vlf_count INTEGER");
+        }
+        else
+        {
+            _logger?.LogDebug("Skipping v22 ALTERs because database_size_stats does not exist yet; it will be created during table initialization.");
+        }
     }
     catch (Exception ex)
     {
         _logger?.LogError(ex, "Migration to v22 failed");
         throw;
     }
 }
```

As per coding guidelines, Lite/**/*.cs should watch for DuckDB access patterns and reliability in migration behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Lite/Database/DuckDbInitializer.cs` around lines 587 - 601, The migration
block in DuckDbInitializer.cs that runs ALTER TABLE on database_size_stats
should first verify the table exists to avoid aborting initialization; use a
lightweight existence check (e.g. query information_schema.tables or run a
scalar like SELECT COUNT(*) ...) via the same connection and ExecuteScalarAsync
before calling ExecuteNonQueryAsync for the three ALTER statements, and only run
them when the table exists; keep logging with _logger but do not rethrow when
the table is absent so subsequent table-creation logic can proceed.
```csharp
public string GrowthDisplay => IsPercentGrowth switch
{
    null => "-",
    true => GrowthPct.HasValue ? $"{GrowthPct}%" : "-",
    false => AutoGrowthMb == null || AutoGrowthMb == 0 ? "Disabled" : $"{AutoGrowthMb:N0} MB"
};
```
GrowthDisplay can mislabel missing fixed-growth values as "Disabled".

When IsPercentGrowth == false and AutoGrowthMb is null, the UI currently shows "Disabled" instead of an unknown state, which can misrepresent incomplete data.

💡 Proposed fix

```diff
 public string GrowthDisplay => IsPercentGrowth switch
 {
     null => "-",
     true => GrowthPct.HasValue ? $"{GrowthPct}%" : "-",
-    false => AutoGrowthMb == null || AutoGrowthMb == 0 ? "Disabled" : $"{AutoGrowthMb:N0} MB"
+    false => !AutoGrowthMb.HasValue ? "-" :
+        AutoGrowthMb.Value == 0 ? "Disabled" : $"{AutoGrowthMb:N0} MB"
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Lite/Services/LocalDataService.FinOps.cs` around lines 2255 - 2260,
GrowthDisplay currently treats a null AutoGrowthMb as "Disabled" when
IsPercentGrowth == false; change the IsPercentGrowth == false branch in
GrowthDisplay so it distinguishes null (unknown) from zero (disabled): if
AutoGrowthMb is null return "-" (unknown), if AutoGrowthMb == 0 return
"Disabled", otherwise return $"{AutoGrowthMb:N0} MB"; update the expression
referencing GrowthDisplay, IsPercentGrowth and AutoGrowthMb accordingly.
```sql
is_percent_growth =
    mf.is_percent_growth,
growth_pct =
    CASE WHEN mf.is_percent_growth = 1 THEN mf.growth ELSE NULL END,
vlf_count =
    CASE WHEN mf.type = 1 /*LOG*/ THEN (SELECT COUNT(*) FROM sys.dm_db_log_info(mf.database_id)) ELSE NULL END
```
🧩 Analysis chain

🏁 Scripts executed:

```shell
wc -l Lite/Services/RemoteCollectorService.DatabaseSize.cs
cat -n Lite/Services/RemoteCollectorService.DatabaseSize.cs | head -150
cat -n Lite/Services/RemoteCollectorService.DatabaseSize.cs | sed -n '100,250p'
```

Repository: erikdarlingdata/PerformanceMonitor
🌐 Web query: SQL Server sys.dm_db_log_info permissions requirements, database_id parameter

💡 Result:

sys.dm_db_log_info ( database_id )

database_id parameter
- Type: int
- Valid inputs: a database ID, NULL, or DEFAULT. NULL and DEFAULT are equivalent and mean "current database".
- You can pass DB_ID() (and DB_ID() without a name requires the current database compatibility level to be 90+). [1]

Permissions required
- SQL Server 2022 (16.x) and later: requires VIEW DATABASE PERFORMANCE STATE on the target database. [1]
- Older versions that support this DMF (SQL Server 2016 SP2+ through SQL Server 2019): requires VIEW SERVER STATE. [1]

Source
- Microsoft Learn: sys.dm_db_log_info (Transact-SQL) [1]
Unguarded sys.dm_db_log_info call risks collection failure; VLF counts are duplicated across log files instead of per-file.

The on-prem outer query at line 125 calls sys.dm_db_log_info(mf.database_id) without a permission guard. SQL Server 2022+ requires VIEW DATABASE PERFORMANCE STATE on the target database; older versions require VIEW SERVER STATE. If any online database lacks this permission, the DMV call fails and blocks the entire collection (no try-catch protects this). Additionally, both the on-prem and Azure branches count all VLFs in the database and assign that total to every log file row, rather than computing per-file VLF counts via file_id grouping.

🔧 Suggested shape

```diff
+CREATE TABLE #vlf_counts
+(
+    database_id int NOT NULL,
+    file_id int NOT NULL,
+    vlf_count int NOT NULL
+);
...
+INSERT #vlf_counts (database_id, file_id, vlf_count)
+SELECT
+    DB_ID(),
+    li.file_id,
+    COUNT(*)
+FROM sys.dm_db_log_info(DB_ID()) AS li
+GROUP BY
+    li.file_id;
...
-vlf_count =
-    CASE WHEN mf.type = 1 /*LOG*/ THEN (SELECT COUNT(*) FROM sys.dm_db_log_info(mf.database_id)) ELSE NULL END
+vlf_count = vc.vlf_count
...
+LEFT JOIN #vlf_counts AS vc
+    ON vc.database_id = mf.database_id
+    AND vc.file_id = mf.file_id
```

And in the Azure branch:

```diff
-vlf_count =
-    CASE WHEN df.type = 1 /*LOG*/ THEN (SELECT COUNT(*) FROM sys.dm_db_log_info(DB_ID())) ELSE NULL END
+vlf_count =
+    CASE
+        WHEN df.type = 1 /*LOG*/
+        THEN
+        (
+            SELECT COUNT(*)
+            FROM sys.dm_db_log_info(DB_ID()) AS li
+            WHERE li.file_id = df.file_id
+        )
+        ELSE NULL
+    END
```

Also applies to: 179-184
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Lite/Services/RemoteCollectorService.DatabaseSize.cs` around lines 120 - 125,
The sys.dm_db_log_info call used to populate vlf_count (in the vlf_count CASE
when mf.type = 1) must be guarded per-database to avoid aborting collection when
the caller lacks VIEW DATABASE PERFORMANCE STATE or VIEW SERVER STATE; replace
the direct unguarded call with a permission check (e.g.,
HAS_PERMS_BY_NAME(DB_NAME(mf.database_id),'DATABASE','VIEW DATABASE PERFORMANCE
STATE') OR server-level check) and only call sys.dm_db_log_info when permitted
(otherwise return NULL or -1), and change the VLF counting logic to compute
per-file VLFs by filtering/grouping on file_id (use WHERE file_id = mf.file_id
or COUNT(*) FILTER/CASE by file_id) so each log file row gets its own VLF count;
apply the same guarded-permission and per-file counting fix to the Azure branch
and the other affected block around lines 179-184 where sys.dm_db_log_info is
used.
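The guard the prompt describes can be sketched as below. This is an illustrative fragment, not the PR's actual code: the `-1` sentinel for "permission denied" and the bare `sys.master_files` query shape are assumptions taken from the prompt text.

```
/* Illustrative sketch only: guard the DMV call per database and count VLFs per file.
   The -1 "permission denied" sentinel is an assumption from the prompt, not PR code. */
SELECT
    mf.database_id,
    mf.file_id,
    vlf_count =
        CASE
            WHEN mf.type <> 1 THEN NULL                     /* only log files have VLFs */
            WHEN HAS_PERMS_BY_NAME(DB_NAME(mf.database_id), 'DATABASE',
                                   'VIEW DATABASE PERFORMANCE STATE') = 1
            THEN
            (
                SELECT COUNT(*)
                FROM sys.dm_db_log_info(mf.database_id) AS li
                WHERE li.file_id = mf.file_id               /* per-file, not whole-database */
            )
            ELSE -1                                         /* caller lacks permission */
        END
FROM sys.master_files AS mf;
```

The key point is that `HAS_PERMS_BY_NAME` short-circuits the DMV call for databases where the collector lacks rights, so one under-privileged database no longer aborts the whole collection.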
IF OBJECT_ID(N'collect.database_size_stats', N'U') IS NOT NULL
BEGIN
    PRINT 'Checking collect.database_size_stats columns...';

    IF NOT EXISTS
    (
        SELECT
            1/0
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = N'collect'
            AND TABLE_NAME = N'database_size_stats'
            AND COLUMN_NAME = N'is_percent_growth'
    )
    BEGIN
        ALTER TABLE collect.database_size_stats ADD is_percent_growth bit NULL;
        PRINT '    is_percent_growth: added (bit NULL)';
    END;

    IF NOT EXISTS
    (
        SELECT
            1/0
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = N'collect'
            AND TABLE_NAME = N'database_size_stats'
            AND COLUMN_NAME = N'growth_pct'
    )
    BEGIN
        ALTER TABLE collect.database_size_stats ADD growth_pct integer NULL;
        PRINT '    growth_pct: added (integer NULL)';
    END;

    IF NOT EXISTS
    (
        SELECT
            1/0
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = N'collect'
            AND TABLE_NAME = N'database_size_stats'
            AND COLUMN_NAME = N'vlf_count'
    )
    BEGIN
        ALTER TABLE collect.database_size_stats ADD vlf_count integer NULL;
        PRINT '    vlf_count: added (integer NULL)';
    END;

    PRINT 'database_size_stats complete.';
END;
ELSE
BEGIN
    PRINT 'collect.database_size_stats not found — skipping.';
END;
GO
Wrap DDL changes in TRY/CATCH with transaction rollback.
Line 31 onward performs multiple schema mutations without structured error handling. If one ALTER TABLE fails, the upgrade can leave a partially-updated schema.
Proposed hardening
+SET XACT_ABORT ON;
+
+BEGIN TRY
+ BEGIN TRAN;
+
IF OBJECT_ID(N'collect.database_size_stats', N'U') IS NOT NULL
BEGIN
PRINT 'Checking collect.database_size_stats columns...';
@@
PRINT 'database_size_stats complete.';
END;
ELSE
BEGIN
PRINT 'collect.database_size_stats not found — skipping.';
END;
+
+ COMMIT TRAN;
+END TRY
+BEGIN CATCH
+ IF @@TRANCOUNT > 0
+ ROLLBACK TRAN;
+ THROW;
+END CATCH;

As per coding guidelines, "**/*.sql: ... watch for ... missing error handling (TRY/CATCH) ...".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upgrades/2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql` around lines 31 - 83,
Wrap the ALTER TABLE statements that add is_percent_growth, growth_pct, and
vlf_count on collect.database_size_stats in a TRY/CATCH block and run them
inside an explicit transaction: begin a transaction before the first ALTER,
perform each EXISTS check and corresponding ALTER TABLE inside the TRY, and on
error catch roll back the transaction and re-raise/log the error; on success
commit the transaction. Ensure the logic still prints the existing status
messages (e.g., "is_percent_growth: added...") only after successful ALTERs and
that the TRY/CATCH references the same object checks and column names used in
the current script (collect.database_size_stats, is_percent_growth, growth_pct,
vlf_count).
- collection_health view: cast duration_ms and rows_collected to bigint before AVG/SUM to prevent arithmetic overflow after 7 days of data
- collection_health C# reader: use Convert.ToInt64 for avg_duration_ms, update model property from int to long
- query_stats reader: min_dop/max_dop columns are bigint in the table but reader called GetInt16() — changed to Convert.ToInt16(GetValue())

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
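The overflow fix in the first bullet follows the standard widen-before-aggregate pattern sketched below. Table and column names are taken from the commit message and may not match the actual view definition exactly.

```
/* Sketch only: SUM/AVG over an int column return int in SQL Server, so a week of
   millisecond durations can overflow. Widening to bigint before aggregating avoids it.
   Table/column names are assumptions from the commit message. */
SELECT
    collector_name,
    avg_duration_ms = AVG(CAST(duration_ms AS bigint)),
    total_rows      = SUM(CAST(rows_collected AS bigint))
FROM config.collection_log
GROUP BY
    collector_name;
```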
🧹 Nitpick comments (1)
Dashboard/Services/DatabaseService.cs (1)
179-179: Minor: Use `0L` literal for consistency with adjacent lines.

The default value uses `0` while adjacent long properties on lines 177, 178, and 180 use `0L`. While functionally equivalent due to implicit conversion, using `0L` maintains consistent style.

Suggested fix

- AvgDurationMs = reader.IsDBNull(7) ? 0 : Convert.ToInt64(reader.GetValue(7), CultureInfo.InvariantCulture),
+ AvgDurationMs = reader.IsDBNull(7) ? 0L : Convert.ToInt64(reader.GetValue(7), CultureInfo.InvariantCulture),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dashboard/Services/DatabaseService.cs` at line 179, The AvgDurationMs assignment in DatabaseService (property AvgDurationMs) uses the integer literal 0 instead of the long literal 0L; update the ternary expression (reader.IsDBNull(7) ? 0 : Convert.ToInt64(reader.GetValue(7), CultureInfo.InvariantCulture)) to use 0L so it matches adjacent long defaults and maintains consistent styling across the long property assignments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@Dashboard/Services/DatabaseService.cs`:
- Line 179: The AvgDurationMs assignment in DatabaseService (property
AvgDurationMs) uses the integer literal 0 instead of the long literal 0L; update
the ternary expression (reader.IsDBNull(7) ? 0 :
Convert.ToInt64(reader.GetValue(7), CultureInfo.InvariantCulture)) to use 0L so
it matches adjacent long defaults and maintains consistent styling across the
long property assignments.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 051872a6-d90a-46ec-b7ff-9262fd8de000
📒 Files selected for processing (4)
- Dashboard/Models/CollectionHealthItem.cs
- Dashboard/Services/DatabaseService.QueryPerformance.cs
- Dashboard/Services/DatabaseService.cs
- install/47_create_reporting_views.sql
DST clock-backward adjustments cause DATEDIFF(MILLISECOND, @start, SYSDATETIME()) to return negative values when the wall clock jumps back. This pollutes avg_duration_ms in report.collection_health. Clamp to 0 with IIF rather than switching to UTC, which would break consistency with existing data. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
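The clamp this commit describes follows the pattern sketched below; the variable names are illustrative, not the PR's actual identifiers.

```
/* Illustrative sketch: DATEDIFF goes negative if the wall clock jumps back (DST),
   so floor the measured duration at 0 before logging. Variable names are assumptions. */
DECLARE @start datetime2 = SYSDATETIME();
/* ... collector work runs here ... */
DECLARE @duration_ms integer = DATEDIFF(MILLISECOND, @start, SYSDATETIME());
SET @duration_ms = IIF(@duration_ms < 0, 0, @duration_ms);
```

Clamping keeps the stored durations consistent with existing local-time data, whereas switching the timestamps to UTC would change the meaning of every previously collected row.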
🧹 Nitpick comments (2)
install/42_scheduled_master_collector.sql (2)
353-353: Double `DATEDIFF` evaluation can be avoided.

The `IIF` pattern evaluates `DATEDIFF` twice: once for the condition and once for the result. While SQL Server may optimize this in some cases, it's not guaranteed. Consider computing the duration once into a variable before the INSERT.

♻️ Suggested refactor to avoid double evaluation

 BEGIN CATCH
     SET @total_errors = @total_errors + 1;
     SET @error_message = ERROR_MESSAGE();
+
+    DECLARE @error_duration_ms integer = DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME());
+    SET @error_duration_ms = IIF(@error_duration_ms < 0, 0, @error_duration_ms);

     /* Log individual collector error but continue with others */
     INSERT INTO config.collection_log
     (
         collector_name,
         collection_status,
         duration_ms,
         error_message
     )
     VALUES
     (
         @collector_name,
         N'ERROR',
-        IIF(DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME()) < 0, 0, DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME())),
+        @error_duration_ms,
         @error_message
     );

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@install/42_scheduled_master_collector.sql` at line 353, The IIF currently calls DATEDIFF(MILLISECOND, `@collector_start_time`, SYSDATETIME()) twice; compute that duration once into a local variable (e.g. `@duration_ms`) before the INSERT by assigning `@duration_ms` = IIF(DATEDIFF... < 0, 0, DATEDIFF...) or better compute raw = DATEDIFF(...) then set `@duration_ms` = CASE WHEN raw < 0 THEN 0 ELSE raw END, and then use `@duration_ms` in place of the repeated DATEDIFF call; update references to `@collector_start_time` and SYSDATETIME() accordingly so the INSERT uses the single evaluated variable.
366-366: Same double-evaluation concern applies here.

This line also evaluates `DATEDIFF` twice. Consider simplifying:

♻️ Suggested simplification

-SET @collector_duration_ms = IIF(DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME()) < 0, 0, DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME()));
+SET @collector_duration_ms = DATEDIFF(MILLISECOND, @collector_start_time, SYSDATETIME());
+SET @collector_duration_ms = IIF(@collector_duration_ms < 0, 0, @collector_duration_ms);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@install/42_scheduled_master_collector.sql` at line 366, The calculation for `@collector_duration_ms` calls DATEDIFF twice; compute the difference once into a temporary variable (e.g., `@diff_ms`) using DATEDIFF(MILLISECOND, `@collector_start_time`, SYSDATETIME()) and then set `@collector_duration_ms` with IIF(`@diff_ms` < 0, 0, `@diff_ms`) so you avoid double-evaluation of DATEDIFF and make the intent clearer (references: `@collector_duration_ms`, `@collector_start_time`, DATEDIFF, SYSDATETIME, IIF).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@install/42_scheduled_master_collector.sql`:
- Line 353: The IIF currently calls DATEDIFF(MILLISECOND, `@collector_start_time`,
SYSDATETIME()) twice; compute that duration once into a local variable (e.g.
`@duration_ms`) before the INSERT by assigning `@duration_ms` = IIF(DATEDIFF... < 0,
0, DATEDIFF...) or better compute raw = DATEDIFF(...) then set `@duration_ms` =
CASE WHEN raw < 0 THEN 0 ELSE raw END, and then use `@duration_ms` in place of the
repeated DATEDIFF call; update references to `@collector_start_time` and
SYSDATETIME() accordingly so the INSERT uses the single evaluated variable.
- Line 366: The calculation for `@collector_duration_ms` calls DATEDIFF twice;
compute the difference once into a temporary variable (e.g., `@diff_ms`) using
DATEDIFF(MILLISECOND, `@collector_start_time`, SYSDATETIME()) and then set
`@collector_duration_ms` with IIF(`@diff_ms` < 0, 0, `@diff_ms`) so you avoid
double-evaluation of DATEDIFF and make the intent clearer (references:
`@collector_duration_ms`, `@collector_start_time`, DATEDIFF, SYSDATETIME, IIF).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 90232b8f-7938-482c-98ab-f7d214d95ba8
📒 Files selected for processing (1)
install/42_scheduled_master_collector.sql
- Enable horizontal scrollbar (was Disabled, clipping new columns)
- Unify all column filter buttons to DatabaseSizesFilter_Click (existing columns used FinOpsFilter_Click, causing split filter state)
- VLF count: filter sys.dm_db_log_info by file_id so each log file gets its own count instead of the whole-database total (Dashboard + Lite)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
install/52_collect_database_size_stats.sql (1)
298-306: ⚠️ Potential issue | 🟡 Minor

Inner CATCH block references uninitialized `@error_message`.

The `@error_message` variable is used in the RAISERROR at line 304 but `ERROR_MESSAGE()` is never captured within this CATCH block. The variable will always be NULL here, making debug output unhelpful for diagnosing per-database collection failures.

Proposed fix

 BEGIN CATCH
     /* Log per-database errors but continue with remaining databases */
     IF @debug = 1
     BEGIN
-        RAISERROR(N'Error collecting size stats for database [%s]: %s', 0, 1, @db_name, @error_message) WITH NOWAIT;
+        DECLARE @inner_error nvarchar(4000) = ERROR_MESSAGE();
+        RAISERROR(N'Error collecting size stats for database [%s]: %s', 0, 1, @db_name, @inner_error) WITH NOWAIT;
     END;
 END CATCH;

As per coding guidelines: "T-SQL stored procedures and scripts for SQL Server. Watch for: ... missing error handling (TRY/CATCH)".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@install/52_collect_database_size_stats.sql` around lines 298 - 306, the inner CATCH uses `@error_message` in the RAISERROR but never captures ERROR_MESSAGE(); update the BEGIN CATCH block so it assigns the error info into the existing variables (e.g. set `@error_number` = ERROR_NUMBER(), set `@error_message` = ERROR_MESSAGE(), optionally set `@error_severity`/`@error_state`) before calling RAISERROR when `@debug` = 1, ensuring the RAISERROR in that CATCH prints the actual error text rather than NULL.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@install/52_collect_database_size_stats.sql`:
- Around line 298-306: The inner CATCH uses `@error_message` in the RAISERROR but
never captures ERROR_MESSAGE(); update the BEGIN CATCH block so it assigns the
error info into the existing variables (e.g. set `@error_number` = ERROR_NUMBER(),
set `@error_message` = ERROR_MESSAGE(), optionally set `@error_severity/`
`@error_state`) before calling RAISERROR when `@debug` = 1, ensuring the RAISERROR
in that CATCH prints the actual error text rather than NULL.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3b6edccf-c104-4f9b-9101-dbefbec2881c
📒 Files selected for processing (3)
- Dashboard/Controls/FinOpsContent.xaml
- Lite/Services/RemoteCollectorService.DatabaseSize.cs
- install/52_collect_database_size_stats.sql
The per-database inner CATCH referenced @error_message which was never set — debug output always showed NULL. Use ERROR_MESSAGE() directly. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Resolves the merge conflict in PR #608 from @ClaudioESSilva by rebasing onto current dev.
- 2.2.0-to-2.3.0/03_add_growth_vlf_columns.sql
- DataGridFilterManager for Database Sizes column filtering

The XAML conflict was a full-file reformat in the PR vs dev's current version. Resolution: kept dev's XAML, added the two new columns manually.
Supersedes #608.
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Bug Fixes
Chores