Releases: erikdarlingdata/PerformanceMonitor
v2.3.0
See CHANGELOG.md for full release notes.
Highlights
- ErikAI analysis engine — rule-based inference for Lite and Dashboard with anomaly detection, bad actor scoring, and CPU spike detection
- FinOps cost optimization — Phase 1-4 recommendations (enterprise feature audit, CPU/memory right-sizing, compression savings, VM right-sizing, reserved capacity)
- Azure SQL DB fixes — all FinOps collectors now query each database individually
- Server unreachable email alerts — tray + email on offline/online transitions
- Lite data import from previous install for seamless upgrades
- SQL Server version check in both installers (rejects 2014 and earlier)
- 35 installer adversarial tests covering upgrade failures, data survival, idempotency
- Full MCP tool coverage — Dashboard 57 tools, Lite 51 tools
- Growth rate and VLF count columns in Database Sizes
- Column filters on all FinOps DataGrids
- Per-server Utility Database setting for community procs
- Numerous bug fixes including the #538 installer data-loss fix
Nightly Build (2.3.0-nightly.20260319)
Automated nightly build from dev branch.
Version: 2.3.0-nightly.20260319
Commit: eb3e4c2
Built: 2026-03-19 06:35 UTC
These builds include the latest changes and may be unstable.
For production use, download the latest stable release.
v2.2.0
Performance Monitor v2.2.0 — Release Notes
Release date: 2026-03-11
Previous release: v2.1.0 (2026-03-04)
LOB Compression — 19× Storage Reduction
The three largest collection tables in Dashboard — query_stats, query_store_data, and procedure_stats — store query text and execution plan XML as nvarchar(max). On a busy instance, these tables dominate the database footprint. A production server collecting from 4 instances for two weeks had 597 MB of raw text across these tables.
v2.2.0 migrates these columns to varbinary(max) with SQL Server's built-in COMPRESS() function (gzip). The upgrade performs an atomic table swap per table: create new schema → migrate data with compression → rename. A row_hash column (binary(32), computed via HASHBYTES('SHA2_256', ...)) is added to each table for deduplication, preventing the same query from being stored repeatedly across collection cycles.
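Why LOB text compresses so well can be sketched with Python's stdlib, since SQL Server's COMPRESS() emits gzip and HASHBYTES('SQL2_256'-style hashing maps to SHA-256. This is an illustrative stand-in only — the sample text and sizes are made up, and real plan XML will compress at different ratios:

```python
import gzip
import hashlib

# Hypothetical stand-in for collected query text: the same statement
# shape repeated thousands of times with slight parameter variations.
query_text = "\n".join(
    f"SELECT * FROM dbo.orders WHERE order_id = {i}" for i in range(10_000)
)

raw = query_text.encode("utf-16-le")       # nvarchar(max) stores UTF-16
compressed = gzip.compress(raw)            # COMPRESS() also emits gzip
row_hash = hashlib.sha256(raw).digest()    # SHA-256 digest -> 32 bytes, like binary(32)

print(f"{len(raw):,} B -> {len(compressed):,} B "
      f"(~{len(raw) / len(compressed):.0f}x), hash {len(row_hash)} B")
```

Reads on the SQL side just wrap the column in DECOMPRESS(), which is the `gzip.decompress()` equivalent.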
Measured Results
| Table | Before | After | Ratio |
|---|---|---|---|
| query_stats | 339 MB | 18.0 MB | 18.8× |
| query_store_data | 258 MB | 13.5 MB | 19.1× |
| procedure_stats | ~comparable | ~comparable | ~19× |
| Total | 597 MB | ~31.5 MB | ~19× |
Query text and plan XML compress extremely well because they're highly repetitive — the same SELECT statement structure appears thousands of times with slight parameter variations. The COMPRESS() approach works on Standard Edition (unlike page/row compression, which was Enterprise-only before 2016 SP1), requires no application changes beyond wrapping reads in DECOMPRESS(), and is completely transparent to the Dashboard UI.
Upgrade Experience
The migration runs automatically via the CLI or GUI installer. Each table swap takes a few minutes depending on table size. The upgrade scripts are fully idempotent — if the columns are already varbinary, the script prints a skip message and exits. No downtime required; collectors continue running during migration since the swap is atomic.
Deduplication Payoff
The row_hash column eliminates a long-standing data growth problem. Previously, every collection cycle could re-insert the same query text if it was still in the plan cache. With hash-based deduplication, a query that hasn't changed between cycles is skipped. Combined with compression, the steady-state database size for a typical 4-server deployment dropped from multiple gigabytes to under 200 MB.
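The hash-skip logic amounts to this (a minimal Python sketch; in the real schema the check is a row_hash lookup against the collection table, not an in-memory set):

```python
import hashlib

seen_hashes = set()   # stand-in for row_hash values already in the table

def store_if_new(query_text):
    """Store collected text only when its SHA-256 hash is unseen, so a
    query still sitting in the plan cache isn't re-inserted every cycle."""
    row_hash = hashlib.sha256(query_text.encode("utf-16-le")).digest()
    if row_hash in seen_hashes:
        return False                   # unchanged since last cycle: skip
    seen_hashes.add(row_hash)
    return True                        # new or changed: insert

first = store_if_new("SELECT * FROM dbo.orders WHERE order_id = @id")
second = store_if_new("SELECT * FROM dbo.orders WHERE order_id = @id")
# first cycle stores the text; the identical second cycle is skipped
```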
FinOps Monitoring Tab
v2.2.0 adds a dedicated FinOps tab to both Dashboard and Lite with eight panels for cost optimization and resource utilization analysis. This isn't a single metric — it's a full operational efficiency toolkit built on data the collectors already gather.
Utilization & Provisioning
The Utilization panel aggregates CPU and memory metrics into a provisioning classification:
| Status | Criteria |
|---|---|
| Over-provisioned | Avg CPU < 15% AND Max CPU < 40% AND memory ratio < 0.5 |
| Under-provisioned | P95 CPU > 85% OR memory ratio > 0.95 |
| Right-sized | Everything else |
Memory ratio is target memory used / target memory allocated — a server using 12 GB of a 64 GB target is at 0.19, clearly over-provisioned on memory.
A 7-day provisioning trend shows daily classification history, so you can distinguish "consistently over-provisioned" from "spiked once on Tuesday."
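The classification rules above reduce to a small decision function. This is a hypothetical Python sketch of the table's thresholds (the shipping implementation is C# inside Dashboard/Lite):

```python
def classify_provisioning(avg_cpu, max_cpu, p95_cpu, memory_ratio):
    """Classify provisioning from aggregated metrics: CPU values are
    percentages, memory_ratio is target memory used / allocated."""
    if avg_cpu < 15 and max_cpu < 40 and memory_ratio < 0.5:
        return "Over-provisioned"
    if p95_cpu > 85 or memory_ratio > 0.95:
        return "Under-provisioned"
    return "Right-sized"

# A quiet server using 12 GB of a 64 GB target (ratio 0.19):
print(classify_provisioning(avg_cpu=10, max_cpu=35, p95_cpu=30,
                            memory_ratio=12 / 64))  # prints Over-provisioned
```

Note the two branches can never both match: the over-provisioned test requires memory_ratio < 0.5 and max CPU < 40, which rules out p95 CPU > 85 and memory_ratio > 0.95.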
Database Resources
Per-database workload breakdown: CPU time, logical/physical reads and writes, execution count, I/O stall time. Each database gets a CPU share and I/O share percentage. This answers "which database is actually consuming this server's resources" — essential for chargeback conversations and consolidation planning.
Storage Growth
7-day and 30-day growth analysis per database. Shows absolute growth in MB, daily growth rate, and growth percentage. A database growing 500 MB/day for 30 days is a different conversation than one that spiked once. Both show up in file size views, but only the trend tells the story.
Idle Database Detection
Databases with zero executions in the configured lookback window. Includes database size so you can prioritize — a 50 GB idle database is worth investigating, a 10 MB one isn't. Shows last known execution time when available.
Index Analysis with sp_IndexCleanup
Integration with the DarlingData sp_IndexCleanup stored procedure. Checks if the proc exists on the target server, then executes it to produce:
- Summary view: Per-database rollup of total indexes, removable count, mergeable count, compressible count, space savings in GB, and percentage reduction.
- Detail view: Per-index recommendations with generated DROP/MERGE scripts, consolidation rules, size before/after, and usage statistics (reads, writes, lock waits, latch waits).
The summary answers "how much can I save?" The detail provides the runnable scripts to actually do it.
Application Connections
Connected application breakdown with average and max connection counts. Useful for identifying connection pool leaks or applications holding excessive connections.
Optimization Panel
Four sub-sections:
- Wait Categories: Categorized wait stats with percentages and top wait type per category
- Expensive Queries: Top queries by CPU with full query text
- TempDB Pressure: User object reserved/peak, internal object usage, and warnings
- Memory Grant Efficiency: Daily breakdown of granted vs used memory, efficiency percentage, peak grants, and timeout/forced grant counts
DuckDB 1.5.0 + Parquet Compaction (Lite)
Lite's local DuckDB database had two compounding performance problems. The archive system produced one parquet file per table per archive cycle — with 25 tables and the archive-and-reset cycle firing every ~75 minutes, this generated 525 new files daily. After two weeks, the archive directory contained 2,611 parquet files. Every view query paid per-file metadata overhead on a glob scan.
DuckDB 1.4.4's write path made this worse. INSERT performance degraded linearly with table size (33× slower at 667 MB vs empty), and deleted blocks couldn't be reused, forcing the frequent archive-and-reset cycles that produced all those files.
DuckDB 1.5.0
Updated from 1.4.4 to 1.5.0 (released 2026-03-10):
- Non-blocking checkpointing — WAL-to-disk checkpoints no longer stall concurrent readers. Eliminated intermittent "Reached the end of the file" errors and read latency spikes during checkpoint operations.
- Free block reuse — deleted blocks are marked free and reused for new INSERTs. The database file stabilizes at a steady-state size instead of growing monotonically.
- Storage format v67 → v68 — upgrades transparently on first open. Parquet archives unaffected.
Under identical workload (5 SQL Servers, periodic TPC-C load):
- 1.4.4: Database grew to 512 MB every ~75 minutes, triggering 21 archive-and-reset cycles per day
- 1.5.0: Database stabilized at ~424 MB with zero resets in 6+ hours of monitoring
Automatic Parquet Compaction
New CompactParquetFiles() method runs after every archive cycle. Groups all parquet files by (month, table), merges each group into a single YYYYMM_tablename.parquet file using an in-memory DuckDB connection (no contention with the main database). Handles schema evolution across archive generations via union_by_name=true. Strips dead columns during compaction — the query_plan_text column in query_store_stats was NULL in new data but old archives still contained ~20 KB of plan XML per row.
Steady-state file count: ~75 files (25 tables × 3 months). Down from 2,611.
Retention switched from 90-day file-age deletion to 3-month calendar-month deletion, aligned with the monthly compacted filename format.
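The grouping step can be sketched in a few lines of Python. The per-cycle filename pattern here (`<table>_<YYYYMMDD>_<HHMMSS>.parquet`) is an assumption for illustration; the actual merge is done by DuckDB reading each group with `union_by_name=true`:

```python
from collections import defaultdict
from pathlib import PurePath

def plan_compaction(filenames):
    """Group per-cycle archive files by (month, table) and return the
    merge plan: {compacted YYYYMM_tablename.parquet: [source files]}."""
    plan = defaultdict(list)
    for name in filenames:
        # rsplit from the right so table names containing underscores survive
        table, date, _time = PurePath(name).stem.rsplit("_", 2)
        plan[f"{date[:6]}_{table}.parquet"].append(name)
    return dict(plan)

files = [
    "wait_stats_20260301_061500.parquet",
    "wait_stats_20260301_073000.parquet",
    "wait_stats_20260215_120000.parquet",
    "query_store_stats_20260301_061500.parquet",
]
print(plan_compaction(files))  # three target files from four sources
```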
Performance Results
| Metric | Before | After |
|---|---|---|
| v_wait_stats query (EXPLAIN ANALYZE) | 1,700ms | 27ms |
| Parquet files scanned per glob | 233 | 19 |
| Lite RefreshAllDataAsync | 6-13s (pre-opt) / 1.7-4.3s (post #510) | < 500ms |
| Slow query alerts (6+ hours) | Multiple | Zero |
| Archive-and-reset frequency | 21/day | ~0 |
| Parquet files generated per day | ~525 | ~25 |
The method profiler has a 500ms threshold. After 75 minutes of runtime, Lite stopped logging RefreshAllDataAsync entirely — every refresh cycle was completing in under half a second.
Compound Effect
No single change achieved sub-500ms:
- Visible-tab-only refresh (#510): 6-13s → 1.7-4.3s
- Parquet compaction: 1.7-4.3s → 1.0-1.5s
- DuckDB 1.5.0 non-blocking checkpoint: eliminated latency spikes
- DuckDB 1.5.0 free block reuse: eliminated the archive churn that generated files
- Together: 6-13s → < 500ms (13-26× improvement)
Upgrade Path
No user action required. DuckDB storage format upgrades on first launch. First archive cycle compacts existing files (may take 30-60 seconds for large archive directories). Subsequent cycles are near-instant.
SignPath Code Signing
Starting with v2.2.0, all release binaries (Dashboard, Lite, and Installers) are digitally signed via SignPath, an OSS-friendly code signing service. This means:
- Windows SmartScreen no longer flags the executables as unrecognized
- Corporate environments with signature-required policies can deploy without exception requests
- The installer MSI chain of trust is verifiable end-to-end
SignPath was approved under their Free Open Source Software program. Signing is integrated into the GitHub Actions release workflow — when a version tag is pushed, the build artifacts are submitted to SignPath for signing before the GitHub release is published.
Other Notable Changes
ReadOnlyIntent Connection Option (Lite)
Lite connections can now set ApplicationIntent=ReadOnly, which enables read-only routing on Always On Availability Groups.
v2.1.0
What's New
Light Themes & Visual Polish
- Light theme and "Cool Breeze" theme with live preview in settings — @ClaudioESSilva (#347)
- Warning status icon in server health indicators — @ClaudioESSilva (#355)
- Time display mode toggle — show timestamps in Server Time, Local Time, or UTC (#17)
- Add Server dialog visual parity between Dashboard and Lite — @ClaudioESSilva (#399)
Plan Viewer Enhancements
- Standalone Plan Viewer — open, paste, or drag & drop .sqlplan files with tabbed multi-plan support — @ClaudioESSilva (#397, #398)
- 30 PlanAnalyzer rules — expanded from 12 to 30 covering implicit conversions, lazy spools, exchange spills, and more
- Wait stats banner showing top waits for the query
- UDF runtime details — CPU and elapsed time in Runtime Summary
- Sortable statement grid and canvas panning
New Features
- Comma-separated column filters — enter multiple filter values
- Optional query text and plan collection — per-collector flags in collection schedule
- `--preserve-jobs` installer flag — keep existing SQL Agent job schedules during upgrade
- Copy Query Text context menu on Dashboard statements grid
- Server list sorting by display name — @HannahVernon (#330)
Bug Fixes
- OverflowException on wait stats page with large decimal values
- SQL dumps on mirroring passive servers with RESTORING databases
- UI hang when adding first server to Dashboard
- UTC/local timezone mismatch in blocked process XML processor
- AG secondary filter skipping all inaccessible databases
- DuckDB column aliases in long-running queries — @HannahVernon (#391)
- sp_server_diagnostics and WAITFOR excluded from long-running query alerts — @HannahVernon (#362)
- Installer validation reporting historical collection errors as current failures
- query_snapshots schema mismatch after sp_WhoIsActive upgrade — auto-recreates daily table
- Missing upgrade script for default_trace_events columns on 2.0.0→2.1.0 upgrade
See CHANGELOG.md for the full list.
Contributors
Thank you to the contributors who helped make this release possible!
- @ClaudioESSilva — Light themes, Cool Breeze theme, standalone plan viewer with multi-file support, warning status icon, Add Server dialog parity
- @HannahVernon — Server list sorting, DuckDB alias fixes, long-running query alert exclusions
Installation
Download the appropriate zip for your use case:
- Dashboard — connects to the SQL Server `PerformanceMonitor` database (data collected by SQL Agent jobs)
- Lite — standalone collector + viewer, no SQL Agent required
- Installer — CLI tool to deploy/upgrade the PerformanceMonitor database and SQL Agent jobs
Run the installer first to set up the database, then use Dashboard or Lite to view the data.
v2.0.0 - The SQL
This release was a Stupid© amount of work.
First, thank you to everyone who has submitted issues, questions, feedback, and contributed code.
You're all the best, and if I could hug you all the way I hug a glass of above room temperature Scotch, I would.
After the 1.3 release, I decided to take a new approach to development: Look at everything like I hate it.
Did it work? Well, you be the judge.
I added a graphical plan viewer
Microsoft released the mssql VS Code extension under the MIT license, which conveniently has all of the assets and XML I needed to build it out.
When you're in any of the query-related tabs, you can do two things with a right click:
- View the cached/estimated plan for a query
- Execute the query (exclusions apply) to get the actual execution plan
The "Copy Repro Script" feature was nice, but you had to go run it in SSMS. There's nothing wrong with that, but it's an extra step.
I hate extra steps, whether there's one or twelve of them.
Plus, when I'm in charge, I get to add my own plan analysis rules to it. I was able to get a good bunch in this round, but I'll be adding in more later.
You'll see all the normal warnings you're used to, plus ones I like to warn people about: Filters, Eager Index Spools, Skewed Parallelism, and more.
I added Perfmon Helpers
Remember that time you were looking at a specific problem, and you remembered all the relevant Perfmon counters you'd want to get details about it?
Yeah, me neither.
So now, when you go into the Perfmon counters view, there's a drop down that auto-selects relevant counters for specific problems.
Half the reason people don't use these things is because they're indecipherable. I tried to make them cipherable for you.
I added interactive buttons
The current queries tabs are still queries run on collector schedules.
If you want to see what's running right now, you can press a button to do that.
Again, reduce friction.
Give you everything you need in one place.
Make you happy.
I care about you.
Big Monitoring©️ doesn't care about you.
I added a bunch of new graphs
What would life be without more data? Empty. Meaningless. Like this former glass of Scotch.
- Memory clerks tab in Lite
- Current Waits charts in both (from waiting tasks)
- File I/O throughput, because latency is half the battle
- CPU scheduler pressure, for those Extra days
And more! You'll see.
I made DuckDB behave
It turns out that DuckDB is fantastic at running crazy fast analytical queries over its own tables, Parquet tables, and whatever other files you got.
But a steady stream of small inserts gives it an upset stomach, indigestion, etc.
The database gets huge, fast, and starts slowing down a bunch.
Now, every time DuckDB hits 512MB, it archives all its data to Parquet files and reinitializes itself.
This keeps the inserts flowing, the queries going, and the weird CPU spikes... not spiking.
The Parquet files are much better compressed, and just as quick to query.
I dunno how they do it, but it's pretty cool.
But wait, there's more!
Check out the link right below to see everything that got done.
If you have questions, check out the Read Me.
I didn't name it that because you shouldn't read it.
Seriously. I get about a dozen questions a day that are answered in there.
Speaking of which, I need to go update that.
See the CHANGELOG for full details
v1.3.0
v1.2.0 — Happy One Week Birthday!
Happy one week birthday to Performance Monitor!
What's New
Features
- Collection Health tab in Lite UI — see collector status at a glance (#39)
- Collector performance diagnostics in Lite UI (#40)
- Locale-aware date/time formatting throughout both apps (#41)
- Average ms per wait chart toggle in both Dashboard and Lite (#22)
- Alerts overhaul — alert types, history view, column filtering, dismiss/hide (#52, #56-59)
- Chart hover tooltips on all Dashboard charts (#70) and CPU/Memory/TempDB in Lite (#82)
- FocusServerTabOnClick user setting for server tab navigation (#79)
- Reset collection schedule flag for installer re-installs (#92)
Bug Fixes
- XE ring buffer query timeouts on large buffers (#37)
- query_cost overflow in memory grant collector (#47)
- Chart X-axis double-converting custom range to server time (#49)
- Alert persistence across restarts, time-range ack bug (#44)
- Minimize-to-tray setting ignored in Dashboard, added to Lite (#53)
- Duplicate system events in Lite (#61)
- Dashboard sub-tab badges and DuckDB migration (#68)
- procedure_stats_collector truncation on DDL triggers (#69)
- Deadlock charts not populating (#73)
- DataGrid row height text clipping (#78)
- Chart zero lines, collection log server name, overview online status (#85, #91, #93)
- Skip offline servers during collection, reduced connection timeout (#90)
Infrastructure
- CI pipelines for build validation, SQL install testing, and DuckDB schema tests (#75)
- CI now runs on PRs to both main and dev (#103)
- Chain-trigger blocking/deadlock parsers, XML collection optimization, schedule tuning (#86)
- Replaced all AddWithValue with explicit SqlParameter types (#88)
- NuGet packages updated to latest stable versions (#102) — thanks @MisterZeus!
Downloads
Dashboard, Lite, and Installer ZIPs with SHA256 checksums are attached below.
v1.1.0
What's New
Features
- Hover tooltips on all multi-series charts (#21) — Hover over any line on Wait Stats, Sessions, Latch Stats, Spinlock Stats, File I/O, Perfmon, and TempDB charts to see the series name, value, and timestamp. Works in both Dashboard and Lite. Reported by @BrentOzar
- Microsoft Entra MFA authentication (#20) — Lite now supports Microsoft Entra MFA for connecting to Azure SQL Database. Reported by @Nyck40, implemented by @ClaudioESSilva
- Column-level filtering on all Lite DataGrids (#18) — Filter any column across all 11 grids in Lite
- Chart visual parity (#16) — Material Design 300 color palette, data point markers, and consistent grid styling across all charts
- Smart Select All + 20 wait types (#12) — Wait stats picker now supports up to 20 wait types with intelligent default selection (poison waits + usual suspects + top by time). Reported by @BrentOzar
- Trend chart legends (#11) — Dashboard Performance Trends charts always show legends
- Per-server collector health (#5) — Lite status bar shows collector health per server
- Exclude WAITFOR queries (#4) — WAITFOR queries no longer appear in top queries views. Reported by @sandimschuh
- Server status in overview (#2) — Lite overview shows Online/Offline status for each server
- Check for updates (#1) — Both apps can check for new releases on GitHub
- High DPI support — Both Dashboard and Lite render correctly on high-DPI displays. Implemented by @ClaudioESSilva
Bug Fixes
- Query text off-by-one truncation (#25) — Query text no longer loses its last character. Reported by @BrentOzar
- Blocking/deadlock XML processors (#23) — Fixed processors truncating parsed data on every collection run
- Wait type Clear All — Fixed search filter not refreshing when clearing all wait type selections
Upgrade Notes
- Lite users with Entra MFA: The `servers.json` format now includes an `AuthenticationType` field. Existing configs are migrated automatically — no action needed.
Contributors
Thank you to everyone who reported issues and contributed code:
- @BrentOzar — issues #12, #21, #25
- @ClaudioESSilva — High DPI support, Microsoft Entra MFA authentication
- @Nyck40 — issue #20
- @sandimschuh — issue #4
- @asuhagia — issue #17
Checksums (SHA-256)
8c088141b08dd739d16c017bbabb89633c34c97d9322a15287a2e901a15ed9f5 PerformanceMonitorDashboard-1.1.0.zip
f962218773953564698d3d280c75838ba41a6660793e97eac3c340af7a54e453 PerformanceMonitorInstaller-1.1.0.zip
ea049023e98cf20fda1dd6324cb78c24580e93bc4e041ac203344c8f8f7844a3 PerformanceMonitorLite-1.1.0.zip
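To verify a download against the published checksums, any SHA-256 tool works (`Get-FileHash` or `certutil -hashfile <zip> SHA256` on Windows). A small Python equivalent, streaming so large zips never need to fit in memory:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published value, e.g. for the Lite zip:
# sha256_of("PerformanceMonitorLite-1.1.0.zip") should equal
# "ea049023e98cf20fda1dd6324cb78c24580e93bc4e041ac203344c8f8f7844a3"
```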
v1.0.0 — Initial Release
SQL Server Performance Monitor v1.0.0
Free, open-source SQL Server performance monitoring in two editions.
Downloads
| File | Description |
|---|---|
| PerformanceMonitorInstaller-1.0.0.zip | Full Edition installer (CLI + GUI) with SQL scripts. Run on each server you want to monitor. |
| PerformanceMonitorDashboard-1.0.0.zip | Full Edition dashboard app. Connects to servers where the installer has been run. |
| PerformanceMonitorLite-1.0.0.zip | Lite Edition — standalone app, nothing installed on the server. Supports Azure SQL DB. |
Full Edition Quick Start
- Extract the Installer zip
- Run `PerformanceMonitorInstaller.exe YourServerName` (or use the GUI installer)
- Extract the Dashboard zip and launch to view your data
Lite Edition Quick Start
- Extract the Lite zip
- Run `PerformanceMonitorLite.exe`
- Click Add Server and connect
Supported Platforms
- SQL Server 2016, 2017, 2019, 2022, 2025
- Azure SQL Managed Instance
- AWS RDS for SQL Server
- Azure SQL Database (Lite only)
Requirements
- Windows 10/11 with .NET 8.0 Desktop Runtime
- Full Edition: sysadmin on target, SQL Server Agent running
- Lite Edition: VIEW SERVER STATE on target