
Add Prometheus metrics export endpoint for monitoring integration #80

Merged
Patrick-Ehimen merged 1 commit into Patrick-Ehimen:main from bomanaps:feat/mcp-prometheus-metrics
Feb 21, 2026

Conversation

Collaborator

bomanaps commented Feb 17, 2026

Pull Request

Description

Addresses #57: expose internal metrics in Prometheus format via a /metrics endpoint for integration with standard monitoring stacks. Includes authentication, cache, tool usage, storage, and process metrics, with histogram support for request latencies.

Type of change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation update
  • Other (describe):

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Related Issues

Screenshots (if applicable)

Summary by Sourcery

Expose a Prometheus-compatible /metrics endpoint on the MCP server’s health check HTTP server to export internal metrics for external monitoring systems.

New Features:

  • Add a Prometheus metrics exporter and /metrics HTTP endpoint on the health check server for scraping server metrics.
  • Introduce authentication, cache, tool usage, security event, storage, service pool, and process metrics with histogram support for request and auth latencies.

Enhancements:

  • Extend the authentication manager’s metrics collection to support Prometheus export, including cache hit/miss counters and security event aggregation.
  • Add configurable health check metrics enablement via the PROMETHEUS_METRICS_ENABLED environment variable, enabled by default.

Build:

  • Add the prom-client dependency to support Prometheus metrics collection and export.

Documentation:

  • Document the Prometheus /metrics endpoint, available metrics, configuration options, and example Prometheus and Grafana setups in the MCP server README.


sourcery-ai bot commented Feb 17, 2026

Reviewer's Guide

Adds a Prometheus-based metrics export pipeline to the MCP server by wiring a new PrometheusExporter into the existing health check HTTP server, sourcing data from the AuthManager metrics collector, tool registry, service pool, and storage stats, and documenting the new /metrics endpoint and available metrics in the README.

Sequence diagram for Prometheus /metrics export flow

sequenceDiagram
  actor Prometheus
  participant HealthCheckServer
  participant PrometheusExporter
  participant MetricsCollector
  participant ToolRegistry
  participant LighthouseServiceFactory
  participant ILighthouseService

  Prometheus->>HealthCheckServer: HTTP GET /metrics
  HealthCheckServer->>HealthCheckServer: route /metrics
  alt metrics enabled
    HealthCheckServer->>PrometheusExporter: getMetrics()
    PrometheusExporter->>PrometheusExporter: updateMetrics()

    PrometheusExporter->>MetricsCollector: getMetrics()
    MetricsCollector-->>PrometheusExporter: auth metrics
    PrometheusExporter->>MetricsCollector: getCacheCounters()
    MetricsCollector-->>PrometheusExporter: cache counters

    PrometheusExporter->>MetricsCollector: getSecurityEvents()
    MetricsCollector-->>PrometheusExporter: security events

    PrometheusExporter->>ToolRegistry: getMetrics()
    ToolRegistry-->>PrometheusExporter: registry metrics
    loop per tool
      PrometheusExporter->>ToolRegistry: getToolStats(toolName)
      ToolRegistry-->>PrometheusExporter: tool stats
    end

    PrometheusExporter->>LighthouseServiceFactory: getStats()
    LighthouseServiceFactory-->>PrometheusExporter: service pool stats

    PrometheusExporter->>ILighthouseService: getStorageStats()
    ILighthouseService-->>PrometheusExporter: storage stats

    PrometheusExporter-->>HealthCheckServer: prometheusText = registry.metrics()
    HealthCheckServer->>PrometheusExporter: getContentType()
    PrometheusExporter-->>HealthCheckServer: contentType
    HealthCheckServer-->>Prometheus: 200 OK, text/plain, version=0.0.4
  else metrics disabled
    HealthCheckServer-->>Prometheus: 404 Metrics endpoint not enabled
  end

Updated class diagram for Prometheus metrics export pipeline

classDiagram
  class HealthCheckServer {
    - HealthCheckDependencies deps
    - HealthCheckConfig healthConfig
    - Logger logger
    - PrometheusExporter prometheusExporter
    - lastConnectivityCheck : up boolean, lastChecked number
    + constructor(healthConfig: HealthCheckConfig, deps: HealthCheckDependencies)
    + start() Promise~void~
    - handleRequest(req: http.IncomingMessage, res: http.ServerResponse) void
    - handleHealth(res: http.ServerResponse) Promise~void~
    - handleReady(res: http.ServerResponse) Promise~void~
    - handleMetrics(res: http.ServerResponse) Promise~void~
    - sendJSON(res: http.ServerResponse, statusCode: number, body: unknown) void
    - checkSDK() ReadinessCheck
  }

  class PrometheusExporter {
    - PrometheusExporterDependencies deps
    - client.Registry registry
    - client.Counter authTotal
    - client.Counter cacheHitsTotal
    - client.Counter cacheMissesTotal
    - client.Counter securityEventsTotal
    - client.Counter toolCallsTotal
    - client.Gauge cacheSize
    - client.Gauge cacheMaxSize
    - client.Gauge servicePoolSize
    - client.Gauge servicePoolMaxSize
    - client.Gauge storageFiles
    - client.Gauge storageBytes
    - client.Gauge storageMaxBytes
    - client.Gauge storageUtilization
    - client.Gauge uniqueApiKeys
    - client.Gauge toolsRegistered
    - client.Histogram requestDuration
    - client.Histogram authDuration
    - lastCacheCounters : hits number, misses number
    - lastAuthMetrics : authenticatedRequests number, failedAuthentications number, fallbackRequests number
    - lastSecurityEventCounts : Map~string, number~
    - lastToolCallCounts : Map~string, number~
    + constructor(deps: PrometheusExporterDependencies)
    - initializeMetrics() void
    - updateMetrics() void
    - updateAuthMetrics() void
    - updateCacheMetrics() void
    - updateSecurityMetrics() void
    - updateToolMetrics() void
    - updateServicePoolMetrics() void
    - updateStorageMetrics() void
    + getMetrics() Promise~string~
    + getContentType() string
    + reset() void
  }

  class PrometheusExporterDependencies {
    + MetricsCollector metricsCollector
    + ToolRegistry registry
    + LighthouseServiceFactory serviceFactory
    + ILighthouseService lighthouseService
  }

  class AuthManager {
    - AuthConfig config
    - KeyValidationCache cache
    - RateLimiter rateLimiter
    - MetricsCollector metricsCollector
    + constructor(config: AuthConfig)
    + authenticateRequest(req: IncomingMessage) Promise~AuthenticationResult~
    + getMetricsCollector() MetricsCollector
    + getCacheStats() CacheStats
    + getRateLimiterStatus(keyHash: string) RateLimiterStatus
    + destroy() void
  }

  class MetricsCollector {
    - cacheHits : number
    - cacheMisses : number
    + recordCacheAccess(hit: boolean) void
    + recordAuthentication(result: AuthenticationResult) void
    + getMetrics() AuthMetrics
    + getCacheCounters() hits number, misses number
    + getSecurityEvents() SecurityEvent[]
    + destroy() void
  }

  class ToolRegistry {
    + getMetrics() ToolRegistryMetrics
    + getToolStats(toolName: string) ToolStats
  }

  class LighthouseServiceFactory {
    + getStats() ServicePoolStats
  }

  class ILighthouseService {
    <<interface>>
    + getStorageStats() StorageStats
  }

  class HealthCheckConfig {
    + port : number
    + enabled : boolean
    + lighthouseApiUrl : string
    + connectivityCheckInterval : number
    + connectivityTimeout : number
    + metricsEnabled : boolean
  }

  class HealthCheckDependencies {
    + AuthManager authManager
    + ToolRegistry registry
    + LighthouseServiceFactory serviceFactory
    + ILighthouseService lighthouseService
    + Logger logger
  }

  HealthCheckServer --> HealthCheckConfig
  HealthCheckServer --> HealthCheckDependencies
  HealthCheckServer --> PrometheusExporter

  HealthCheckDependencies --> AuthManager
  HealthCheckDependencies --> ToolRegistry
  HealthCheckDependencies --> LighthouseServiceFactory
  HealthCheckDependencies --> ILighthouseService

  PrometheusExporter --> PrometheusExporterDependencies
  PrometheusExporterDependencies --> MetricsCollector
  PrometheusExporterDependencies --> ToolRegistry
  PrometheusExporterDependencies --> LighthouseServiceFactory
  PrometheusExporterDependencies --> ILighthouseService

  AuthManager --> MetricsCollector
  AuthManager --> KeyValidationCache
  AuthManager --> RateLimiter

  MetricsCollector --> AuthenticationResult
  MetricsCollector --> SecurityEvent

  ToolRegistry --> ToolRegistryMetrics
  ToolRegistry --> ToolStats

  LighthouseServiceFactory --> ServicePoolStats
  ILighthouseService --> StorageStats

File-Level Changes

Change | Details | Files
Expose a /metrics HTTP endpoint on the health check server that returns Prometheus-formatted metrics.
  • Update HealthCheckServer to describe /metrics in the class header comment
  • Inject a PrometheusExporter instance when metrics are enabled in the health check config
  • Add routing logic for /metrics to generate and return metrics with appropriate content type and cache headers
  • Return 404 when metrics are disabled and 500 on generation errors
apps/mcp-server/src/health/HealthCheckServer.ts
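The routing behavior described above (404 when disabled, 500 on generation errors, exporter-provided content type) can be sketched roughly as follows. This is a hedged sketch, not the actual HealthCheckServer code: `MetricsExporter` and the `respond` callback are simplified stand-ins for the real `PrometheusExporter` and `http.ServerResponse` handling.

```typescript
// Hypothetical sketch of the /metrics route described above; the real
// HealthCheckServer implementation may differ in detail.
interface MetricsExporter {
  getMetrics(): Promise<string>;
  getContentType(): string;
}

// `exporter` is undefined when metrics are disabled in the health config.
async function handleMetrics(
  exporter: MetricsExporter | undefined,
  respond: (status: number, contentType: string, body: string) => void
): Promise<void> {
  if (!exporter) {
    // Metrics disabled: behave as if the endpoint does not exist.
    respond(404, "text/plain", "Metrics endpoint not enabled");
    return;
  }
  try {
    const body = await exporter.getMetrics();
    respond(200, exporter.getContentType(), body);
  } catch {
    // Any failure during metric generation surfaces as a 500.
    respond(500, "text/plain", "Failed to generate metrics");
  }
}
```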
Extend AuthManager to collect and expose authentication, cache, and security metrics for Prometheus export.
  • Instantiate a MetricsCollector within AuthManager and store it as a private field
  • Record cache hits and misses around cache lookups
  • Record authentication outcomes and timings for both success and failure paths
  • Expose the MetricsCollector via a getter for external consumers such as PrometheusExporter
  • Ensure MetricsCollector is destroyed when AuthManager is destroyed
  • Add a method to MetricsCollector to expose raw cache hit/miss counters used by the exporter
apps/mcp-server/src/auth/AuthManager.ts
apps/mcp-server/src/auth/MetricsCollector.ts
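The raw cache counters the exporter consumes can be pictured with a minimal sketch. Method names (`recordCacheAccess`, `getCacheCounters`) follow the class diagram above; the bodies are an assumption, not the PR's actual code.

```typescript
// Minimal sketch of the cumulative cache counters described above.
class MetricsCollectorSketch {
  private cacheHits = 0;
  private cacheMisses = 0;

  // Called around each cache lookup in the auth path.
  recordCacheAccess(hit: boolean): void {
    if (hit) this.cacheHits++;
    else this.cacheMisses++;
  }

  // Raw cumulative counters consumed by the Prometheus exporter.
  getCacheCounters(): { hits: number; misses: number } {
    return { hits: this.cacheHits, misses: this.cacheMisses };
  }
}
```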
Introduce a PrometheusExporter that builds a dedicated prom-client registry and maps internal metrics into Prometheus counters, gauges, and histograms.
  • Create PrometheusExporter with dependencies on MetricsCollector, ToolRegistry, LighthouseServiceFactory, and ILighthouseService
  • Initialize custom counters, gauges, and histograms for auth, cache, tools, security events, storage, service pool, and process-related metrics using prom-client
  • Implement update routines that translate internal metrics into Prometheus metrics, carefully handling deltas for counters to keep them monotonic
  • Expose methods to render metrics in Prometheus text format and to return the correct content type
  • Add a reset helper for tests and export PrometheusExporter types from the health index
apps/mcp-server/src/health/PrometheusExporter.ts
apps/mcp-server/src/health/index.ts
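The delta handling mentioned above, keeping exported counters monotonic while the internal counters are read as snapshots on each scrape, can be reduced to a small pure function. `CounterLike` mirrors the shape of a prom-client Counter's `inc`, but this is an illustrative sketch rather than the PR's implementation.

```typescript
// Prometheus counters must only ever increase, so the exporter tracks the
// last snapshot it saw and increments by the positive difference on scrape.
interface CounterLike {
  inc(delta: number): void;
}

function applyCounterDelta(
  counter: CounterLike,
  current: number,
  last: number
): number {
  const delta = current - last;
  if (delta > 0) {
    counter.inc(delta);
  }
  // Negative deltas (e.g. after an in-process reset) are ignored to keep
  // the exported counter monotonic; return the new snapshot either way.
  return current;
}
```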
Make the metrics endpoint configurable via server configuration and document its usage and available metrics.
  • Extend HealthCheckConfig with an optional metricsEnabled flag
  • Default metricsEnabled to true unless PROMETHEUS_METRICS_ENABLED is explicitly set to "false" in the environment
  • Document the /metrics endpoint, configuration environment variables, and available metric families and examples in the MCP server README, including Prometheus and Grafana integration instructions
apps/mcp-server/src/health/types.ts
apps/mcp-server/src/config/server-config.ts
apps/mcp-server/README.md
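The documented default, metrics enabled unless PROMETHEUS_METRICS_ENABLED is explicitly "false", boils down to a one-line check. The exact parsing in server-config.ts may differ; this sketch just illustrates the described behavior (including the gotcha that a typo still enables metrics, which the review below flags).

```typescript
// Sketch of the documented default: enabled unless the variable is
// explicitly the string "false". Any other value (including typos)
// leaves metrics enabled.
function resolveMetricsEnabled(
  env: Record<string, string | undefined>
): boolean {
  return env["PROMETHEUS_METRICS_ENABLED"] !== "false";
}
```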
Add prom-client as a runtime dependency to support Prometheus metric collection.
  • Declare prom-client in the MCP server package.json dependencies
  • Update the lockfile to reflect the new dependency
apps/mcp-server/package.json
pnpm-lock.yaml


@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • In PrometheusExporter.updateCacheMetrics, you're only exporting hit/miss counters and leaving a TODO-style comment about cache size/max size — consider wiring this up to AuthManager.getCacheStats() (or similar) so lighthouse_cache_size and lighthouse_cache_max_size reflect real values instead of being omitted.
  • For authDuration and requestDuration histograms you're currently observing averages (per scrape / per tool) rather than individual request latencies, which can produce misleading distributions; if possible, move the histogram instrumentation closer to the actual auth and tool execution paths to record per-request observations.
  • The metricsEnabled flag is parsed as process.env.PROMETHEUS_METRICS_ENABLED !== "false", which means any non-"false" value (including typos) enables metrics; consider using a stricter boolean parser (e.g. only "true" enables) to avoid surprising configuration behavior.
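One way to implement the stricter parsing suggested in the last point might look like this. It is a sketch of the reviewer's suggestion, not code from the PR: only the literal strings "true"/"1" enable and "false"/"0" disable, and anything else falls back to an explicit default with a warning.

```typescript
// Strict boolean env parser: unrecognized values (including typos like
// "flase") fall back to the default instead of silently enabling.
function parseBooleanEnv(
  value: string | undefined,
  defaultValue: boolean,
  warn: (msg: string) => void = console.warn
): boolean {
  if (value === undefined || value === "") return defaultValue;
  const normalized = value.trim().toLowerCase();
  if (normalized === "true" || normalized === "1") return true;
  if (normalized === "false" || normalized === "0") return false;
  warn(`Unrecognized boolean value "${value}", using default ${defaultValue}`);
  return defaultValue;
}
```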
## Individual Comments

### Comment 1
<location> `apps/mcp-server/src/health/PrometheusExporter.ts:267-276` </location>
<code_context>
+    // For now, we derive from the metrics collector's data
+  }
+
+  private updateSecurityMetrics(): void {
+    const events = this.deps.metricsCollector.getSecurityEvents();
+
+    // Count events by type
+    const eventCounts: Map<string, number> = new Map();
+    for (const eventType of Object.values(SecurityEventType)) {
+      eventCounts.set(eventType, 0);
+    }
+
+    for (const event of events) {
+      const current = eventCounts.get(event.type) || 0;
+      eventCounts.set(event.type, current + 1);
+    }
+
+    // Calculate deltas and increment counters
+    for (const [type, count] of eventCounts.entries()) {
+      const lastCount = this.lastSecurityEventCounts.get(type) || 0;
+      const delta = count - lastCount;
+      if (delta > 0) {
+        this.securityEventsTotal.labels(type).inc(delta);
+      }
+      this.lastSecurityEventCounts.set(type, count);
+    }
+  }
</code_context>

<issue_to_address>
**issue (bug_risk):** Security event counters may stall or undercount if `getSecurityEvents()` uses a sliding time window.

This logic assumes `getSecurityEvents()` is cumulative. If it’s backed by a sliding window or otherwise bounded, counts can decrease as older events expire, making `delta` negative and ignored so `lighthouse_security_events_total` stops reflecting the true total. To keep this counter monotonic, either expose cumulative counts from `MetricsCollector` or increment the counter at event time instead of deriving it from a windowed collection on scrape.
</issue_to_address>
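The event-time alternative this comment suggests can be sketched as a cumulative per-type tally that only grows, independent of whatever sliding window backs the event list itself. Class and method names here are illustrative assumptions, not the project's API.

```typescript
// Sketch of the event-time approach: increment cumulative totals when
// the security event is recorded, not by diffing a windowed collection
// on each scrape.
class SecurityEventTotals {
  private totals = new Map<string, number>();

  // Call wherever the security event is recorded.
  record(eventType: string): void {
    this.totals.set(eventType, (this.totals.get(eventType) ?? 0) + 1);
  }

  // Cumulative count for one event type; safe to export as a
  // monotonic Prometheus counter.
  total(eventType: string): number {
    return this.totals.get(eventType) ?? 0;
  }
}
```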

### Comment 2
<location> `apps/mcp-server/src/health/PrometheusExporter.ts:246-255` </location>
<code_context>
+    }
+  }
+
+  private updateCacheMetrics(): void {
+    const cacheCounters = this.deps.metricsCollector.getCacheCounters();
+
+    // Calculate deltas
+    const hitsDelta = cacheCounters.hits - this.lastCacheCounters.hits;
+    const missesDelta = cacheCounters.misses - this.lastCacheCounters.misses;
+
+    if (hitsDelta > 0) {
+      this.cacheHitsTotal.inc(hitsDelta);
+    }
+    if (missesDelta > 0) {
+      this.cacheMissesTotal.inc(missesDelta);
+    }
+
+    // Update last known values
+    this.lastCacheCounters = { ...cacheCounters };
+
+    // Note: Cache size/maxSize would need to come from AuthManager.getCacheStats()
+    // For now, we derive from the metrics collector's data
+  }
</code_context>

<issue_to_address>
**issue (bug_risk):** Cache size and max-size gauges are defined but never updated.

`cacheSize` and `cacheMaxSize` are registered in `initializeMetrics` but never set here, so Prometheus will always see zero/stale values. If cache size/max size are available (e.g. via `AuthManager.getCacheStats()`), update the gauges in this method; otherwise consider removing these metrics to avoid exporting misleading data.
</issue_to_address>
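Wiring the gauges as this comment suggests is straightforward once live stats are available. `GaugeLike` mirrors the prom-client Gauge `set` method, and `CacheStats` is an assumed shape for what `AuthManager.getCacheStats()` might return; both are illustrative, not the project's actual types.

```typescript
// Sketch of updating the cache gauges from live stats on each scrape.
interface GaugeLike {
  set(value: number): void;
}

interface CacheStats {
  size: number;
  maxSize: number;
}

function updateCacheSizeGauges(
  stats: CacheStats,
  cacheSize: GaugeLike,
  cacheMaxSize: GaugeLike
): void {
  // Gauges may go up or down, so the current values are set directly
  // (no delta bookkeeping needed, unlike counters).
  cacheSize.set(stats.size);
  cacheMaxSize.set(stats.maxSize);
}
```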



Patrick-Ehimen merged commit cd6ef2a into Patrick-Ehimen:main on Feb 21, 2026
4 checks passed