feat: add local HTTP API for usage data #319
Expose cached plugin probe results via a read-only HTTP server on 127.0.0.1:6736 so other local apps can consume OpenUsage data. Routes: GET /v1/usage (enabled providers), GET /v1/usage/:id (single provider), OPTIONS for CORS. Only successful cached snapshots are served; failed probes never overwrite prior cache. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 Augment PR Summary
Summary: Adds a local, read-only HTTP server on 127.0.0.1:6736.
Technical Notes: API returns 200/204/404/405 as appropriate, includes permissive CORS headers, and disables itself for the session if the port can’t be bound.
```rust
@@ -0,0 +1,536 @@
use crate::plugin_engine::runtime::{MetricLine, PluginOutput};
```
This module is ~537 LOC; repo guardrail prefers keeping files <~400 LOC (split/refactor as needed) (Rule: AGENTS.md). Consider splitting cache/settings vs HTTP handling to keep changes easier to review and test.
Severity: low
src-tauri/src/local_http_api.rs (Outdated)
```rust
let (settings_order, disabled) = read_plugin_settings(&state.app_data_dir);

// If settings are present, use them; otherwise apply defaults.
let has_settings = !settings_order.is_empty();
```
has_settings is based only on settings_order, so a settings file with disabled populated but empty order would fall back to DEFAULT_ENABLED_PLUGINS and ignore the user’s disabled list. That seems likely to return providers the user explicitly disabled.
Severity: medium
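One way to address this, per the reviewer's suggestion, is to treat settings as present when either list is non-empty. A minimal sketch (the plugin ids and `DEFAULT_ENABLED_PLUGINS` values here are illustrative stand-ins, not taken from the PR):

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the PR's DEFAULT_ENABLED_PLUGINS constant.
const DEFAULT_ENABLED_PLUGINS: &[&str] = &["claude-code", "codex"];

/// Treat settings as present when EITHER the order list or the disabled set
/// is non-empty, so a settings file that only populates `disabled` is honored.
fn is_enabled(id: &str, settings_order: &[String], disabled: &HashSet<String>) -> bool {
    let has_settings = !settings_order.is_empty() || !disabled.is_empty();
    if has_settings {
        !disabled.contains(id)
    } else {
        DEFAULT_ENABLED_PLUGINS.contains(&id)
    }
}

fn main() {
    // Empty order but a populated disabled list: the disabled entry must win.
    let disabled: HashSet<String> = ["codex".to_string()].into_iter().collect();
    assert!(!is_enabled("codex", &[], &disabled));
    // No settings at all: fall back to the defaults.
    assert!(is_enabled("claude-code", &[], &HashSet::new()));
}
```

Returning an explicit `has_settings` flag from `read_plugin_settings` (as the later commits do) achieves the same thing without recomputing it at each call site.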
src-tauri/src/local_http_api.rs (Outdated)
```rust
    }
};

for stream in listener.incoming() {
```
The accept loop calls handle_connection(stream) inline, so one client that connects and then stalls (blocking read) can prevent subsequent requests from being served. This can make the local API unreliable under slow/misbehaving clients.
Severity: medium
src-tauri/src/local_http_api.rs (Outdated)
```rust
// ---------------------------------------------------------------------------

const CORS_HEADERS: &str = "\
Access-Control-Allow-Origin: *\r\n\
```
With Access-Control-Allow-Origin: *, any website opened in the user’s browser can fetch http://127.0.0.1:6736 and read usage data because CORS allows reading the response. If that’s not intended, it may need tighter origin rules or an explicit opt-in gate for the API.
Severity: medium
src-tauri/src/local_http_api.rs (Outdated)
```rust
#[test]
fn load_cache_returns_empty_on_missing_file() {
    let dir = std::env::temp_dir().join("openusage-test-no-cache");
```
This test uses a fixed temp path (openusage-test-no-cache), which could already exist from a previous run and contain a cache file, making the test flaky. The other tests use a unique directory name which avoids this risk.
Severity: low
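A sketch of the uniqueness pattern the other tests use, so a leftover directory from a previous run can never collide (helper name and prefix are illustrative):

```rust
use std::path::PathBuf;
use std::sync::atomic::{AtomicU64, Ordering};

static COUNTER: AtomicU64 = AtomicU64::new(0);

/// Derive a temp dir name that is unique per process AND per call, so a
/// leftover directory from a previous run (or a parallel test) can't collide.
fn unique_temp_dir(prefix: &str) -> PathBuf {
    let n = COUNTER.fetch_add(1, Ordering::Relaxed);
    std::env::temp_dir().join(format!("{prefix}-{}-{n}", std::process::id()))
}

fn main() {
    let a = unique_temp_dir("openusage-test-no-cache");
    let b = unique_temp_dir("openusage-test-no-cache");
    assert_ne!(a, b); // successive calls never reuse a path
}
```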
src-tauri/src/local_http_api.rs (Outdated)
```rust
fn route_unknown_provider_returns_404() {
    // Initialize cache state with known plugins
    {
        let mut state = cache_state().lock().unwrap();
```
These tests mutate the global cache_state() singleton, and Rust runs tests in parallel by default, so different tests can race via shared state (e.g., snapshots insert/clear). This can lead to intermittent failures depending on execution order.
Severity: medium
Other locations: src-tauri/src/local_http_api.rs:465, src-tauri/src/local_http_api.rs:477
src-tauri/src/local_http_api.rs (Outdated)
```rust
#[test]
fn route_trailing_slash_tolerated() {
    let resp = route("GET", "/v1/usage");
    let resp_slash = route("GET", "/v1/usage");
```
This is meant to assert trailing-slash tolerance, but resp_slash calls route("GET", "/v1/usage") again rather than "/v1/usage/", so it doesn’t actually cover the behavior described. As written it’s a duplicate of the first assertion.
Severity: low
1 issue found across 5 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/local_http_api.rs">
<violation number="1" location="src-tauri/src/local_http_api.rs:311">
P2: `Access-Control-Allow-Origin: *` lets any website the user visits read their usage data from the local API. Since this endpoint is meant for local apps (not arbitrary web pages), consider omitting CORS headers entirely, or restricting the allowed origin to specific known local origins (e.g., `tauri://localhost`).</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
src-tauri/src/local_http_api.rs (Outdated)
```rust
// ---------------------------------------------------------------------------

const CORS_HEADERS: &str = "\
Access-Control-Allow-Origin: *\r\n\
```
P2: Access-Control-Allow-Origin: * lets any website the user visits read their usage data from the local API. Since this endpoint is meant for local apps (not arbitrary web pages), consider omitting CORS headers entirely, or restricting the allowed origin to specific known local origins (e.g., tauri://localhost).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/local_http_api.rs, line 311:
<comment>`Access-Control-Allow-Origin: *` lets any website the user visits read their usage data from the local API. Since this endpoint is meant for local apps (not arbitrary web pages), consider omitting CORS headers entirely, or restricting the allowed origin to specific known local origins (e.g., `tauri://localhost`).</comment>
<file context>
@@ -0,0 +1,536 @@
+// ---------------------------------------------------------------------------
+
+const CORS_HEADERS: &str = "\
+Access-Control-Allow-Origin: *\r\n\
+Access-Control-Allow-Methods: GET, OPTIONS\r\n\
+Access-Control-Allow-Headers: Content-Type";
</file context>
that’s intended for this project
Thanks for the feedback! I've saved this as a new learning to improve future reviews.
Pull request overview
Adds a local, read-only HTTP server in the Tauri backend to expose cached plugin probe (usage) snapshots as JSON over 127.0.0.1:6736, along with documentation so other local apps can consume OpenUsage’s usage data.
Changes:
- Add src-tauri/src/local_http_api.rs implementing a TCP/HTTP server, CORS handling, and a disk-backed JSON cache for successful probe snapshots.
- Persist successful probe outputs into the new cache and initialize/start the server during app setup.
- Make plugin output metric types Deserialize-able and document the API in docs/local-http-api.md (linked from README).
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| src-tauri/src/plugin_engine/runtime.rs | Adds Deserialize derives so probe output types can round-trip through the cache file. |
| src-tauri/src/local_http_api.rs | New local HTTP API server + cache load/save, settings-based ordering/filtering, and unit tests. |
| src-tauri/src/lib.rs | Hooks caching into successful probe results; initializes and starts the local HTTP server at startup. |
| docs/local-http-api.md | Documents endpoints, response shape, caching behavior, and CORS. |
| README.md | Links to the new Local HTTP API documentation. |
src-tauri/src/local_http_api.rs (Outdated)
```rust
fn response_json(status: u16, reason: &str, body: &str) -> String {
    format!(
        "HTTP/1.1 {} {}\r\nContent-Type: application/json; charset=utf-8\r\n{}\r\nContent-Length: {}\r\n\r\n{}",
        status,
        reason,
        CORS_HEADERS,
        body.len(),
        body,
    )
}
```
Responses are HTTP/1.1 but don’t include a Connection: close header, even though the server only supports a single request per connection. Some clients may try to reuse the connection by default. Consider explicitly sending Connection: close (and possibly Content-Length for 204 responses) to make the server’s connection semantics unambiguous.
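A minimal sketch of the suggested fix; the `CORS_HEADERS` value below is a stand-in for the module's constant, flattened for readability:

```rust
// Stand-in for the module's CORS_HEADERS constant.
const CORS_HEADERS: &str = "Access-Control-Allow-Origin: *\r\n\
Access-Control-Allow-Methods: GET, OPTIONS\r\n\
Access-Control-Allow-Headers: Content-Type";

/// Same shape as the PR's response_json, plus an explicit `Connection: close`
/// so clients don't attempt keep-alive against a one-request-per-connection server.
fn response_json(status: u16, reason: &str, body: &str) -> String {
    format!(
        "HTTP/1.1 {status} {reason}\r\nContent-Type: application/json; charset=utf-8\r\n{CORS_HEADERS}\r\nConnection: close\r\nContent-Length: {}\r\n\r\n{body}",
        body.len()
    )
}

fn main() {
    let resp = response_json(200, "OK", "{}");
    assert!(resp.contains("\r\nConnection: close\r\n"));
    assert!(resp.ends_with("\r\n\r\n{}"));
}
```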
src-tauri/src/local_http_api.rs (Outdated)
```rust
// If settings are present, use them; otherwise apply defaults.
let has_settings = !settings_order.is_empty();

let default_enabled: HashSet<&str> = DEFAULT_ENABLED_PLUGINS.iter().copied().collect();

let is_enabled = |id: &str| -> bool {
    if has_settings {
        !disabled.contains(id)
    } else {
        default_enabled.contains(id)
    }
```
enabled_snapshots_ordered decides has_settings based only on settings_order being non-empty, which means a settings file that only contains plugins.disabled (or an empty order) will be treated as “no settings” and the disabled set will be ignored in favor of DEFAULT_ENABLED_PLUGINS. Consider treating settings as present if either order or disabled is present (or returning an explicit has_settings flag from read_plugin_settings) so disabled plugins are respected even when order is empty/missing.
src-tauri/src/local_http_api.rs (Outdated)
```rust
for stream in listener.incoming() {
    match stream {
        Ok(stream) => handle_connection(stream),
        Err(e) => log::debug!("local HTTP API accept error: {}", e),
    }
```
The server handles connections sequentially on a single thread (handle_connection(stream) is called inline inside the incoming() loop). A client that connects and stalls (or sends a very slow request) will block the entire API from serving other clients. Consider spawning a per-connection handler (or using a small threadpool) and/or setting read timeouts so one misbehaving client can’t DoS the local API.
src-tauri/src/local_http_api.rs (Outdated)
```rust
// ---------------------------------------------------------------------------

const CORS_HEADERS: &str = "\
Access-Control-Allow-Origin: *\r\n\
```
CORS is set to Access-Control-Allow-Origin: *, which allows any website running in the user’s browser to read this localhost API (because you’re explicitly granting cross-origin access). If this data is considered sensitive/private, consider restricting allowed origins (e.g., none by default, or an allowlist) or gating the server behind a user setting/explicit opt-in.
Suggested change:
```diff
-Access-Control-Allow-Origin: *\r\n\
+Access-Control-Allow-Origin: http://127.0.0.1\r\n\
```
src-tauri/src/local_http_api.rs (Outdated)
```rust
fn route_trailing_slash_tolerated() {
    let resp = route("GET", "/v1/usage");
    let resp_slash = route("GET", "/v1/usage");
    // Both should be 200 (the trailing slash is stripped in handle_connection,
    // but route itself receives the cleaned path)
    assert!(resp.starts_with("HTTP/1.1 200"));
    assert!(resp_slash.starts_with("HTTP/1.1 200"));
```
route_trailing_slash_tolerated doesn’t actually test a trailing slash: both calls use the same path string. Also, route itself does not normalize/truncate trailing slashes, so changing the test to "/v1/usage/" would currently fail even though handle_connection trims. Consider either (a) moving the path normalization into route so it’s testable, or (b) adjusting/removing this test to reflect what’s actually being exercised.
Suggested change:
```diff
-fn route_trailing_slash_tolerated() {
-    let resp = route("GET", "/v1/usage");
-    let resp_slash = route("GET", "/v1/usage");
-    // Both should be 200 (the trailing slash is stripped in handle_connection,
-    // but route itself receives the cleaned path)
-    assert!(resp.starts_with("HTTP/1.1 200"));
-    assert!(resp_slash.starts_with("HTTP/1.1 200"));
+fn route_usage_returns_200() {
+    let resp = route("GET", "/v1/usage");
+    assert!(resp.starts_with("HTTP/1.1 200"));
```
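Option (a) from the review, normalizing the path where it is unit-testable, could look like this hypothetical helper (the name `normalize_path` is not from the PR):

```rust
/// Strip trailing slashes ("/v1/usage/" -> "/v1/usage") while leaving the
/// bare root "/" intact, so trailing-slash tolerance is testable in `route`
/// itself rather than only in `handle_connection`.
fn normalize_path(path: &str) -> &str {
    let trimmed = path.trim_end_matches('/');
    if trimmed.is_empty() { "/" } else { trimmed }
}

fn main() {
    assert_eq!(normalize_path("/v1/usage/"), "/v1/usage");
    assert_eq!(normalize_path("/v1/usage"), "/v1/usage");
    assert_eq!(normalize_path("/"), "/");
}
```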
src-tauri/src/local_http_api.rs (Outdated)
```rust
fn handle_connection(mut stream: TcpStream) {
    // Read request (up to 4 KB is plenty for a request line + headers)
    let mut buf = [0u8; 4096];
    let n = match stream.read(&mut buf) {
        Ok(n) => n,
        Err(_) => return,
    };
```
handle_connection does a blocking stream.read(...) with no timeouts. If a client connects but never sends a full request line, this call can block indefinitely (and currently blocks the only accept loop thread). Consider setting set_read_timeout/set_write_timeout (and possibly TCP_NODELAY) before reading/writing, and returning a 408/closing on timeout.
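The timeout part of this suggestion is a two-line change; a sketch, assuming a 5-second budget is acceptable (the duration and helper name are illustrative):

```rust
use std::net::{TcpListener, TcpStream};
use std::time::Duration;

/// Apply per-connection timeouts before any read/write, so a client that
/// connects and then stalls cannot pin the handler forever. (Sketch; the
/// 5-second value is an arbitrary choice, not from the PR.)
fn configure_stream(stream: &TcpStream) -> std::io::Result<()> {
    stream.set_read_timeout(Some(Duration::from_secs(5)))?;
    stream.set_write_timeout(Some(Duration::from_secs(5)))?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Ephemeral loopback listener just to obtain a real TcpStream to configure.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let stream = TcpStream::connect(listener.local_addr()?)?;
    configure_stream(&stream)?;
    assert_eq!(stream.read_timeout()?, Some(Duration::from_secs(5)));
    Ok(())
}
```

With timeouts set, `stream.read` returns an error (`WouldBlock` or `TimedOut`, platform-dependent) instead of blocking forever, and the existing `Err(_) => return` arm already closes the connection.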
```rust
let path = app_data_dir.join(CACHE_FILE_NAME);
match serde_json::to_string(&file) {
    Ok(json) => {
        if let Err(e) = std::fs::write(&path, json) {
            log::warn!("failed to write usage-api-cache.json: {}", e);
        }
    }
```
save_cache writes directly to the final usage-api-cache.json path. If the process crashes mid-write (or the disk fills), the cache file can become truncated/corrupted and subsequent startups will lose the cache. Consider an atomic write strategy (write to a temp file in the same dir, fsync, then rename) to make cache persistence robust.
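The write-to-temp-then-rename strategy the comment describes can be sketched as follows (function and file names are illustrative):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

/// Write-to-temp-then-rename: readers either see the old cache or the new
/// one, never a truncated file. (Sketch; rename is atomic only when the temp
/// file lives on the same filesystem as the destination, which holds here
/// because both are in `dir`.)
fn save_cache_atomic(dir: &Path, file_name: &str, json: &str) -> std::io::Result<()> {
    let tmp = dir.join(format!("{file_name}.tmp"));
    let mut f = fs::File::create(&tmp)?;
    f.write_all(json.as_bytes())?;
    f.sync_all()?; // flush bytes to disk before the rename publishes them
    fs::rename(&tmp, dir.join(file_name))
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir();
    save_cache_atomic(&dir, "usage-api-cache-demo.json", r#"{"snapshots":{}}"#)?;
    assert_eq!(
        fs::read_to_string(dir.join("usage-api-cache-demo.json"))?,
        r#"{"snapshots":{}}"#
    );
    Ok(())
}
```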
```rust
let mut state = cache_state().lock().expect("cache state poisoned");
state
    .snapshots
    .insert(output.provider_id.clone(), snapshot);
save_cache(&state.app_data_dir, &state.snapshots);
```
cache_successful_output holds the global mutex while doing disk I/O via save_cache. This can block HTTP reads (and other cache updates) on slower disks. Consider updating snapshots under the lock, cloning the map (or just the updated snapshot set) and dropping the lock before calling save_cache.
Suggested change:
```diff
-let mut state = cache_state().lock().expect("cache state poisoned");
-state
-    .snapshots
-    .insert(output.provider_id.clone(), snapshot);
-save_cache(&state.app_data_dir, &state.snapshots);
+// Update in-memory state under the mutex, but perform disk I/O after releasing it.
+let (app_data_dir, snapshots) = {
+    let mut state = cache_state().lock().expect("cache state poisoned");
+    state
+        .snapshots
+        .insert(output.provider_id.clone(), snapshot);
+    (state.app_data_dir.clone(), state.snapshots.clone())
+};
+save_cache(&app_data_dir, &snapshots);
```
src-tauri/src/local_http_api.rs (Outdated)
```rust
let state = cache_state().lock().expect("cache state poisoned");
let snapshots = enabled_snapshots_ordered(&state);
```
handle_get_usage_collection holds the cache mutex while enabled_snapshots_ordered reads settings.json from disk. Disk reads under the lock can block concurrent cache updates and other requests. Consider reading settings outside the lock (or caching settings in memory) by copying the needed state (app_data_dir/known_plugin_ids/snapshots) and then releasing the mutex before doing file I/O and sorting/filtering.
Suggested change:
```diff
-let state = cache_state().lock().expect("cache state poisoned");
-let snapshots = enabled_snapshots_ordered(&state);
+// Limit the time we hold the cache mutex: compute snapshots under the lock,
+// then serialize and build the response after the lock is released.
+let snapshots = {
+    let state = cache_state().lock().expect("cache state poisoned");
+    enabled_snapshots_ordered(&state)
+};
```
- Add 5s read timeout + per-connection thread to prevent client stalling
- Fix has_settings logic to respect disabled list even with empty order
- Release mutex before disk I/O in cache writes and collection reads
- Add Connection: close header to all responses
- Atomic cache writes via temp file + rename
- Fix flaky test temp path and add #[serial] for shared-state tests
- Remove broken trailing-slash test (was duplicate)
- Split local_http_api.rs into cache.rs + server.rs (< 400 LOC each)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2 issues found across 5 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/local_http_api/server.rs">
<violation number="1" location="src-tauri/src/local_http_api/server.rs:31">
P1: Unbounded thread-per-connection handling can be abused to exhaust resources; avoid spawning a new thread for every incoming socket.</violation>
<violation number="2" location="src-tauri/src/local_http_api/server.rs:44">
P2: Single-read request parsing is not TCP-safe; read until header terminator (`\r\n\r\n`) or buffer limit before routing.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```rust
for stream in listener.incoming() {
    match stream {
        Ok(stream) => {
            std::thread::spawn(move || handle_connection(stream));
```
P1: Unbounded thread-per-connection handling can be abused to exhaust resources; avoid spawning a new thread for every incoming socket.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/local_http_api/server.rs, line 31:
<comment>Unbounded thread-per-connection handling can be abused to exhaust resources; avoid spawning a new thread for every incoming socket.</comment>
<file context>
@@ -0,0 +1,260 @@
+ for stream in listener.incoming() {
+ match stream {
+ Ok(stream) => {
+ std::thread::spawn(move || handle_connection(stream));
+ }
+ Err(e) => log::debug!("local HTTP API accept error: {}", e),
</file context>
```rust

// Read request (up to 4 KB is plenty for a request line + headers)
let mut buf = [0u8; 4096];
let n = match stream.read(&mut buf) {
```
P2: Single-read request parsing is not TCP-safe; read until header terminator (\r\n\r\n) or buffer limit before routing.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/local_http_api/server.rs, line 44:
<comment>Single-read request parsing is not TCP-safe; read until header terminator (`\r\n\r\n`) or buffer limit before routing.</comment>
<file context>
@@ -0,0 +1,260 @@
+
+ // Read request (up to 4 KB is plenty for a request line + headers)
+ let mut buf = [0u8; 4096];
+ let n = match stream.read(&mut buf) {
+ Ok(n) => n,
+ Err(_) => return,
</file context>
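The read-until-terminator loop this comment asks for can be sketched generically over `Read`, which also makes it testable with an in-memory reader (the helper name and cap are illustrative):

```rust
use std::io::Read;

/// Accumulate reads until the header terminator (or a size cap) instead of
/// assuming a single read() returns the whole request head. Generic over
/// Read so it can be exercised without a real socket.
fn read_request_head<R: Read>(stream: &mut R, cap: usize) -> std::io::Result<Vec<u8>> {
    let mut head = Vec::new();
    let mut chunk = [0u8; 512];
    loop {
        let n = stream.read(&mut chunk)?;
        if n == 0 {
            break; // peer closed before completing the head
        }
        head.extend_from_slice(&chunk[..n]);
        if head.windows(4).any(|w| w == b"\r\n\r\n") || head.len() >= cap {
            break;
        }
    }
    Ok(head)
}

fn main() {
    // &[u8] implements Read, so a byte slice stands in for the TcpStream.
    let mut req: &[u8] = b"GET /v1/usage HTTP/1.1\r\nHost: 127.0.0.1:6736\r\n\r\n";
    let head = read_request_head(&mut req, 4096).unwrap();
    assert!(head.ends_with(b"\r\n\r\n"));
}
```

Combined with a read timeout, this bounds both how long and how much the server will read before routing.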
```rust
const CORS_HEADERS: &str = "\
Access-Control-Allow-Origin: *\r\n\
Access-Control-Allow-Methods: GET, OPTIONS\r\n\
Access-Control-Allow-Headers: Content-Type";
```
Wildcard CORS exposes usage data to any website
Medium Severity
Access-Control-Allow-Origin: * on the localhost server allows any website the user visits to silently fetch('http://127.0.0.1:6736/v1/usage') and read the full response — including provider IDs, subscription plan tiers, and usage amounts. CORS is a browser-only mechanism; the stated goal of serving "other local apps" doesn't require CORS at all, since native apps bypass it entirely. The wildcard origin extends access well beyond the intended audience.
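If CORS headers are kept at all, one tightening is an origin allowlist: echo the `Origin` back only when it matches, and send no CORS headers otherwise, so browsers block cross-origin reads. A sketch (the allowlist entries and helper name are illustrative, not taken from the PR):

```rust
/// Echo the Origin back only when it is on a small allowlist; return None
/// (emit no CORS headers) for everything else, including absent Origin.
fn cors_header_for(origin: Option<&str>) -> Option<String> {
    const ALLOWED_ORIGINS: &[&str] = &["tauri://localhost", "http://localhost:1420"];
    origin
        .filter(|o| ALLOWED_ORIGINS.contains(o))
        .map(|o| format!("Access-Control-Allow-Origin: {o}\r\nVary: Origin"))
}

fn main() {
    assert!(cors_header_for(Some("https://evil.example")).is_none());
    assert!(cors_header_for(None).is_none());
    assert!(cors_header_for(Some("tauri://localhost"))
        .unwrap()
        .starts_with("Access-Control-Allow-Origin: tauri://localhost"));
}
```

`Vary: Origin` matters once the header value depends on the request, so caches never serve one origin's grant to another.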
- Keep mutex held during save_cache to prevent concurrent writers from racing on the shared temp file path
- Merge identical response_cors_preflight into response_no_content

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
```rust

/// Build the ordered list of enabled cached snapshots for GET /v1/usage.
pub(super) fn enabled_snapshots_ordered(state: &CacheState) -> Vec<CachedPluginSnapshot> {
    let (settings_order, disabled, has_settings) = read_plugin_settings(&state.app_data_dir);
```
Settings file I/O performed while holding cache mutex
Low Severity
enabled_snapshots_ordered calls read_plugin_settings, which performs synchronous file I/O (std::fs::read_to_string on settings.json), while the cache_state() mutex is held. This blocks cache_successful_output (called from probe spawn_blocking threads) and all concurrent HTTP handlers for the duration of that disk read. The settings read is independent of the cached snapshot data — only app_data_dir is needed to locate the file — so it could be performed before acquiring the lock, keeping the critical section limited to in-memory operations.
Additional Locations (1)
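The general shape of the fix Bugbot describes, clone what the I/O needs inside a short critical section and release the guard before touching the disk, can be sketched independently of the PR's types (the `Vec<String>` state and helper name are illustrative):

```rust
use std::sync::Mutex;

/// Clone the data the I/O needs inside a short critical section; the guard
/// drops at the end of the inner block, before any file access would run.
fn snapshot_for_io(state: &Mutex<Vec<String>>) -> Vec<String> {
    let copy = {
        let guard = state.lock().expect("poisoned");
        guard.clone()
    }; // guard dropped here
    copy
}

fn main() {
    let state = Mutex::new(vec!["provider-a".to_string()]);
    let copy = snapshot_for_io(&state);
    assert_eq!(copy, vec!["provider-a".to_string()]);
    // The lock was released, so it can be taken again immediately.
    assert!(state.lock().is_ok());
}
```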
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="src-tauri/src/local_http_api/cache.rs">
<violation number="1" location="src-tauri/src/local_http_api/cache.rs:122">
P2: Avoid holding the cache_state mutex during `save_cache` I/O; it can block other cache readers/writers unnecessarily.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```rust
    fetched_at,
};

let mut state = cache_state().lock().expect("cache state poisoned");
```
P2: Avoid holding the cache_state mutex during save_cache I/O; it can block other cache readers/writers unnecessarily.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At src-tauri/src/local_http_api/cache.rs, line 122:
<comment>Avoid holding the cache_state mutex during `save_cache` I/O; it can block other cache readers/writers unnecessarily.</comment>
<file context>
@@ -119,14 +119,11 @@ pub fn cache_successful_output(output: &PluginOutput) {
- (state.app_data_dir.clone(), state.snapshots.clone())
- };
- save_cache(&app_data_dir, &snapshots);
+ let mut state = cache_state().lock().expect("cache state poisoned");
+ state
+ .snapshots
</file context>


Description
Adds a read-only HTTP server on 127.0.0.1:6736 that exposes cached plugin probe results as JSON, so other local apps can consume the same usage data OpenUsage displays.

Routes:
- GET /v1/usage — array of enabled providers, ordered by plugin settings
- GET /v1/usage/:providerId — single provider (200/204/404)
- OPTIONS for CORS preflight

Only successful cached snapshots are served. Failed probes never overwrite prior cache. iconUrl is intentionally excluded to keep payloads small.

Related Issue

Type of Change

Testing
- bun run build and it succeeded
- bun run test and all tests pass
- bun tauri dev

Checklist
- main branch

Note
Medium Risk
Adds a new always-on loopback HTTP server and on-disk cache, which introduces new networking and persistence surfaces and could expose usage data to other local processes if misused. Core probe output types also become deserializable, so regressions could affect plugin output serialization/compatibility.
Overview
Adds a read-only local HTTP API on 127.0.0.1:6736 that serves cached plugin probe results via GET /v1/usage (enabled providers in settings order) and GET /v1/usage/:providerId (200/204/404), with permissive CORS and OPTIONS preflight support.

Successful probe outputs are now persisted to usage-api-cache.json and updated after each non-error probe; failed probes do not overwrite the last good snapshot. This introduces a new local_http_api module (cache + minimal TCP HTTP server), makes MetricLine/PluginOutput deserializable for cache round-trips, adds serial_test for tests, and documents the feature in README.md and docs/local-http-api.md.

Written by Cursor Bugbot for commit 33a7eed.