feat: http api #184
Conversation
Pull request overview
Adds an embedded HTTP API to the Tauri daemon so external tools (e.g., Alfred/Raycast) can query search results over localhost, including paging and short-term caching.
Changes:
- Introduces a rouille-based HTTP server with /search (GET/POST), plus 20s in-memory caching and offset/limit paging.
- Extends sort payload types to be serializable (for HTTP JSON + cache keys).
- Wires the server into the app runtime and adds the rouille dependency.
Reviewed changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| cardinal/src-tauri/src/server.rs | New HTTP server module implementing /search, caching, paging, and optional sorting. |
| cardinal/src-tauri/src/lib.rs | Spawns the HTTP server thread and adjusts channel cloning when constructing app state. |
| cardinal/src-tauri/src/sort.rs | Adds Serialize derives to sort payload types for HTTP usage. |
| cardinal/src-tauri/src/commands.rs | Makes NodeInfoMetadata clonable for reuse in HTTP response payloads. |
| cardinal/src-tauri/Cargo.toml | Adds rouille dependency. |
| cardinal/src-tauri/Cargo.lock | Locks new transitive dependencies from rouille. |
```rust
// Clear any stale results
// TODO: better logic?
let _lock = state.search_lock.lock().unwrap();
while let Ok(_) = state.result_rx.try_recv() {}
```
Draining state.result_rx with try_recv() to “clear stale results” is unsafe now that there are multiple Receivers (and even with one receiver it can drop valid in-flight responses). This can discard results intended for other callers and make subsequent recv() pick up an unrelated outcome. Prefer per-request reply channels (response_tx) or tagging outcomes with an ID and filtering, rather than clearing a shared queue.
Suggested change:

```diff
- // Clear any stale results
- // TODO: better logic?
- let _lock = state.search_lock.lock().unwrap();
- while let Ok(_) = state.result_rx.try_recv() {}
+ // Do not drain `result_rx` here. This is a shared receiver, and
+ // blindly removing queued messages can discard a valid result for an
+ // earlier in-flight request and desynchronize the following `recv()`.
+ // The lock still serializes the request/response flow in this
+ // function, but stale-result handling must be done by per-request
+ // reply channels or by tagging outcomes with a request ID/version.
+ let _lock = state.search_lock.lock().unwrap();
```
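The per-request reply channel the comment suggests could look like the sketch below. The `SearchJob`/`SearchOutcome` shapes here are simplified assumptions (the real types carry more fields), and std `mpsc` stands in for whatever channel crate the daemon uses; the point is that each request owns its own reply channel, so no draining of a shared queue is ever needed.

```rust
use std::sync::mpsc;

// Hypothetical simplified versions of the real SearchJob/SearchOutcome types.
struct SearchJob {
    query: String,
    // Each request carries its own reply sender, so the worker answers
    // exactly the caller that submitted the job -- no shared result queue.
    reply_tx: mpsc::Sender<SearchOutcome>,
}

#[derive(Debug, PartialEq)]
struct SearchOutcome {
    nodes: Option<Vec<u64>>,
}

// Worker side: answer on the job's own channel instead of a global result_tx.
fn worker(job_rx: mpsc::Receiver<SearchJob>) {
    for job in job_rx {
        // Stand-in "search": return the query length as a fake node index.
        let outcome = SearchOutcome { nodes: Some(vec![job.query.len() as u64]) };
        // Ignore errors: the requester may have given up and dropped its receiver.
        let _ = job.reply_tx.send(outcome);
    }
}

// Caller side: create a fresh channel per request; stale results from other
// callers can never appear on it.
fn search(job_tx: &mpsc::Sender<SearchJob>, query: &str) -> Option<SearchOutcome> {
    let (reply_tx, reply_rx) = mpsc::channel();
    job_tx.send(SearchJob { query: query.to_string(), reply_tx }).ok()?;
    reply_rx.recv().ok()
}
```

This mirrors the pattern the PR already uses for `NodeInfoRequest`, so it would also keep the two request paths consistent.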
```rust
let version = SEARCH_VERSION.fetch_add(1, Ordering::Relaxed);
let cancellation_token = CancellationToken::new(version);

if let Err(e) = state.search_tx.send(SearchJob {
    query: req.query,
    options: req.options,
    cancellation_token,
```
CancellationToken::new(version) updates the global ACTIVE_SEARCH_VERSION (see search_cancel), which cancels any other in-progress searches with a different version. As a result, HTTP searches will cancel UI searches and vice versa, leading to intermittent empty results (nodes == None). For server-side searches, use a non-global token (e.g., CancellationToken::noop()), or extend CancellationToken to support a dedicated AtomicU64 so HTTP and UI cancellation scopes don't interfere.
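One way to give HTTP and UI searches independent cancellation scopes, as the comment suggests, is sketched below. The names (`CancellationScope`, `ScopedToken`) are illustrative, not the crate's actual `CancellationToken` API; each scope owns its own `AtomicU64`, so bumping one scope's version cannot invalidate the other's tokens.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A cancellation scope owns its own version counter, so bumping the HTTP
// scope cannot invalidate tokens issued by the UI scope (and vice versa).
struct CancellationScope {
    active_version: AtomicU64,
}

struct ScopedToken<'a> {
    scope: &'a CancellationScope,
    version: u64,
}

impl CancellationScope {
    const fn new() -> Self {
        Self { active_version: AtomicU64::new(0) }
    }

    // Issue a new token, cancelling every older token in *this* scope only.
    fn new_token(&self) -> ScopedToken<'_> {
        let version = self.active_version.fetch_add(1, Ordering::Relaxed) + 1;
        ScopedToken { scope: self, version }
    }
}

impl ScopedToken<'_> {
    // A token is cancelled once a newer token has been issued in its scope.
    fn is_cancelled(&self) -> bool {
        self.scope.active_version.load(Ordering::Relaxed) != self.version
    }
}

// Separate static scopes: UI searches and HTTP searches never cancel each other.
static UI_SCOPE: CancellationScope = CancellationScope::new();
static HTTP_SCOPE: CancellationScope = CancellationScope::new();
```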
```rust
    }
};

let mut fetched_indices = outcome.nodes.unwrap_or_default();
```
outcome.nodes being None indicates the search was cancelled (per the SearchOutcome docs). Currently this is treated as an empty result set and then cached, so a transient cancellation (especially due to version interference) can poison the cache for 20s. Consider detecting None explicitly and retrying, returning a distinct error, or at least skipping cache insertion for cancelled outcomes.
Suggested change:

```diff
- let mut fetched_indices = outcome.nodes.unwrap_or_default();
+ let mut fetched_indices = match outcome.nodes {
+     Some(nodes) => nodes,
+     None => {
+         tracing::warn!("Search was cancelled before completing; skipping cache insertion");
+         return Response::text("Search was cancelled").with_status_code(503);
+     }
+ };
```
```rust
pub fn start_server(state: ServerState, addr: impl ToSocketAddrs) {
    let state = Arc::new(state);
    rouille::start_server(addr, move |request| {
        rouille::log(request, std::io::stdout(), || {
            rouille::router!(request,
                (GET) (/search) => {
                    let query = request.get_param("query").unwrap_or_default();
                    let limit = request.get_param("limit").and_then(|v| v.parse().ok());
                    let offset = request.get_param("offset").and_then(|v| v.parse().ok());
                    let req = SearchRequest {
                        query,
                        options: SearchOptionsPayload::default(),
                        limit,
                        offset,
                        sort: None,
                    };
                    handle_search(state.clone(), req)
                },
                (POST) (/search) => {
                    let req: SearchRequest = match rouille::input::json_input(request) {
                        Ok(r) => r,
                        Err(_) => return Response::text("Invalid JSON").with_status_code(400),
                    };
                    handle_search(state.clone(), req)
                },
                _ => Response::empty_404()
            )
        })
    });
}

fn handle_search(state: Arc<ServerState>, req: SearchRequest) -> Response {
```
This PR adds a new HTTP API surface (routing, paging, caching, optional sort) but there are no unit/integration tests covering request parsing and the paging/sort/cache behavior. Since this crate already has Rust tests in other modules, adding a small set of tests for handle_search (or extracted pure helpers) would help prevent regressions.
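For example, if the paging logic were extracted into a pure helper (the `apply_paging` name and signature below are hypothetical), it becomes trivially testable without spinning up the server:

```rust
// Hypothetical pure helper: paging extracted from handle_search so it can be
// unit-tested in isolation.
fn apply_paging<T>(mut results: Vec<T>, offset: Option<usize>, limit: Option<usize>) -> Vec<T> {
    let offset = offset.unwrap_or(0);
    // An offset past the end yields an empty page rather than panicking.
    if offset >= results.len() {
        return Vec::new();
    }
    results.drain(..offset);
    if let Some(limit) = limit {
        results.truncate(limit);
    }
    results
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn paging_applies_offset_then_limit() {
        let items = vec![1, 2, 3, 4, 5];
        assert_eq!(apply_paging(items.clone(), Some(1), Some(2)), vec![2, 3]);
        assert_eq!(apply_paging(items.clone(), Some(10), None), Vec::<i32>::new());
        assert_eq!(apply_paging(items, None, Some(3)), vec![1, 2, 3]);
    }
}
```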
```rust
result_rx,
node_info_tx,
search_tx.clone(),
result_rx.clone(),
```
result_rx is being cloned and consumed by both the Tauri command path (SearchState) and the new HTTP server. Crossbeam channels are multi-consumer (not broadcast), so a SearchOutcome can be received by the “wrong” receiver, causing incorrect results or one side blocking forever waiting for a response that was consumed elsewhere. Consider restructuring SearchJob to include a per-request response_tx (like NodeInfoRequest), or create a dedicated search/result channel pair for the server so responses are routed deterministically.
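The failure mode is easy to reproduce: with a multi-consumer queue each message is delivered to exactly one receiver, not broadcast to all of them. The sketch below illustrates this with a `std::sync::mpsc` receiver shared behind a mutex (an assumed stand-in, since crossbeam's clonable `Receiver` has the same one-message-one-consumer semantics):

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Two consumers competing for the same queue: each message goes to exactly
// one of them, so a result meant for the UI path can be "stolen" by the
// HTTP path (and vice versa).
fn split_between_consumers(messages: Vec<u32>) -> (Vec<u32>, Vec<u32>) {
    let (tx, rx) = mpsc::channel();
    for m in &messages {
        tx.send(*m).unwrap();
    }
    drop(tx); // close the queue so the consumers stop
    let rx = Arc::new(Mutex::new(rx));

    let spawn_consumer = |rx: Arc<Mutex<mpsc::Receiver<u32>>>| {
        thread::spawn(move || {
            let mut got = Vec::new();
            loop {
                // Lock, take one message, release: classic work-stealing.
                let msg = rx.lock().unwrap().try_recv();
                match msg {
                    Ok(m) => got.push(m),
                    Err(_) => break, // sender dropped and queue empty
                }
            }
            got
        })
    };

    let a = spawn_consumer(Arc::clone(&rx));
    let b = spawn_consumer(rx);
    (a.join().unwrap(), b.join().unwrap())
}
```

Every message lands in exactly one consumer's vector, which is precisely why a shared `result_rx` cannot serve two independent callers safely.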
```rust
let server_state =
    server::ServerState::new(search_tx.clone(), result_rx.clone(), node_info_tx.clone());
std::thread::spawn(move || {
    server::start_server(server_state, "127.0.0.1:3388");
});
```
The HTTP server is started immediately, but the background loop that consumes search_rx/produces result_tx only starts after wait_for_logic_start(...) succeeds. If /search is called before the logic thread is running, handle_search will block on result_rx.recv() indefinitely. Consider starting the server only after the logic thread is running, or returning a 503/“not ready” response until initialization completes.
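A minimal readiness gate could be an atomic flag the logic thread flips once its loop is running, with the handler returning 503 until then. This is a sketch with assumed names (`Readiness`, `handle_search_guarded`); the real initialization goes through `wait_for_logic_start`:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Shared readiness flag: the logic thread sets it once its search loop is up.
#[derive(Clone)]
struct Readiness(Arc<AtomicBool>);

impl Readiness {
    fn new() -> Self {
        Self(Arc::new(AtomicBool::new(false)))
    }
    fn mark_ready(&self) {
        self.0.store(true, Ordering::Release);
    }
    fn is_ready(&self) -> bool {
        self.0.load(Ordering::Acquire)
    }
}

// Sketch of the handler guard: answer 503 instead of blocking forever on
// recv() while the logic thread has not started yet.
fn handle_search_guarded(ready: &Readiness) -> (u16, &'static str) {
    if !ready.is_ready() {
        return (503, "Search backend is not ready yet");
    }
    (200, "ok") // ... the real handler would run the search here
}
```

The alternative (only spawning the server thread after `wait_for_logic_start` succeeds) avoids the flag entirely but means the port is not open during startup.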
@copilot any advice on the design or the implementation?
ldm0 left a comment
This is a good start! A few nits:
```rust
(GET) (/search) => {
    let query = request.get_param("query").unwrap_or_default();
    let limit = request.get_param("limit").and_then(|v| v.parse().ok());
    let offset = request.get_param("offset").and_then(|v| v.parse().ok());
```
Let's use a better-maintained web framework like axum. You can create a separate multi-threaded tokio runtime for it; that's fine.
```rust
        results.clear();
    }
}
if let Some(limit) = req.limit {
```
Add a hard limit like 10000; otherwise a very large limit value is likely to bog down the runtime.
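The cap could be as simple as clamping the requested value. The 10000 ceiling is the reviewer's suggestion; the helper name and the default of 100 are illustrative assumptions:

```rust
// Upper bound suggested in review; tune as needed.
const MAX_LIMIT: usize = 10_000;
// Assumed default page size when the client omits `limit`.
const DEFAULT_LIMIT: usize = 100;

// Hypothetical helper: normalize a client-supplied limit so a huge (or
// missing) value can never make the server materialize unbounded results.
fn effective_limit(requested: Option<usize>) -> usize {
    requested.unwrap_or(DEFAULT_LIMIT).min(MAX_LIMIT)
}
```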
```rust
{
    let mut cache = state.cache.lock().unwrap();
    cache.retain(|_, entry| entry.timestamp.elapsed() < Duration::from_secs(CACHE_TIMEOUT));
```
I think the cache is unnecessary.
The core issue behind slow queries is file metadata fetching (metadata isn't fetched during walk_fs). Once the metadata of the files returned by a search has been fetched, it is cached in SearchCache, so subsequent queries over those files will be fast enough.
fair enough
maybe it's proper to add a Sender to send the search result back in the
Fix

fix #4

What this PR does

Support an HTTP API to allow for integration into Alfred/Raycast.

Here are the key designs:
- rouille

Try it

The workflow has been uploaded to my GitHub repo, give it a shot!

Demo of Alfred integration

TODOs