
feat: http api #184

Draft
hsqStephenZhang wants to merge 2 commits into cardisoft:master from hsqStephenZhang:feat/http_api

Conversation

@hsqStephenZhang
Collaborator

Fix

Fixes #4

What this PR does

Adds an HTTP API to allow integration with Alfred/Raycast.

Here are the key design decisions:

  • for simplicity and synchronous API handling, it uses rouille
  • offset+limit parameters support paging
  • results are cached for 20s
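The caching point above could look roughly like this (a minimal sketch; `CacheEntry`, the string key, and `get_cached` are illustrative names, not the PR's actual definitions):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical 20-second result cache, keyed by query.
const CACHE_TIMEOUT: Duration = Duration::from_secs(20);

struct CacheEntry {
    timestamp: Instant,
    results: Vec<String>,
}

fn get_cached(cache: &mut HashMap<String, CacheEntry>, key: &str) -> Option<Vec<String>> {
    // Evict anything older than the timeout, then look up the key.
    cache.retain(|_, e| e.timestamp.elapsed() < CACHE_TIMEOUT);
    cache.get(key).map(|e| e.results.clone())
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(
        "rust".to_string(),
        CacheEntry { timestamp: Instant::now(), results: vec!["main.rs".to_string()] },
    );
    assert_eq!(get_cached(&mut cache, "rust"), Some(vec!["main.rs".to_string()]));
    assert_eq!(get_cached(&mut cache, "python"), None);
    println!("cache ok");
}
```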

Try it

The workflow has been uploaded to my GitHub repo; give it a shot!

Demo of Alfred integration


TODOs

  • handle concurrent requests gracefully

@ldm0 ldm0 requested review from Copilot and ldm0 and removed request for Copilot April 25, 2026 12:22

Copilot AI left a comment


Pull request overview

Adds an embedded HTTP API to the Tauri daemon so external tools (e.g., Alfred/Raycast) can query search results over localhost, including paging and short-term caching.

Changes:

  • Introduces a rouille-based HTTP server with /search (GET/POST), plus 20s in-memory caching and offset/limit paging.
  • Extends sort payload types to be serializable (for HTTP JSON + cache keys).
  • Wires the server into the app runtime and adds the rouille dependency.

Reviewed changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 6 comments.

Summary per file:

  • cardinal/src-tauri/src/server.rs: New HTTP server module implementing /search, caching, paging, and optional sorting.
  • cardinal/src-tauri/src/lib.rs: Spawns the HTTP server thread and adjusts channel cloning when constructing app state.
  • cardinal/src-tauri/src/sort.rs: Adds Serialize derives to sort payload types for HTTP usage.
  • cardinal/src-tauri/src/commands.rs: Makes NodeInfoMetadata clonable for reuse in HTTP response payloads.
  • cardinal/src-tauri/Cargo.toml: Adds the rouille dependency.
  • cardinal/src-tauri/Cargo.lock: Locks new transitive dependencies from rouille.


Comment thread cardinal/src-tauri/src/server.rs Outdated
Comment on lines +136 to +139
// Clear any stale results
// TODO: better logic?
let _lock = state.search_lock.lock().unwrap();
while let Ok(_) = state.result_rx.try_recv() {}

Copilot AI Apr 25, 2026


Draining state.result_rx with try_recv() to “clear stale results” is unsafe now that there are multiple Receivers (and even with one receiver it can drop valid in-flight responses). This can discard results intended for other callers and make subsequent recv() pick up an unrelated outcome. Prefer per-request reply channels (response_tx) or tagging outcomes with an ID and filtering, rather than clearing a shared queue.

Suggested change
// Clear any stale results
// TODO: better logic?
let _lock = state.search_lock.lock().unwrap();
while let Ok(_) = state.result_rx.try_recv() {}
// Do not drain `result_rx` here. This is a shared receiver, and
// blindly removing queued messages can discard a valid result for an
// earlier in-flight request and desynchronize the following `recv()`.
// The lock still serializes the request/response flow in this
// function, but stale-result handling must be done by per-request
// reply channels or by tagging outcomes with a request ID/version.
let _lock = state.search_lock.lock().unwrap();
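The per-request reply channel this comment suggests could be sketched with `std::sync::mpsc` like this (`Job`, `run_worker`, and `search` are illustrative names; the real code uses crossbeam channels and richer payload types):

```rust
use std::sync::mpsc;
use std::thread;

// Each job carries its own Sender, so the worker can never deliver a
// result to the wrong caller and no shared queue needs draining.
struct Job {
    query: String,
    response_tx: mpsc::Sender<String>,
}

fn run_worker(job_rx: mpsc::Receiver<Job>) {
    for job in job_rx {
        // The real worker would run the search; here we echo the query.
        let _ = job.response_tx.send(format!("results for {}", job.query));
    }
}

fn search(job_tx: &mpsc::Sender<Job>, query: &str) -> String {
    let (response_tx, response_rx) = mpsc::channel();
    job_tx
        .send(Job { query: query.to_string(), response_tx })
        .unwrap();
    // Only this request's result can arrive on this receiver.
    response_rx.recv().unwrap()
}

fn main() {
    let (job_tx, job_rx) = mpsc::channel();
    let worker = thread::spawn(move || run_worker(job_rx));
    assert_eq!(search(&job_tx, "cargo"), "results for cargo");
    drop(job_tx); // closing the channel lets the worker loop exit
    worker.join().unwrap();
    println!("per-request replies ok");
}
```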

Comment thread cardinal/src-tauri/src/server.rs Outdated
Comment on lines +141 to +147
let version = SEARCH_VERSION.fetch_add(1, Ordering::Relaxed);
let cancellation_token = CancellationToken::new(version);

if let Err(e) = state.search_tx.send(SearchJob {
query: req.query,
options: req.options,
cancellation_token,

Copilot AI Apr 25, 2026


CancellationToken::new(version) updates the global ACTIVE_SEARCH_VERSION (see search_cancel), which cancels any other in-progress searches with a different version. As a result, HTTP searches will cancel UI searches and vice versa, leading to intermittent empty results (nodes == None). For server-side searches, use a non-global token (e.g., CancellationToken::noop()), or extend CancellationToken to support a dedicated AtomicU64 so HTTP and UI cancellation scopes don't interfere.
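A scope-separated token could be sketched like this (a stand-in `CancellationToken` over one dedicated `AtomicU64` per caller; the names and semantics are assumptions, not the crate's actual API):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// A token is cancelled when its counter has moved past the version it
// was created with. Because HTTP and UI use separate counters, starting
// a search on one side cannot cancel a search on the other.
struct CancellationToken<'a> {
    version: u64,
    active: &'a AtomicU64,
}

impl<'a> CancellationToken<'a> {
    fn new(active: &'a AtomicU64) -> Self {
        let version = active.fetch_add(1, Ordering::Relaxed) + 1;
        Self { version, active }
    }
    fn is_cancelled(&self) -> bool {
        self.active.load(Ordering::Relaxed) != self.version
    }
}

static UI_SEARCHES: AtomicU64 = AtomicU64::new(0);
static HTTP_SEARCHES: AtomicU64 = AtomicU64::new(0);

fn main() {
    let ui = CancellationToken::new(&UI_SEARCHES);
    let http = CancellationToken::new(&HTTP_SEARCHES);
    // Starting a new HTTP search cancels the old HTTP one only.
    let http2 = CancellationToken::new(&HTTP_SEARCHES);
    assert!(http.is_cancelled());
    assert!(!http2.is_cancelled());
    assert!(!ui.is_cancelled());
    println!("scoped cancellation ok");
}
```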

Comment thread cardinal/src-tauri/src/server.rs Outdated
}
};

let mut fetched_indices = outcome.nodes.unwrap_or_default();

Copilot AI Apr 25, 2026


outcome.nodes being None indicates the search was cancelled (per SearchOutcome docs). Currently this is treated as an empty result set and then cached, which can make a transient cancellation (especially due to version interference) poison the cache for 20s. Consider detecting None explicitly and either retrying, returning a distinct error, and/or skipping cache insertion for cancelled outcomes.

Suggested change
let mut fetched_indices = outcome.nodes.unwrap_or_default();
let mut fetched_indices = match outcome.nodes {
    Some(nodes) => nodes,
    None => {
        tracing::warn!("Search was cancelled before completing; skipping cache insertion");
        return Response::text("Search was cancelled").with_status_code(503);
    }
};

Comment thread cardinal/src-tauri/src/server.rs Outdated
Comment on lines +81 to +112
pub fn start_server(state: ServerState, addr: impl ToSocketAddrs) {
let state = Arc::new(state);
rouille::start_server(addr, move |request| {
rouille::log(request, std::io::stdout(), || {
rouille::router!(request,
(GET) (/search) => {
let query = request.get_param("query").unwrap_or_default();
let limit = request.get_param("limit").and_then(|v| v.parse().ok());
let offset = request.get_param("offset").and_then(|v| v.parse().ok());
let req = SearchRequest {
query,
options: SearchOptionsPayload::default(),
limit,
offset,
sort: None,
};
handle_search(state.clone(), req)
},
(POST) (/search) => {
let req: SearchRequest = match rouille::input::json_input(request) {
Ok(r) => r,
Err(_) => return Response::text("Invalid JSON").with_status_code(400),
};
handle_search(state.clone(), req)
},
_ => Response::empty_404()
)
})
});
}

fn handle_search(state: Arc<ServerState>, req: SearchRequest) -> Response {

Copilot AI Apr 25, 2026


This PR adds a new HTTP API surface (routing, paging, caching, optional sort) but there are no unit/integration tests covering request parsing and the paging/sort/cache behavior. Since this crate already has Rust tests in other modules, adding a small set of tests for handle_search (or extracted pure helpers) would help prevent regressions.
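A testable extraction might look like this (a sketch; `apply_paging` is a hypothetical pure helper, not code from this PR):

```rust
// Pulling the paging arithmetic out of the HTTP handler makes it
// testable without standing up a server.
fn apply_paging(mut hits: Vec<u64>, offset: usize, limit: usize) -> Vec<u64> {
    if offset >= hits.len() {
        return Vec::new();
    }
    hits.drain(..offset);
    hits.truncate(limit);
    hits
}

#[cfg(test)]
mod tests {
    use super::apply_paging;

    #[test]
    fn pages_within_bounds() {
        assert_eq!(apply_paging(vec![1, 2, 3, 4], 1, 2), vec![2, 3]);
    }

    #[test]
    fn offset_past_end_is_empty() {
        assert_eq!(apply_paging(vec![1, 2], 5, 2), Vec::<u64>::new());
    }
}

fn main() {
    // `cargo test` would exercise the module above; here we just call it once.
    assert_eq!(apply_paging(vec![10, 20, 30], 0, 2), vec![10, 20]);
    println!("helper ok");
}
```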

result_rx,
node_info_tx,
search_tx.clone(),
result_rx.clone(),

Copilot AI Apr 25, 2026


result_rx is being cloned and consumed by both the Tauri command path (SearchState) and the new HTTP server. Crossbeam channels are multi-consumer (not broadcast), so a SearchOutcome can be received by the “wrong” receiver, causing incorrect results or one side blocking forever waiting for a response that was consumed elsewhere. Consider restructuring SearchJob to include a per-request response_tx (like NodeInfoRequest), or create a dedicated search/result channel pair for the server so responses are routed deterministically.

Suggested change
result_rx.clone(),

Comment on lines +167 to +171
let server_state =
server::ServerState::new(search_tx.clone(), result_rx.clone(), node_info_tx.clone());
std::thread::spawn(move || {
server::start_server(server_state, "127.0.0.1:3388");
});

Copilot AI Apr 25, 2026


The HTTP server is started immediately, but the background loop that consumes search_rx/produces result_tx only starts after wait_for_logic_start(...) succeeds. If /search is called before the logic thread is running, handle_search will block on result_rx.recv() indefinitely. Consider starting the server only after the logic thread is running, or returning a 503/“not ready” response until initialization completes.
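The "not ready" fallback could be gated on a readiness flag, roughly like this (a sketch; `LOGIC_READY` and the status-code tuple stand in for the real state and rouille's `Response`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// The logic thread would flip this flag once its loop is running; until
// then the HTTP handler answers 503 instead of blocking on recv().
static LOGIC_READY: AtomicBool = AtomicBool::new(false);

fn handle_search_stub() -> (u16, &'static str) {
    if !LOGIC_READY.load(Ordering::Acquire) {
        return (503, "search backend not ready yet");
    }
    (200, "ok")
}

fn main() {
    assert_eq!(handle_search_stub().0, 503);
    // Set after wait_for_logic_start(...) succeeds in the real code.
    LOGIC_READY.store(true, Ordering::Release);
    assert_eq!(handle_search_stub().0, 200);
    println!("readiness gate ok");
}
```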

@hsqStephenZhang
Collaborator Author

@copilot any advice on the design or the implementation?

Member

@ldm0 ldm0 left a comment


This is a good start! A few nits

Comment thread cardinal/src-tauri/src/server.rs Outdated
(GET) (/search) => {
let query = request.get_param("query").unwrap_or_default();
let limit = request.get_param("limit").and_then(|v| v.parse().ok());
let offset = request.get_param("offset").and_then(|v| v.parse().ok());
Member


Let's use a better-maintained web framework like axum. You can create a separate multi-threaded tokio runtime for it; that's okay.

Comment thread cardinal/src-tauri/src/server.rs Outdated
results.clear();
}
}
if let Some(limit) = req.limit {
Member


Add a hard limit like 10000; otherwise a large limit value is likely to get the runtime stuck.
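Clamping could be as simple as this sketch (`MAX_LIMIT` and `DEFAULT_LIMIT` are illustrative values, not decided in this PR):

```rust
// Cap the client-supplied limit so a huge request cannot stall the runtime.
const MAX_LIMIT: usize = 10_000;
const DEFAULT_LIMIT: usize = 100;

fn effective_limit(requested: Option<usize>) -> usize {
    requested.unwrap_or(DEFAULT_LIMIT).min(MAX_LIMIT)
}

fn main() {
    assert_eq!(effective_limit(None), 100);
    assert_eq!(effective_limit(Some(50)), 50);
    // An oversized limit is clamped instead of honored.
    assert_eq!(effective_limit(Some(usize::MAX)), 10_000);
    println!("limit clamp ok");
}
```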

Comment thread cardinal/src-tauri/src/server.rs Outdated

{
let mut cache = state.cache.lock().unwrap();
cache.retain(|_, entry| entry.timestamp.elapsed() < Duration::from_secs(CACHE_TIMEOUT));
Member


I think the cache is unnecessary.

The core issue with slow queries is file metadata fetching (metadata is not fetched during walk_fs). Once the metadata of the files returned by a search has been fetched, it is cached in SearchCache, so subsequent queries on those files will be fast enough.

Collaborator Author


fair enough

@hsqStephenZhang
Collaborator Author

Maybe it would be better to add a Sender to SearchJob for sending the search result back? Otherwise the HTTP API server and the search UI might interfere with each other due to the shared CancellationToken and the incorrect channel usage.



Development

Successfully merging this pull request may close these issues.

[FR][P1] Add HTTP API to the daemon

3 participants