OpenWire is an OkHttp-inspired async HTTP client for Rust.
It uses hyper for HTTP protocol state, but owns the client-side semantics
around request policy, route planning, connection pooling, fast fallback, and
protocol binding. The default executor/timer and TLS integrations are Tokio and
Rustls.
It is aimed at cases where a plain protocol client is not enough and the networking layer needs clear policy behavior, reusable transport building blocks, and stable observability hooks.
- `Client`, `ClientBuilder`, and one-shot `Call` over `http::Request<RequestBody>`
- request-scoped timeout, retry, and redirect overrides through `Call`
- application and network interceptors
- built-in `LoggerInterceptor` with `LogLevel::{Basic, Headers, Body}`
- event listeners and stable request / connection observability
- retries, redirects, cookies, and origin / proxy authentication follow-ups
- HTTP forward proxy, HTTPS CONNECT proxy, and SOCKS5 proxy support, including `socks5://user:pass@host:port` credentials and proxy-endpoint fast fallback
- dynamic per-request proxy selection via `ProxySelector`, including ordered proxy candidate fallback and `DIRECT`, with `ProxyRules` as the built-in rule-based implementation
- custom DNS, TCP, TLS, executor, and timer hooks
- an owned connection core with route planning, pooling, and direct HTTP/1.1 / HTTP/2 protocol binding
- `RequestBody::absent()` for typical no-body requests and `RequestBody::explicit_empty()` when zero-length framing must be explicit (see the sketch after this list)
- optional JSON helpers behind the `json` feature
- optional WebSocket (RFC 6455) client behind the `websocket` feature, with a pluggable `WebSocketEngine` trait and a built-in native codec
- `openwire-cache` as a separate application-layer cache crate
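As a sketch of the body-framing distinction called out above (both URIs are placeholders): `absent()` omits body framing entirely, while `explicit_empty()` makes zero-length framing explicit on the wire.

```rust
use http::Request;
use openwire::RequestBody;

// Typical GET: no body, and no body framing on the wire.
let get = Request::builder()
    .uri("http://example.com/")
    .body(RequestBody::absent())?;

// POST where the peer must see explicit zero-length framing.
let post = Request::builder()
    .method("POST")
    .uri("http://example.com/submit")
    .body(RequestBody::explicit_empty())?;
```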
- `crates/openwire`: public client API, policy layer, transport integration
- `crates/openwire-cache`: cache interceptor and in-memory cache store
- `crates/openwire-core`: shared body, error, event, executor/timer, transport, and policy traits
- `crates/openwire-tokio`: Tokio executor, timer, I/O, DNS, and TCP adapters
- `crates/openwire-rustls`: default Rustls TLS connector
- `crates/openwire-test`: local test support
Tokio-specific adapters are imported from `openwire-tokio` directly; `openwire` keeps the client API and the higher-level policy / planning surfaces.
```rust
use http::Request;
use openwire::{Client, RequestBody};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().build()?;

    let request = Request::builder()
        .uri("http://example.com/")
        .body(RequestBody::absent())?;

    let response = client.execute(request).await?;
    println!("status = {}", response.status());
    Ok(())
}
```

Request-scoped overrides stay on the canonical execution path:
```rust
use std::time::Duration;

let response = client
    .new_call(request)
    .call_timeout(Duration::from_secs(2))
    .connect_timeout(Duration::from_millis(250))
    .follow_redirects(false)
    .execute()
    .await?;
```

These per-request retry and redirect overrides target the built-in scalar policy knobs. Custom `RetryPolicy` and `RedirectPolicy` objects remain client-scoped.
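For the client-scoped side, a minimal sketch; the `should_retry` method and the `retry_policy(...)` builder hook below are hypothetical stand-ins, since this README names `RetryPolicy` but not its trait shape or registration API:

```rust
use openwire::Client;

// Illustrative client-scoped policy that never retries.
struct NeverRetry;

impl openwire::RetryPolicy for NeverRetry {
    // Hypothetical signature, for illustration only.
    fn should_retry(&self, _attempt: u32) -> bool {
        false
    }
}

let client = Client::builder()
    .retry_policy(NeverRetry) // hypothetical builder hook
    .build()?;
```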
OpenWire includes an OkHttp-style `LoggerInterceptor` that can be attached as an application interceptor for logical-call logging, or as a network interceptor for post-normalization, per-attempt wire logging:
```rust
use http::Request;
use openwire::{Client, LogLevel, LoggerInterceptor, RequestBody};

let client = Client::builder()
    .application_interceptor(LoggerInterceptor::new(LogLevel::Body))
    .build()?;

let request = Request::builder()
    .method("POST")
    .uri("https://api.example.com/users")
    .header("content-type", "application/json")
    .header("authorization", "Bearer secret")
    .body(RequestBody::from_static(br#"{"name":"Alice","age":18}"#))?;

let response = client.execute(request).await?;
println!("status = {}", response.status());
```

`LogLevel::Body` pretty-prints JSON with `serde_json::to_writer_pretty`, redacts `Authorization`, `Proxy-Authorization`, `Cookie`, and `Set-Cookie` by default, and only buffers bodies when they are replayable and bounded. Streaming request bodies, chunked responses, SSE, upgraded protocols, and oversized bodies are logged as omitted placeholders instead of being fully drained into memory.
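For per-attempt wire logging, the same interceptor sits in the network chain instead; a sketch, assuming the builder exposes a `network_interceptor(...)` hook that mirrors `application_interceptor(...)` (the exact method name is an assumption):

```rust
use openwire::{Client, LogLevel, LoggerInterceptor};

// Headers-level logging per network attempt, after request normalization.
// `network_interceptor` is assumed to parallel `application_interceptor`.
let client = Client::builder()
    .network_interceptor(LoggerInterceptor::new(LogLevel::Headers))
    .build()?;
```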
The WebSocket path still bypasses the interceptor chain today, so `LoggerInterceptor` covers HTTP calls made through `Client::execute(...)` / `Call::execute()` rather than `Client::new_websocket(...)`.
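For completeness, a minimal WebSocket sketch under the `websocket` feature; only `Client::new_websocket(...)` is named by this README, so the async shape and the returned handle are assumptions, and the endpoint URI is a placeholder:

```rust
use http::Request;
use openwire::{Client, RequestBody};

let client = Client::builder().build()?;

let request = Request::builder()
    .uri("wss://echo.example.com/") // placeholder endpoint
    .body(RequestBody::absent())?;

// Assumed shape: resolves to a socket handle whose send/receive API comes
// from the WebSocketEngine in use. This call bypasses the interceptor chain.
let _websocket = client.new_websocket(request).await?;
```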
Proxy routing is configured through a selector so the active proxy can change at execution time. A selector can return multiple candidates for one request; the transport tries them in order within the same logical attempt:
```rust
use openwire::{Client, Proxy, ProxySelection, ProxySelector};

#[derive(Clone)]
struct MobileSelector;

impl ProxySelector for MobileSelector {
    fn select(&self, _uri: &http::Uri) -> Result<ProxySelection, openwire::WireError> {
        Ok(ProxySelection::new()
            .push_proxy(Proxy::https("http://proxy-a.local:8080")?)
            .push_proxy(Proxy::https("http://proxy-b.local:8080")?)
            .push_direct())
    }
}

let client = Client::builder()
    .proxy_selector(MobileSelector)
    .build()?;
```

`ProxyRules` remains available when a simple ordered rule list is enough.
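When a static ordered list is enough, `ProxyRules` can stand in for a hand-written selector. The sketch below assumes it shares the `push_proxy` / `push_direct` builder shape of `ProxySelection`; its real constructor API may differ:

```rust
use openwire::{Client, Proxy, ProxyRules};

// Assumed builder shape, mirroring ProxySelection above.
let rules = ProxyRules::new()
    .push_proxy(Proxy::https("http://proxy-a.local:8080")?)
    .push_direct();

let client = Client::builder()
    .proxy_selector(rules)
    .build()?;
```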
Once a proxied attempt succeeds, later auth and redirect follow-ups in the same logical call prefer that proxy first, so `Proxy-Authorization` state stays bound to the proxy that actually handled the request.
`Client::builder()` currently defaults to:
- pooled idle connection eviction after 5 minutes
- at most 5 idle pooled connections per address
- at most 64 in-flight requests across the client
- at most 5 in-flight requests per address
These request and pool limits are enforced per address, not only per origin host. If a caller needs the previous unbounded request-admission or idle-pool behavior, set the corresponding knobs explicitly, for example to `usize::MAX`.
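A sketch of setting those knobs explicitly; every builder method name below is an illustrative guess at the corresponding knob, not confirmed `ClientBuilder` API:

```rust
use std::time::Duration;
use openwire::Client;

// Method names are hypothetical; map them to the real ClientBuilder knobs.
let client = Client::builder()
    .pool_idle_timeout(Duration::from_secs(300))  // 5-minute idle eviction
    .max_idle_per_address(5)
    .max_requests(usize::MAX)                     // restore unbounded admission
    .max_requests_per_address(usize::MAX)
    .build()?;
```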
Today the project includes:
- request execution through `Client::execute(...)` and `Call::execute()`
- application and network interceptors
- retry, redirect, cookie, and authenticator follow-up handling
- HTTP forward proxy, HTTPS CONNECT proxy, and SOCKS5 proxy support
- owned HTTP/1.1 and HTTP/2 bindings via `hyper::client::conn`
- connection pooling, fast fallback, and route planning
- optional cache integration in `openwire-cache`
- an opt-in live-network smoke suite outside the required CI path
```sh
cargo check --workspace --all-targets
cargo test --workspace --all-targets
cargo bench -p openwire --bench perf_baseline -- --noplot
```

Optional live-network smoke suite:

```sh
cargo test -p openwire --test live_network -- --ignored --test-threads=1
```

This suite is opt-in, hits public internet endpoints, and is not part of the required CI gate.
The repository also provides a separate GitHub Actions workflow at `.github/workflows/live-network.yml` for manual dispatches and weekly scheduled runs, without affecting the required CI path.
Deferred public-origin follow-ons are intentionally kept out of this baseline when they require external credentials, temporary remote resources, untrusted public proxies, or timing-sensitive assertions that public networks cannot make credible. Those follow-ons are tracked in `docs/live-network-follow-ups.md`.

Detailed execution flow, transport layering, and extension boundaries are in `docs/ARCHITECTURE.md`.

Error-handling review, current gaps, and the long-term failure-model roadmap are tracked in `docs/error-handling-roadmap.md`.