Async Rust SDK for Oracle Cloud Infrastructure. Supports API key, Instance Principal, and OKE Workload Identity authentication. Built on tokio + reqwest with connection pooling tuned for high-throughput workloads.
The crate defaults to rustls with the ring crypto provider.
Choose a backend at compile time:
```toml
# Default (rustls + ring)
oci-sdk = "0.4"

# rustls + aws-lc
oci-sdk = { version = "0.4", default-features = false, features = ["tls-rustls-aws-lc"] }

# native-tls
oci-sdk = { version = "0.4", default-features = false, features = ["tls-native"] }
```

Exactly one TLS feature must be enabled:

- `tls-rustls-ring` (default)
- `tls-rustls-aws-lc`
- `tls-native`
| Service | Client | Operations |
|---|---|---|
| Object Storage | `ObjectStorageClient` | `list_objects`, `head_object`, `get_object` (streaming), `put_object`, `delete_object`, `restore_objects` |
| Data Flow | `DataFlowClient` | Spark job/run management |
| Identity | `Identity` | User management |
| NoSQL | `Nosql` | Table operations |
| Queue | `Queue` | Message queue operations |
| Secrets | `Secrets` | Secret retrieval |
All clients accept any type implementing AuthProvider. Three implementations are included:
Reads from `~/.oci/config` by default. Supports the `[DEFAULT]` profile or a named profile.
```rust
use oci_sdk::auth::ConfigFileAuth;

// Default: ~/.oci/config, [DEFAULT] profile
let auth = ConfigFileAuth::from_file(None, None)?;

// Custom path and profile
let auth = ConfigFileAuth::from_file(
    Some("/path/to/config".to_string()),
    Some("MY_PROFILE".to_string()),
)?;
```

Config file format:
```ini
[DEFAULT]
user=ocid1.user.oc1...<unique_ID>
fingerprint=20:3b:97:13:55:1c:...
tenancy=ocid1.tenancy.oc1...<unique_ID>
region=us-ashburn-1
key_file=~/.oci/oci_api_key.pem
```

For workloads running on OCI Compute instances. Credentials are fetched automatically from the Instance Metadata Service (IMDS) and refreshed before expiry.
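The "refreshed before expiry" behavior is essentially an expiry-guarded cache. Below is a minimal std-only sketch of that pattern with stand-in types; it is not the crate's internal implementation, and the real code would perform the IMDS round trip where `fetch` is:

```rust
use std::time::{Duration, Instant};

/// Stand-in for a fetched credential: a token plus its expiry instant.
struct Credential {
    token: String,
    expires_at: Instant,
}

/// Expiry-guarded cache: refresh when within `leeway` of expiry.
struct CredentialCache {
    cached: Option<Credential>,
    leeway: Duration,
    fetches: u32, // counts refreshes, for illustration
}

impl CredentialCache {
    fn new(leeway: Duration) -> Self {
        Self { cached: None, leeway, fetches: 0 }
    }

    /// Return a valid token, refreshing first if the cached one is
    /// missing or about to expire.
    fn token(&mut self, now: Instant) -> String {
        let stale = match &self.cached {
            Some(c) => now + self.leeway >= c.expires_at,
            None => true,
        };
        if stale {
            self.cached = Some(self.fetch(now));
        }
        self.cached.as_ref().unwrap().token.clone()
    }

    /// Stand-in for the IMDS round trip (hypothetical; real code
    /// would call the metadata service here).
    fn fetch(&mut self, now: Instant) -> Credential {
        self.fetches += 1;
        Credential {
            token: format!("token-{}", self.fetches),
            expires_at: now + Duration::from_secs(3600),
        }
    }
}

fn main() {
    let mut cache = CredentialCache::new(Duration::from_secs(300));
    let t0 = Instant::now();
    let a = cache.token(t0);                             // first call fetches
    let b = cache.token(t0 + Duration::from_secs(60));   // still fresh: cached
    let c = cache.token(t0 + Duration::from_secs(3400)); // within leeway: refetch
    assert_eq!(a, b);
    assert_ne!(b, c);
    println!("fetches: {}", cache.fetches);
}
```

The leeway window (300s here, chosen arbitrarily) ensures a token is never handed out so close to expiry that a request signed with it could fail mid-flight.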
```rust
use oci_sdk::auth::InstancePrincipalAuth;
use std::sync::Arc;

// Region auto-detected from IMDS
let auth = Arc::new(InstancePrincipalAuth::new(None));

// Or pin to a specific region
let auth = Arc::new(InstancePrincipalAuth::new(Some("us-ashburn-1".to_string())));
```

For workloads running in Oracle Kubernetes Engine. Exchanges the pod's K8s service account token for an OCI resource principal session token via the in-cluster proxymux service.
Prerequisites: OKE cluster with Workload Identity enabled, `KUBERNETES_SERVICE_HOST` set, service account token mounted at the standard path.
```rust
use oci_sdk::auth::OkeWorkloadIdentityAuth;

// Auto-configure from environment
let auth = OkeWorkloadIdentityAuth::new()?;

// Or use the builder for explicit config
let auth = OkeWorkloadIdentityAuth::builder()
    .region("us-ashburn-1".to_string())
    .sa_token_path("/var/run/secrets/kubernetes.io/serviceaccount/token".to_string())
    .build()?;
```

```rust
use oci_sdk::objectstorage::{ObjectStorageClient, ListObjectsRequest, RestoreObjectsDetails};
use oci_sdk::auth::ConfigFileAuth;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let auth = ConfigFileAuth::from_file(None, None)?;
    let client = ObjectStorageClient::new(auth, "my-namespace", None).await?;

    // List objects with prefix
    let request = ListObjectsRequest {
        prefix: Some("archive/2024/"),
        fields: Some("name,size,storageTier,archivalState"),
        limit: Some(100),
        ..Default::default()
    };
    let resp = client.list_objects("my-bucket", &request).await?;
    for obj in &resp.objects {
        println!("{} ({:?})", obj.name, obj.storage_tier);
    }

    // Pagination
    if let Some(next) = &resp.next_start_with {
        let next_page = ListObjectsRequest {
            prefix: Some("archive/2024/"),
            start: Some(next),
            ..Default::default()
        };
        let _page2 = client.list_objects("my-bucket", &next_page).await?;
    }

    // Restore from archive (1–240 hours)
    let details = RestoreObjectsDetails::new("archive/2024/data.tar.gz")
        .hours(48);
    client.restore_objects("my-bucket", &details).await?;

    // Check restore status
    let meta = client.head_object("my-bucket", "archive/2024/data.tar.gz").await?;
    println!("Archival state: {:?}", meta.archival_state);

    // Streaming download
    use futures_util::StreamExt;
    use tokio::io::AsyncWriteExt;

    let resp = client.get_object("my-bucket", "archive/2024/data.tar.gz").await?;
    let mut file = tokio::fs::File::create("data.tar.gz").await?;
    let mut stream = resp.stream;
    while let Some(chunk) = stream.next().await {
        file.write_all(&chunk?).await?;
    }

    // Upload
    let data = tokio::fs::read("local-file.bin").await?;
    client.put_object("my-bucket", "path/to/object", data.into(), None).await?;

    Ok(())
}
```

```rust
use oci_sdk::objectstorage::ObjectStorageError;

match client.list_objects("bucket", &request).await {
    Ok(resp) => { /* handle */ }
    Err(ObjectStorageError::RateLimited { retry_after_secs, .. }) => {
        eprintln!("Rate limited, retry after {:?}s", retry_after_secs);
    }
    Err(ObjectStorageError::Api { status, code, message, .. }) => {
        eprintln!("API error {}: {} - {}", status, code, message);
    }
    Err(e) => return Err(e.into()),
}
```

`ObjectStorageError` variants:
- `RateLimited { opc_request_id, retry_after_secs }`: HTTP 429
- `Api { status, code, message, opc_request_id }`: any other non-2xx
- `Auth(AuthError)`: signing or credential failure
- `Http(reqwest::Error)`: transport-level error
- `Serialization(serde_json::Error)`: JSON parse failure
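A common way to consume these variants is a retry policy that honors `retry_after_secs` on 429s and falls back to exponential backoff for 5xx responses. A sketch with local stand-in types (not the crate's own enum), kept synchronous for clarity:

```rust
use std::time::Duration;

// Stand-in mirroring the ObjectStorageError shapes above; the real
// variants carry more fields, omitted here.
enum StorageError {
    RateLimited { retry_after_secs: Option<u64> },
    Api { status: u16 },
}

/// Decide whether and how long to wait before retrying `attempt`
/// (0-based), within a budget of `max_attempts`.
fn retry_delay(err: &StorageError, attempt: u32, max_attempts: u32) -> Option<Duration> {
    if attempt + 1 >= max_attempts {
        return None; // out of budget
    }
    match err {
        // Honor the server's Retry-After when present, otherwise
        // exponential backoff: 1s, 2s, 4s, ... capped at 64s.
        StorageError::RateLimited { retry_after_secs } => Some(Duration::from_secs(
            retry_after_secs.unwrap_or(1u64 << attempt.min(6)),
        )),
        // Retry server-side failures; other 4xx are not retried.
        StorageError::Api { status } if *status >= 500 => {
            Some(Duration::from_secs(1u64 << attempt.min(6)))
        }
        _ => None,
    }
}

fn main() {
    let rl = StorageError::RateLimited { retry_after_secs: Some(7) };
    assert_eq!(retry_delay(&rl, 0, 5), Some(Duration::from_secs(7)));

    let s503 = StorageError::Api { status: 503 };
    assert_eq!(retry_delay(&s503, 2, 5), Some(Duration::from_secs(4)));

    let s404 = StorageError::Api { status: 404 };
    assert_eq!(retry_delay(&s404, 0, 5), None);

    println!("ok");
}
```

In async code the returned delay would feed `tokio::time::sleep` around the client call; adding jitter is advisable when many workers share one rate limit.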
Build with the `cli` feature:

```shell
cargo build --release --features cli
```

```shell
# List objects
oci-objectstorage -n <namespace> -b <bucket> list-objects --prefix some/path/ --limit 100

# JSON output
oci-objectstorage -n <namespace> -b <bucket> list-objects --json

# Head object (metadata + archival state)
oci-objectstorage -n <namespace> -b <bucket> head-object --name path/to/object

# Restore from archive
oci-objectstorage -n <namespace> -b <bucket> restore-object --name path/to/object --hours 48

# Download (streaming)
oci-objectstorage -n <namespace> -b <bucket> get-object --name path/to/object --output ./local-file

# Upload
oci-objectstorage -n <namespace> -b <bucket> put-object --name path/to/object --file ./local-file

# Cross-region
oci-objectstorage -n <namespace> -b <bucket> -r us-phoenix-1 list-objects

# Instance Principal auth
oci-objectstorage -a instance-principal -n <namespace> -b <bucket> list-objects
```

```shell
# List runs in a compartment
oci-dataflow -r us-ashburn-1 list-runs -c <compartment-id>

# Get run details
oci-dataflow -r us-ashburn-1 get-run <run-id>
```

The default client is tuned for high throughput:

- `pool_max_idle_per_host`: 64
- `pool_idle_timeout`: 90s
- `tcp_keepalive`: 60s
For custom tuning, use `with_client`:

```rust
use std::time::Duration;

let http_client = reqwest::Client::builder()
    .pool_max_idle_per_host(128)
    .pool_idle_timeout(Duration::from_secs(120))
    .tcp_keepalive(Duration::from_secs(60))
    .build()?;

let client = ObjectStorageClient::with_client(auth, "namespace", None, http_client).await?;
```

Note: `put_object` requires the full object body in memory to compute the `x-content-sha256` header required by OCI. For objects larger than ~100 MB, multipart upload is recommended (not yet implemented).
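For reference, a multipart upload would split the body into fixed-size parts so that only one part needs to be hashed and held in memory at a time. A std-only sketch of the part-range arithmetic (illustrative only; part-size limits are service-defined and not checked here):

```rust
/// Compute (offset, len) part ranges for a multipart upload, given a
/// total object size and a fixed part size. Every part except possibly
/// the last has exactly `part_size` bytes.
fn part_ranges(total: u64, part_size: u64) -> Vec<(u64, u64)> {
    assert!(part_size > 0);
    let mut parts = Vec::new();
    let mut offset = 0;
    while offset < total {
        let len = part_size.min(total - offset);
        parts.push((offset, len));
        offset += len;
    }
    parts
}

fn main() {
    // A 250 MiB object in 100 MiB parts -> 100 + 100 + 50.
    const MIB: u64 = 1024 * 1024;
    let parts = part_ranges(250 * MIB, 100 * MIB);
    assert_eq!(parts.len(), 3);
    assert_eq!(parts[2], (200 * MIB, 50 * MIB));
    println!("{} parts", parts.len());
}
```

Each range would then be read with a seek + bounded read, so peak memory stays at one part size regardless of object size.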
Use oci-emulator for local development and testing:

```shell
docker run -d --name oci-emulator -p 12000:12000 cameritelabs/oci-emulator:latest
```

Pass the emulator URL as the `service_endpoint`; the region parameter is ignored when a custom endpoint is set. Construct the client and override the endpoint directly, or use the emulator-aware constructors if your client supports them.
```shell
cargo test
```

For coverage with tarpaulin:

```shell
cargo install cargo-tarpaulin
cargo tarpaulin --out Lcov
```

Coverage is uploaded to codecov. View locally in VS Code with Coverage Gutters.
MIT