In-memory implementations of seven AWS services written in Rust: Amazon S3, Amazon SNS, Amazon SQS, Amazon DynamoDB, AWS Lambda, Amazon Data Firehose, and Amazon MemoryDB. All services run from a single binary, each listening on its own port; they are compatible with the AWS CLI and SDKs and require no external dependencies.
All state is held in memory — there is no disk persistence. Restarting the server clears all data.
To build and test you need:

- Rust (1.70+)
- AWS CLI v2 (for integration tests)
```bash
cargo build --release
./target/release/aws-inmemory-services
```

All services start on their default ports:

| Service | Default Port |
|---|---|
| S3 | 9000 |
| SNS | 9911 |
| SQS | 9324 |
| DynamoDB | 8000 |
| Lambda | 9001 |
| Firehose | 4573 |
| MemoryDB | 6379 |
| Flag | Default | Description |
|---|---|---|
| --s3-port | 9000 | Port for the S3 service |
| --sns-port | 9911 | Port for the SNS service |
| --sqs-port | 9324 | Port for the SQS service |
| --dynamodb-port | 8000 | Port for the DynamoDB service |
| --lambda-port | 9001 | Port for the Lambda service |
| --firehose-port | 4573 | Port for the Firehose service |
| --memorydb-port | 6379 | Port for the MemoryDB service |
| --region | us-east-1 | AWS region used in ARNs |
| --account-id | 000000000000 | AWS account ID used in ARNs |
```bash
./target/release/aws-inmemory-services --region eu-west-1 --account-id 123456789012
```

Point the AWS CLI at the local endpoint with `--endpoint-url` and skip signature authentication:

```bash
# Create a bucket
aws s3api create-bucket \
--bucket my-bucket \
--endpoint-url http://localhost:9000 \
--no-sign-request
# Upload a file
aws s3api put-object \
--bucket my-bucket \
--key hello.txt \
--body hello.txt \
--endpoint-url http://localhost:9000 \
--no-sign-request
# Download a file
aws s3api get-object \
--bucket my-bucket \
--key hello.txt \
output.txt \
--endpoint-url http://localhost:9000 \
--no-sign-request
# List objects
aws s3api list-objects-v2 \
--bucket my-bucket \
--endpoint-url http://localhost:9000 \
--no-sign-request
# High-level s3 commands also work
aws s3 cp localfile.txt s3://my-bucket/key \
--endpoint-url http://localhost:9000 \
--no-sign-request
```

```js
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const client = new S3Client({
endpoint: "http://localhost:9000",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
forcePathStyle: true,
});
await client.send(new PutObjectCommand({
Bucket: "my-bucket",
Key: "hello.txt",
Body: "Hello, world!",
}));
```

S3 uses a REST API — the HTTP method and path determine the operation:

- Bucket operations: `GET/PUT/DELETE/HEAD /{bucket}`
- Object operations: `GET/PUT/DELETE/HEAD /{bucket}/{key}`
- Query parameters distinguish sub-resources: `?versioning`, `?tagging`, `?uploads`, `?location`, `?list-type=2`, `?delete`, `?uploadId=X`, `?partNumber=N`
- Responses: XML (bucket/list operations) or raw bytes (object data)
- ETags: MD5 hex digest in double quotes, e.g. `"d41d8cd98f00b204e9800998ecf8427e"`
| Operation | Method | Path | Query |
|---|---|---|---|
| CreateBucket | PUT | /{bucket} | |
| DeleteBucket | DELETE | /{bucket} | |
| HeadBucket | HEAD | /{bucket} | |
| ListBuckets | GET | / | |
| GetBucketLocation | GET | /{bucket} | ?location |
| GetBucketVersioning | GET | /{bucket} | ?versioning |
| PutBucketVersioning | PUT | /{bucket} | ?versioning |
| GetBucketTagging | GET | /{bucket} | ?tagging |
| PutBucketTagging | PUT | /{bucket} | ?tagging |
| DeleteBucketTagging | DELETE | /{bucket} | ?tagging |
| Operation | Method | Path | Query / Header |
|---|---|---|---|
| PutObject | PUT | /{bucket}/{key} | |
| GetObject | GET | /{bucket}/{key} | Range header for partial reads |
| DeleteObject | DELETE | /{bucket}/{key} | |
| HeadObject | HEAD | /{bucket}/{key} | |
| CopyObject | PUT | /{bucket}/{key} | x-amz-copy-source header |
| ListObjectsV2 | GET | /{bucket} | ?list-type=2&prefix=&delimiter=&max-keys=&continuation-token= |
| DeleteObjects | POST | /{bucket} | ?delete (XML body) |
| GetObjectTagging | GET | /{bucket}/{key} | ?tagging |
| PutObjectTagging | PUT | /{bucket}/{key} | ?tagging |
| DeleteObjectTagging | DELETE | /{bucket}/{key} | ?tagging |
| Operation | Method | Path | Query |
|---|---|---|---|
| CreateMultipartUpload | POST | /{bucket}/{key} | ?uploads |
| UploadPart | PUT | /{bucket}/{key} | ?partNumber=N&uploadId=X |
| CompleteMultipartUpload | POST | /{bucket}/{key} | ?uploadId=X |
| AbortMultipartUpload | DELETE | /{bucket}/{key} | ?uploadId=X |
| ListMultipartUploads | GET | /{bucket} | ?uploads |
| ListParts | GET | /{bucket}/{key} | ?uploadId=X |
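
The multipart routes above compose into the usual three-step flow; a CLI sketch, assuming an existing bucket and treating `$UPLOAD_ID` and the ETag as placeholders for values returned by the previous steps:

```bash
# 1. Start the upload (the response contains an UploadId)
aws s3api create-multipart-upload \
  --bucket my-bucket --key big.bin \
  --endpoint-url http://localhost:9000 --no-sign-request

# 2. Upload a part under that UploadId (the response contains an ETag)
aws s3api upload-part \
  --bucket my-bucket --key big.bin \
  --part-number 1 --body part1.bin --upload-id "$UPLOAD_ID" \
  --endpoint-url http://localhost:9000 --no-sign-request

# 3. Complete the upload, listing each part number with the ETag from step 2
aws s3api complete-multipart-upload \
  --bucket my-bucket --key big.bin --upload-id "$UPLOAD_ID" \
  --multipart-upload '{"Parts":[{"PartNumber":1,"ETag":"ETAG-FROM-STEP-2"}]}' \
  --endpoint-url http://localhost:9000 --no-sign-request
```
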

```xml
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
</Error>
```

| HTTP Status | Code | Description |
|---|---|---|
| 200 | Success | |
| 206 | Partial Content (range request) | |
| 400 | MalformedXML | Request body is not valid XML |
| 400 | InvalidArgument | Invalid parameter |
| 404 | NoSuchBucket | Bucket does not exist |
| 404 | NoSuchKey | Object does not exist |
| 404 | NoSuchUpload | Multipart upload does not exist |
| 409 | BucketAlreadyOwnedByYou | Bucket already exists |
| 409 | BucketNotEmpty | Bucket has objects |

```bash
# Create a topic
aws sns create-topic \
--name my-topic \
--endpoint-url http://localhost:9911 \
--no-sign-request
# Create a FIFO topic
aws sns create-topic \
--name my-fifo.fifo \
--attributes FifoTopic=true,ContentBasedDeduplication=true \
--endpoint-url http://localhost:9911 \
--no-sign-request
# Subscribe
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:my-topic \
--protocol email \
--notification-endpoint test@example.com \
--endpoint-url http://localhost:9911 \
--no-sign-request
# Publish a message
aws sns publish \
--topic-arn arn:aws:sns:us-east-1:000000000000:my-topic \
--message "Hello from SNS!" \
--endpoint-url http://localhost:9911 \
--no-sign-request
# List topics
aws sns list-topics \
--endpoint-url http://localhost:9911 \
--no-sign-request
# Tag a topic
aws sns tag-resource \
--resource-arn arn:aws:sns:us-east-1:000000000000:my-topic \
--tags Key=Environment,Value=test \
--endpoint-url http://localhost:9911 \
--no-sign-request
# Delete a topic
aws sns delete-topic \
--topic-arn arn:aws:sns:us-east-1:000000000000:my-topic \
--endpoint-url http://localhost:9911 \
--no-sign-request
```

```js
import { SNSClient, CreateTopicCommand, PublishCommand } from "@aws-sdk/client-sns";
const client = new SNSClient({
endpoint: "http://localhost:9911",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
const { TopicArn } = await client.send(new CreateTopicCommand({
Name: "my-topic",
}));
await client.send(new PublishCommand({
TopicArn,
Message: "Hello, world!",
}));
```

SNS uses the AWS Query protocol over HTTP POST:

- Content-Type: `application/x-www-form-urlencoded`
- Action routing: `Action=<ActionName>` form parameter
- Request body: URL-encoded form parameters
- Response body: XML with `xmlns="http://sns.amazonaws.com/doc/2010-03-31/"`
- Endpoint: `http://localhost:<port>/`
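
The Query protocol is easy to poke at with curl; a sketch against the default port (the ARN assumes the default region and account ID):

```bash
# CreateTopic as a form-encoded POST (curl sends application/x-www-form-urlencoded by default)
curl -X POST http://localhost:9911/ \
  --data-urlencode "Action=CreateTopic" \
  --data-urlencode "Name=my-topic"

# Publish to the topic created above
curl -X POST http://localhost:9911/ \
  --data-urlencode "Action=Publish" \
  --data-urlencode "TopicArn=arn:aws:sns:us-east-1:000000000000:my-topic" \
  --data-urlencode "Message=Hello from curl"
```
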
| Operation | Form Parameters |
|---|---|
| CreateTopic | Name, Attributes.entry.N.key/value, Tags.member.N.Key/Value |
| DeleteTopic | TopicArn |
| ListTopics | NextToken (optional) |
| GetTopicAttributes | TopicArn |
| SetTopicAttributes | TopicArn, AttributeName, AttributeValue |
| Operation | Form Parameters |
|---|---|
| Subscribe | TopicArn, Protocol, Endpoint, ReturnSubscriptionArn |
| Unsubscribe | SubscriptionArn |
| ConfirmSubscription | TopicArn, Token |
| ListSubscriptions | NextToken (optional) |
| ListSubscriptionsByTopic | TopicArn, NextToken (optional) |
| GetSubscriptionAttributes | SubscriptionArn |
| SetSubscriptionAttributes | SubscriptionArn, AttributeName, AttributeValue |
| Operation | Form Parameters |
|---|---|
| Publish | TopicArn, Message, Subject, MessageGroupId, MessageDeduplicationId |
| PublishBatch | TopicArn, PublishBatchRequestEntries.member.N.Id/Message/... |
| Operation | Form Parameters |
|---|---|
| TagResource | ResourceArn, Tags.member.N.Key/Value |
| UntagResource | ResourceArn, TagKeys.member.N |
| ListTagsForResource | ResourceArn |
| Attribute | Description |
|---|---|
| TopicArn | The ARN of the topic (read-only) |
| Owner | Account ID of the topic owner (read-only) |
| DisplayName | Display name for the topic |
| Policy | Access policy |
| DeliveryPolicy | Delivery retry policy |
| KmsMasterKeyId | KMS key for encryption |
| FifoTopic | Whether the topic is FIFO (set at creation) |
| ContentBasedDeduplication | Deduplication using message body hash (FIFO only) |
| SubscriptionsConfirmed | Count of confirmed subscriptions (read-only) |
| SubscriptionsPending | Count of pending subscriptions (read-only) |
| Attribute | Description |
|---|---|
| SubscriptionArn | ARN of the subscription (read-only) |
| TopicArn | ARN of the topic (read-only) |
| Protocol | Subscription protocol (read-only) |
| Endpoint | Subscription endpoint (read-only) |
| Owner | Account ID (read-only) |
| RawMessageDelivery | Deliver raw message without JSON wrapping |
| FilterPolicy | JSON filter policy for message filtering |
| FilterPolicyScope | Scope of filter policy (MessageAttributes or MessageBody) |
| RedrivePolicy | Dead-letter queue configuration |
| PendingConfirmation | Whether the subscription is pending confirmation (read-only) |
| ConfirmationWasAuthenticated | Whether the confirmation was authenticated (read-only) |
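
Mutable attributes are set one at a time with `set-subscription-attributes`; for example, attaching a filter policy. A sketch in which `$SUBSCRIPTION_ARN` stands in for the ARN returned by `subscribe`:

```bash
aws sns set-subscription-attributes \
  --subscription-arn "$SUBSCRIPTION_ARN" \
  --attribute-name FilterPolicy \
  --attribute-value '{"eventType":["order_placed"]}' \
  --endpoint-url http://localhost:9911 \
  --no-sign-request
```
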

```xml
<ErrorResponse xmlns="http://sns.amazonaws.com/doc/2010-03-31/">
<Error>
<Type>Sender</Type>
<Code>NotFound</Code>
<Message>Topic does not exist</Message>
</Error>
<RequestId>uuid</RequestId>
</ErrorResponse>
```

| HTTP Status | Code | Description |
|---|---|---|
| 200 | Success | |
| 400 | InvalidParameter | Invalid or missing parameter |
| 400 | InvalidAction | Unknown action |
| 404 | NotFound | Resource does not exist |

```bash
# Create a queue
aws sqs create-queue \
--queue-name my-queue \
--endpoint-url http://localhost:9324 \
--no-sign-request
# Send a message
aws sqs send-message \
--queue-url http://localhost:9324/000000000000/my-queue \
--message-body "Hello, world!" \
--endpoint-url http://localhost:9324 \
--no-sign-request
# Receive messages
aws sqs receive-message \
--queue-url http://localhost:9324/000000000000/my-queue \
--endpoint-url http://localhost:9324 \
--no-sign-request
# Create a FIFO queue
aws sqs create-queue \
--queue-name my-fifo.fifo \
--attributes FifoQueue=true,ContentBasedDeduplication=true \
--endpoint-url http://localhost:9324 \
--no-sign-request
# Delete a queue
aws sqs delete-queue \
--queue-url http://localhost:9324/000000000000/my-queue \
--endpoint-url http://localhost:9324 \
--no-sign-request
```

```js
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
const client = new SQSClient({
endpoint: "http://localhost:9324",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
await client.send(new SendMessageCommand({
QueueUrl: "http://localhost:9324/000000000000/my-queue",
MessageBody: "Hello, world!",
}));
```

SQS uses the AWS JSON 1.0 protocol over HTTP POST:

- Content-Type: `application/x-amz-json-1.0`
- Action routing: `X-Amz-Target: AmazonSQS.<ActionName>` header
- Request/response body: JSON
- Endpoint: `http://localhost:<port>/`
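
A raw request needs only the target header and a JSON body; a curl sketch against the default port:

```bash
# CreateQueue via the JSON 1.0 protocol
curl -X POST http://localhost:9324/ \
  -H "Content-Type: application/x-amz-json-1.0" \
  -H "X-Amz-Target: AmazonSQS.CreateQueue" \
  -d '{"QueueName": "my-queue"}'

# SendMessage to the queue URL returned above
curl -X POST http://localhost:9324/ \
  -H "Content-Type: application/x-amz-json-1.0" \
  -H "X-Amz-Target: AmazonSQS.SendMessage" \
  -d '{"QueueUrl": "http://localhost:9324/000000000000/my-queue", "MessageBody": "Hello"}'
```
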
| Operation | Target | Description |
|---|---|---|
| CreateQueue | AmazonSQS.CreateQueue | Create a standard or FIFO queue |
| DeleteQueue | AmazonSQS.DeleteQueue | Delete a queue and all its messages |
| GetQueueUrl | AmazonSQS.GetQueueUrl | Look up a queue URL by name |
| ListQueues | AmazonSQS.ListQueues | List queues with optional prefix filter |
| GetQueueAttributes | AmazonSQS.GetQueueAttributes | Retrieve queue attributes |
| SetQueueAttributes | AmazonSQS.SetQueueAttributes | Modify queue attributes |
| PurgeQueue | AmazonSQS.PurgeQueue | Delete all messages without deleting the queue |
| Operation | Target | Description |
|---|---|---|
| SendMessage | AmazonSQS.SendMessage | Send a single message |
| SendMessageBatch | AmazonSQS.SendMessageBatch | Send up to 10 messages |
| ReceiveMessage | AmazonSQS.ReceiveMessage | Receive messages with long polling support |
| DeleteMessage | AmazonSQS.DeleteMessage | Delete a processed message |
| DeleteMessageBatch | AmazonSQS.DeleteMessageBatch | Delete up to 10 messages |
| ChangeMessageVisibility | AmazonSQS.ChangeMessageVisibility | Extend/shorten visibility timeout |
| ChangeMessageVisibilityBatch | AmazonSQS.ChangeMessageVisibilityBatch | Change visibility for up to 10 messages |
| Operation | Target | Description |
|---|---|---|
| TagQueue | AmazonSQS.TagQueue | Add or update queue tags |
| UntagQueue | AmazonSQS.UntagQueue | Remove queue tags |
| ListQueueTags | AmazonSQS.ListQueueTags | List all tags on a queue |
| AddPermission | AmazonSQS.AddPermission | Add a permission statement |
| RemovePermission | AmazonSQS.RemovePermission | Remove a permission statement |
| Operation | Target | Description |
|---|---|---|
| ListDeadLetterSourceQueues | AmazonSQS.ListDeadLetterSourceQueues | List queues using this queue as DLQ |
| StartMessageMoveTask | AmazonSQS.StartMessageMoveTask | Move messages between queues |
| CancelMessageMoveTask | AmazonSQS.CancelMessageMoveTask | Cancel an in-progress move task |
| ListMessageMoveTasks | AmazonSQS.ListMessageMoveTasks | List move tasks for a source queue |
Standard Queues: At-least-once delivery, best-effort ordering, unlimited throughput.
FIFO Queues: Exactly-once processing within a 5-minute deduplication window, strict ordering within message groups. Queue names must end with .fifo. Require MessageGroupId on every send. Support optional ContentBasedDeduplication.
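
For example, a send to the FIFO queue created earlier must carry a message group ID; the deduplication ID is optional here because that queue uses ContentBasedDeduplication (a sketch):

```bash
aws sqs send-message \
  --queue-url http://localhost:9324/000000000000/my-fifo.fifo \
  --message-body "Hello, FIFO" \
  --message-group-id group-1 \
  --message-deduplication-id msg-001 \
  --endpoint-url http://localhost:9324 \
  --no-sign-request
```
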
| Attribute | Default | Range | Notes |
|---|---|---|---|
| VisibilityTimeout | 30s | 0–43200 | Time a received message is hidden |
| MessageRetentionPeriod | 345600s (4d) | 60–1209600 | How long messages are retained |
| DelaySeconds | 0 | 0–900 | Default delivery delay |
| MaximumMessageSize | 262144 (256KB) | 1024–262144 | Maximum message body size |
| ReceiveMessageWaitTimeSeconds | 0 | 0–20 | Default long-poll wait time |
| RedrivePolicy | none | -- | DLQ config (JSON: deadLetterTargetArn, maxReceiveCount) |
| RedriveAllowPolicy | none | -- | Controls which queues can use this as DLQ |
| FifoQueue | false | -- | Immutable after creation |
| ContentBasedDeduplication | false | -- | FIFO only. SHA-256 body hash as dedup ID |
| SqsManagedSseEnabled | true | -- | SSE with SQS-managed keys (stored, not enforced) |
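
A dead-letter queue is wired up through `RedrivePolicy`; a sketch that assumes a second queue named `my-dlq` already exists and that the default region and account ID are in use:

```bash
aws sqs set-queue-attributes \
  --queue-url http://localhost:9324/000000000000/my-queue \
  --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:my-dlq\",\"maxReceiveCount\":\"3\"}"}' \
  --endpoint-url http://localhost:9324 \
  --no-sign-request
```
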

```json
{
"__type": "com.amazonaws.sqs#QueueDoesNotExist",
"message": "The specified queue does not exist."
}
```

| HTTP Status | Meaning |
|---|---|
| 200 | Success |
| 400 | Client error (invalid parameters, missing fields) |
| 404 | Resource not found |
| 409 | Conflict (queue exists with different attributes, purge in progress) |

```bash
# Create a table
aws dynamodb create-table \
--table-name MyTable \
--attribute-definitions AttributeName=pk,AttributeType=S \
--key-schema AttributeName=pk,KeyType=HASH \
--billing-mode PAY_PER_REQUEST \
--endpoint-url http://localhost:8000 \
--no-sign-request
# Put an item
aws dynamodb put-item \
--table-name MyTable \
--item '{"pk":{"S":"key1"},"data":{"S":"value1"}}' \
--endpoint-url http://localhost:8000 \
--no-sign-request
# Get an item
aws dynamodb get-item \
--table-name MyTable \
--key '{"pk":{"S":"key1"}}' \
--endpoint-url http://localhost:8000 \
--no-sign-request
# Query
aws dynamodb query \
--table-name MyTable \
--key-condition-expression "pk = :pk" \
--expression-attribute-values '{":pk":{"S":"key1"}}' \
--endpoint-url http://localhost:8000 \
--no-sign-request
# Scan
aws dynamodb scan \
--table-name MyTable \
--endpoint-url http://localhost:8000 \
--no-sign-request
```

```js
import { DynamoDBClient, PutItemCommand, GetItemCommand } from "@aws-sdk/client-dynamodb";
const client = new DynamoDBClient({
endpoint: "http://localhost:8000",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
await client.send(new PutItemCommand({
TableName: "MyTable",
Item: { pk: { S: "key1" }, data: { S: "value1" } },
}));
const { Item } = await client.send(new GetItemCommand({
TableName: "MyTable",
Key: { pk: { S: "key1" } },
}));
```

DynamoDB uses the AWS JSON 1.0 protocol over HTTP POST:

- Content-Type: `application/x-amz-json-1.0`
- Action routing: `X-Amz-Target: DynamoDB_20120810.<ActionName>` header
- Request/response body: JSON
- Endpoint: `http://localhost:<port>/`
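
As with SQS, the wire format is a target header plus JSON; a curl sketch against the default port, reusing the table and item from the examples above:

```bash
# ListTables
curl -X POST http://localhost:8000/ \
  -H "Content-Type: application/x-amz-json-1.0" \
  -H "X-Amz-Target: DynamoDB_20120810.ListTables" \
  -d '{}'

# GetItem
curl -X POST http://localhost:8000/ \
  -H "Content-Type: application/x-amz-json-1.0" \
  -H "X-Amz-Target: DynamoDB_20120810.GetItem" \
  -d '{"TableName": "MyTable", "Key": {"pk": {"S": "key1"}}}'
```
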
| Operation | Target | Description |
|---|---|---|
| CreateTable | DynamoDB_20120810.CreateTable | Create a table with hash key or hash+range key |
| DeleteTable | DynamoDB_20120810.DeleteTable | Delete a table and all its items |
| DescribeTable | DynamoDB_20120810.DescribeTable | Get table metadata |
| ListTables | DynamoDB_20120810.ListTables | List all tables |
| UpdateTable | DynamoDB_20120810.UpdateTable | Update billing mode or provisioned throughput |
| Operation | Target | Description |
|---|---|---|
| PutItem | DynamoDB_20120810.PutItem | Create or replace an item |
| GetItem | DynamoDB_20120810.GetItem | Retrieve an item by primary key |
| DeleteItem | DynamoDB_20120810.DeleteItem | Delete an item by primary key |
| UpdateItem | DynamoDB_20120810.UpdateItem | Update specific attributes with expressions |
| Query | DynamoDB_20120810.Query | Query items by key condition expression |
| Scan | DynamoDB_20120810.Scan | Scan all items with optional filter |
| BatchGetItem | DynamoDB_20120810.BatchGetItem | Get multiple items across tables |
| BatchWriteItem | DynamoDB_20120810.BatchWriteItem | Put or delete multiple items across tables |
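
UpdateItem is driven by an update expression; a CLI sketch that sets one attribute on the item created earlier (the expression name `#d` and value `:val` are illustrative):

```bash
aws dynamodb update-item \
  --table-name MyTable \
  --key '{"pk":{"S":"key1"}}' \
  --update-expression "SET #d = :val" \
  --expression-attribute-names '{"#d":"data"}' \
  --expression-attribute-values '{":val":{"S":"value2"}}' \
  --endpoint-url http://localhost:8000 \
  --no-sign-request
```
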
| Operation | Target | Description |
|---|---|---|
| TagResource | DynamoDB_20120810.TagResource | Add tags to a table |
| UntagResource | DynamoDB_20120810.UntagResource | Remove tags from a table |
| ListTagsOfResource | DynamoDB_20120810.ListTagsOfResource | List tags on a table |

```json
{
"__type": "com.amazonaws.dynamodb.v20120810#ResourceNotFoundException",
"message": "Requested resource not found"
}
```

| HTTP Status | Code | Description |
|---|---|---|
| 200 | Success | |
| 400 | ResourceInUseException | Table already exists |
| 400 | ValidationException | Invalid parameters |
| 400 | SerializationException | Malformed request |
| 404 | ResourceNotFoundException | Table not found |

```bash
# Create a function (with a dummy zip)
aws lambda create-function \
--function-name my-func \
--runtime python3.12 \
--role arn:aws:iam::000000000000:role/test-role \
--handler index.handler \
--zip-file fileb://function.zip \
--endpoint-url http://localhost:9001 \
--no-sign-request
# Invoke a function
aws lambda invoke \
--function-name my-func \
output.json \
--endpoint-url http://localhost:9001 \
--no-sign-request
# List functions
aws lambda list-functions \
--endpoint-url http://localhost:9001 \
--no-sign-request
# Publish a version
aws lambda publish-version \
--function-name my-func \
--endpoint-url http://localhost:9001 \
--no-sign-request
# Create an alias
aws lambda create-alias \
--function-name my-func \
--name prod \
--function-version 1 \
--endpoint-url http://localhost:9001 \
--no-sign-request
# Delete a function
aws lambda delete-function \
--function-name my-func \
--endpoint-url http://localhost:9001 \
--no-sign-request
```

```js
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";
const client = new LambdaClient({
endpoint: "http://localhost:9001",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
const response = await client.send(new InvokeCommand({
FunctionName: "my-func",
}));
```

Lambda uses a REST API with JSON — the HTTP method and path determine the operation:

- Functions: `GET/POST /2015-03-31/functions`, `GET/DELETE /2015-03-31/functions/{name}`
- Code/Config: `PUT /2015-03-31/functions/{name}/code`, `PUT /2015-03-31/functions/{name}/configuration`
- Invoke: `POST /2015-03-31/functions/{name}/invocations`
- Versions: `GET/POST /2015-03-31/functions/{name}/versions`
- Aliases: `GET/POST /2015-03-31/functions/{name}/aliases`
- Policy: `GET/POST /2015-03-31/functions/{name}/policy`
- Tags: `GET/POST/DELETE /2017-03-31/tags/{arn}`
- Event Source Mappings: `GET/POST /2015-03-31/event-source-mappings`
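
Since routing is purely method plus path, an invocation is a single POST; a curl sketch against the default port (the function must already exist, and per the Limitations section the response is a stub):

```bash
# Invoke my-func with a JSON payload
curl -X POST http://localhost:9001/2015-03-31/functions/my-func/invocations \
  -d '{"key": "value"}'

# List functions
curl http://localhost:9001/2015-03-31/functions
```
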
| Operation | Method | Path |
|---|---|---|
| CreateFunction | POST | /2015-03-31/functions |
| GetFunction | GET | /2015-03-31/functions/{name} |
| DeleteFunction | DELETE | /2015-03-31/functions/{name} |
| ListFunctions | GET | /2015-03-31/functions |
| UpdateFunctionCode | PUT | /2015-03-31/functions/{name}/code |
| UpdateFunctionConfiguration | PUT | /2015-03-31/functions/{name}/configuration |
| Operation | Method | Path |
|---|---|---|
| Invoke | POST | /2015-03-31/functions/{name}/invocations |
| Operation | Method | Path |
|---|---|---|
| PublishVersion | POST | /2015-03-31/functions/{name}/versions |
| ListVersionsByFunction | GET | /2015-03-31/functions/{name}/versions |
| CreateAlias | POST | /2015-03-31/functions/{name}/aliases |
| GetAlias | GET | /2015-03-31/functions/{name}/aliases/{alias} |
| DeleteAlias | DELETE | /2015-03-31/functions/{name}/aliases/{alias} |
| ListAliases | GET | /2015-03-31/functions/{name}/aliases |
| Operation | Method | Path |
|---|---|---|
| AddPermission | POST | /2015-03-31/functions/{name}/policy |
| RemovePermission | DELETE | /2015-03-31/functions/{name}/policy/{sid} |
| GetPolicy | GET | /2015-03-31/functions/{name}/policy |
| Operation | Method | Path |
|---|---|---|
| CreateEventSourceMapping | POST | /2015-03-31/event-source-mappings |
| DeleteEventSourceMapping | DELETE | /2015-03-31/event-source-mappings/{uuid} |
| ListEventSourceMappings | GET | /2015-03-31/event-source-mappings |
| Operation | Method | Path |
|---|---|---|
| TagResource | POST | /2017-03-31/tags/{arn} |
| UntagResource | DELETE | /2017-03-31/tags/{arn} |
| ListTags | GET | /2017-03-31/tags/{arn} |

```json
{
"Message": "Function not found: arn:aws:lambda:us-east-1:000000000000:function:my-func"
}
```

The response includes an `x-amzn-ErrorType` header (e.g. `ResourceNotFoundException`).

| HTTP Status | Error Type | Description |
|---|---|---|
| 200/202 | Success | |
| 400 | InvalidParameterValueException | Invalid parameters |
| 404 | ResourceNotFoundException | Function or resource not found |
| 409 | ResourceConflictException | Function already exists |

```bash
# Create a delivery stream
aws firehose create-delivery-stream \
--delivery-stream-name mystream \
--endpoint-url http://localhost:4573 \
--no-sign-request
# Put a record (base64-encoded data)
aws firehose put-record \
--delivery-stream-name mystream \
--record '{"Data":"SGVsbG8gV29ybGQ="}' \
--endpoint-url http://localhost:4573 \
--no-sign-request
# Put a batch of records
aws firehose put-record-batch \
--delivery-stream-name mystream \
--records '{"Data":"UmVjb3JkMQ=="}' '{"Data":"UmVjb3JkMg=="}' \
--endpoint-url http://localhost:4573 \
--no-sign-request
# Describe a delivery stream
aws firehose describe-delivery-stream \
--delivery-stream-name mystream \
--endpoint-url http://localhost:4573 \
--no-sign-request
# List delivery streams
aws firehose list-delivery-streams \
--endpoint-url http://localhost:4573 \
--no-sign-request
# Delete a delivery stream
aws firehose delete-delivery-stream \
--delivery-stream-name mystream \
--endpoint-url http://localhost:4573 \
--no-sign-request
```

```js
import { FirehoseClient, PutRecordCommand } from "@aws-sdk/client-firehose";
const client = new FirehoseClient({
endpoint: "http://localhost:4573",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
await client.send(new PutRecordCommand({
DeliveryStreamName: "mystream",
Record: { Data: Buffer.from("Hello World") },
}));
```

Firehose uses the AWS JSON 1.1 protocol over HTTP POST:

- Content-Type: `application/x-amz-json-1.1`
- Action routing: `X-Amz-Target: Firehose_20150804.<ActionName>` header
- Request/response body: JSON
- Endpoint: `http://localhost:<port>/`
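
Record data travels base64-encoded inside the JSON body; a curl sketch for PutRecord (the payload is "Hello World" base64-encoded, as in the CLI example above):

```bash
curl -X POST http://localhost:4573/ \
  -H "Content-Type: application/x-amz-json-1.1" \
  -H "X-Amz-Target: Firehose_20150804.PutRecord" \
  -d '{"DeliveryStreamName": "mystream", "Record": {"Data": "SGVsbG8gV29ybGQ="}}'
```
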
| Operation | Target | Description |
|---|---|---|
| CreateDeliveryStream | Firehose_20150804.CreateDeliveryStream | Create a delivery stream |
| DeleteDeliveryStream | Firehose_20150804.DeleteDeliveryStream | Delete a delivery stream |
| DescribeDeliveryStream | Firehose_20150804.DescribeDeliveryStream | Get stream metadata and status |
| ListDeliveryStreams | Firehose_20150804.ListDeliveryStreams | List all delivery streams |
| UpdateDestination | Firehose_20150804.UpdateDestination | Update stream destination config |
| Operation | Target | Description |
|---|---|---|
| PutRecord | Firehose_20150804.PutRecord | Put a single record |
| PutRecordBatch | Firehose_20150804.PutRecordBatch | Put multiple records (up to 500) |
| Operation | Target | Description |
|---|---|---|
| TagDeliveryStream | Firehose_20150804.TagDeliveryStream | Add tags to a stream |
| UntagDeliveryStream | Firehose_20150804.UntagDeliveryStream | Remove tags from a stream |
| ListTagsForDeliveryStream | Firehose_20150804.ListTagsForDeliveryStream | List tags on a stream |

```json
{
"__type": "#ResourceNotFoundException",
"message": "Delivery stream mystream under account 000000000000 not found."
}
```

| HTTP Status | Code | Description |
|---|---|---|
| 200 | Success | |
| 400 | InvalidArgumentException | Invalid parameters |
| 400 | ResourceInUseException | Stream already exists |
| 404 | ResourceNotFoundException | Stream not found |

```bash
# Create a user
aws memorydb create-user \
--user-name myuser \
--access-string "on ~* +@all" \
--authentication-mode Type=no-password \
--endpoint-url http://localhost:6379 \
--no-sign-request
# Create an ACL
aws memorydb create-acl \
--acl-name myacl \
--user-names myuser \
--endpoint-url http://localhost:6379 \
--no-sign-request
# Create a cluster
aws memorydb create-cluster \
--cluster-name mycluster \
--node-type db.t4g.small \
--acl-name myacl \
--endpoint-url http://localhost:6379 \
--no-sign-request
# Describe clusters
aws memorydb describe-clusters \
--endpoint-url http://localhost:6379 \
--no-sign-request
# Create a snapshot
aws memorydb create-snapshot \
--cluster-name mycluster \
--snapshot-name mysnap \
--endpoint-url http://localhost:6379 \
--no-sign-request
# Delete a cluster
aws memorydb delete-cluster \
--cluster-name mycluster \
--endpoint-url http://localhost:6379 \
--no-sign-request
```

```js
import { MemoryDBClient, CreateClusterCommand } from "@aws-sdk/client-memorydb";
const client = new MemoryDBClient({
endpoint: "http://localhost:6379",
region: "us-east-1",
credentials: { accessKeyId: "test", secretAccessKey: "test" },
});
await client.send(new CreateClusterCommand({
ClusterName: "mycluster",
NodeType: "db.t4g.small",
ACLName: "myacl",
}));
```

MemoryDB uses the AWS JSON 1.1 protocol over HTTP POST:

- Content-Type: `application/x-amz-json-1.1`
- Action routing: `X-Amz-Target: AmazonMemoryDB.<ActionName>` header
- Request/response body: JSON
- Endpoint: `http://localhost:<port>/`
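
The control-plane calls follow the same target-header pattern; a curl sketch against the default port (an empty request body is assumed to list all clusters):

```bash
# DescribeClusters
curl -X POST http://localhost:6379/ \
  -H "Content-Type: application/x-amz-json-1.1" \
  -H "X-Amz-Target: AmazonMemoryDB.DescribeClusters" \
  -d '{}'
```
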
| Operation | Target | Description |
|---|---|---|
| CreateCluster | AmazonMemoryDB.CreateCluster | Create a cluster |
| DeleteCluster | AmazonMemoryDB.DeleteCluster | Delete a cluster |
| DescribeClusters | AmazonMemoryDB.DescribeClusters | Describe one or all clusters |
| UpdateCluster | AmazonMemoryDB.UpdateCluster | Update cluster configuration |
| Operation | Target | Description |
|---|---|---|
| CreateSubnetGroup | AmazonMemoryDB.CreateSubnetGroup | Create a subnet group |
| DeleteSubnetGroup | AmazonMemoryDB.DeleteSubnetGroup | Delete a subnet group |
| DescribeSubnetGroups | AmazonMemoryDB.DescribeSubnetGroups | Describe subnet groups |
| Operation | Target | Description |
|---|---|---|
| CreateUser | AmazonMemoryDB.CreateUser | Create a user |
| DeleteUser | AmazonMemoryDB.DeleteUser | Delete a user |
| DescribeUsers | AmazonMemoryDB.DescribeUsers | Describe users |
| UpdateUser | AmazonMemoryDB.UpdateUser | Update a user's access string |
| Operation | Target | Description |
|---|---|---|
| CreateACL | AmazonMemoryDB.CreateACL | Create an access control list |
| DeleteACL | AmazonMemoryDB.DeleteACL | Delete an ACL |
| DescribeACLs | AmazonMemoryDB.DescribeACLs | Describe ACLs |
| UpdateACL | AmazonMemoryDB.UpdateACL | Add or remove users from an ACL |
| Operation | Target | Description |
|---|---|---|
| CreateSnapshot | AmazonMemoryDB.CreateSnapshot | Create a snapshot of a cluster |
| DeleteSnapshot | AmazonMemoryDB.DeleteSnapshot | Delete a snapshot |
| DescribeSnapshots | AmazonMemoryDB.DescribeSnapshots | Describe snapshots |
| Operation | Target | Description |
|---|---|---|
| TagResource | AmazonMemoryDB.TagResource | Add tags to a resource |
| UntagResource | AmazonMemoryDB.UntagResource | Remove tags from a resource |
| ListTags | AmazonMemoryDB.ListTags | List tags on a resource |

```json
{
"__type": "ClusterNotFoundFault",
"message": "Cluster mycluster not found"
}
```

| HTTP Status | Code | Description |
|---|---|---|
| 200 | Success | |
| 400 | ClusterAlreadyExistsFault | Cluster already exists |
| 400 | UserAlreadyExistsFault | User already exists |
| 400 | ACLAlreadyExistsFault | ACL already exists |
| 400 | InvalidParameterValue | Invalid parameters |
| 404 | ClusterNotFoundFault | Cluster not found |
| 404 | UserNotFoundFault | User not found |
| 404 | ACLNotFoundFault | ACL not found |
The integration test suites use the AWS CLI to exercise all API operations:

```bash
# Run S3 tests (49 assertions)
bash tests/s3_integration.sh
# Run SNS tests (42 assertions)
bash tests/sns_integration.sh
# Run SQS tests (70 assertions)
bash tests/sqs_integration.sh
# Run DynamoDB tests (30 assertions)
bash tests/dynamodb_integration.sh
# Run Lambda tests (28 assertions)
bash tests/lambda_integration.sh
# Run Firehose tests (18 assertions)
bash tests/firehose_integration.sh
# Run MemoryDB tests (29 assertions)
bash tests/memorydb_integration.sh
```

Each script builds the binary, starts the server on isolated ports, runs all test cases, and reports pass/fail counts.
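
To run every suite in sequence, a small loop over the script names above works (a sketch; it stops at the first failing suite):

```bash
for svc in s3 sns sqs dynamodb lambda firehose memorydb; do
  bash "tests/${svc}_integration.sh" || exit 1
done
```
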
This is a local development tool, not a production replacement. Key differences:
- In-memory only -- all state is lost when the server stops. No disk persistence or replication.
- No authentication -- all requests are accepted without signature verification. Use `--no-sign-request`.
- No TLS -- the server speaks plain HTTP only.
- Single-process -- no distributed behavior.
- S3 versioning -- versioning status can be toggled but version history is not maintained. Only the latest version of each object is stored.
- SNS subscriptions auto-confirm -- all subscriptions are immediately confirmed without requiring endpoint verification.
- SNS message delivery -- messages are accepted and assigned IDs but not actually delivered to endpoints. Use this service for API compatibility testing, not delivery testing.
- SQS permissions stored but not enforced -- `AddPermission`/`RemovePermission` update the queue's policy, but no access checks are performed.
- DynamoDB expressions -- basic `KeyConditionExpression`, `UpdateExpression` (SET, REMOVE), `FilterExpression`, and `ProjectionExpression` are supported. Advanced features like condition expressions with complex operators, transactions, GSIs/LSIs, and streams are not implemented.
- Lambda invocation -- `Invoke` returns a stub 200 response. Functions are not actually executed. Use this for API compatibility testing.
- Firehose delivery -- records are accepted and stored in memory but not delivered to any destination. Use this for API compatibility testing.
- MemoryDB clusters -- clusters are created with simulated metadata (endpoints, shards, nodes) but no actual Redis instances are started.
- No CloudWatch metrics -- no metrics integration.
- Encryption attributes are accepted but not applied -- KMS-related attributes are stored but data is not encrypted.
- Upload size limit -- S3 supports uploads up to 5 GB per request (axum body limit).