Have you and your colleagues been maintaining a separate bucket policy for every encrypted S3 bucket?
💡 I've devised a consistent way to enforce S3 encryption...in one bucket or thousands...in one region or many...in one account or many...with one key or many... Simply tag each S3 bucket with the ARN of a KMS key!
🔒 Software supply chain security is on everyone's mind. This solution does not require executable code or dependencies. It creates a resource control policy, which you can read before attaching. If you do not want to test it by running Lambda functions, or by installing the AWS command-line interface locally, I also explain how to test in AWS CloudShell. I've made GitHub releases immutable.
Jump to: Installation • Protection • Testing
- Enable attribute-based access control for an S3 bucket.

  Old bucket? Old code/scripts?

  Update code or scripts that write bucket tags. The permissions and the commands/API methods will change after you enable ABAC for the bucket.
  - Replace `DeleteBucketTagging` (only a method, not a permission) and `PutBucketTagging` with `UntagResource` and `TagResource`. To delete tags, list their tag keys explicitly in a call to `UntagResource`. `s3control` is the service for the new methods; `s3:` remains the service prefix in permission policies.
  - Replace `*` (if you used it) with `arn:aws:s3:::*` to write tags on buckets only. The new permissions and methods cover other resource types, but not objects in buckets, so the `*` wildcard at the end of the bucket ARN pattern will not add ambiguity. Change `aws` if your partition differs.
  - Optional: Replace `GetBucketTagging` with `ListTagsForResource`.
- Tag the S3 bucket with a KMS key ARN.

  | Bucket Tag Key | Bucket Tag Value (Sample) |
  |---|---|
  | `security-s3-require-encryption-kms-key-arn` | `arn:aws:kms:us-east-1:112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd` |

  Now, encryption with this KMS key is required whenever a new object or object version is created, or an object is overwritten.

  Existing objects or object versions are not affected.
- Optionally, specify `aws:kms`, and the same KMS key, in the bucket's default encryption configuration. Users won't have to specify the KMS key.

  For security, this solution requires a specific KMS key. S3 default encryption also allows a KMS key alias, a less secure configuration that is not compatible.
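The permission migration in step 1 can be scripted across many policy documents. Below is a minimal Python sketch; the action-name mapping follows the replacement list above, and the helper name is my own invention:

```python
# Legacy bucket-tagging permissions and their replacements, per the list
# above. DeleteBucketTagging is only a method, so it has no policy entry.
ACTION_REPLACEMENTS = {
    "s3:PutBucketTagging": ["s3:TagResource", "s3:UntagResource"],
    "s3:GetBucketTagging": ["s3:ListTagsForResource"],
}


def migrate_actions(actions):
    """Rewrite a policy statement's Action list, preserving order,
    without introducing duplicates."""
    migrated = []
    for action in actions:
        for replacement in ACTION_REPLACEMENTS.get(action, [action]):
            if replacement not in migrated:
                migrated.append(replacement)
    return migrated
```

Unrelated actions pass through unchanged, so the helper can be applied to a whole policy document's statements.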
Detailed rules...
✓ Attribute-based access control must be enabled for the S3 bucket.
Different ways to designate the KMS key...
| KMS Key Identifier | Bucket Tag Value (Sample) |
|---|---|
| KMS key full ARN | `arn:aws:kms:us-east-1:112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd` <br> `arn:aws:kms:us-east-1:112233445566:key/mrk-01ab23cd45ef67ab89cd01ef23ab45cd` * |
| KMS key partial ARN | `112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd` <br> `112233445566:key/mrk-01ab23cd45ef67ab89cd01ef23ab45cd` * |
| KMS key ID | `key/0123abcd-45ef-67ab-89cd-012345efabcd` <br> `key/mrk-01ab23cd45ef67ab89cd01ef23ab45cd` * |
\* Future-proofing recommendation: Create `mrk-` multi-region KMS keys. Use the KMS key policy to lock the primary region. Limit replica regions until you need replica keys in other regions.
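The three accepted tag value formats can be checked mechanically. Here is a sketch in Python; the regular expression and function are mine, not part of the RCP, and they only approximate the sample formats shown in the table above:

```python
import re

# Key-ID shapes from the sample values above: a standard UUID, or an
# mrk- prefix followed by 32 hex characters (multi-region keys).
_KEY_ID = r"([0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}|mrk-[0-9a-f]{32})"

# Optional prefixes: full ARN (partition:service:region:account) or
# partial ARN (account number only); the bare form starts at "key/".
_TAG_VALUE = re.compile(
    r"^(arn:[a-z-]+:kms:[a-z0-9-]+:[0-9]{12}:|[0-9]{12}:)?key/" + _KEY_ID + r"$"
)


def is_valid_tag_value(value):
    """Return True if value resembles an accepted bucket tag value."""
    return _TAG_VALUE.match(value) is not None
```

Note that alias-based identifiers never match, mirroring the "no KMS key alias" rule below.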
✗ A KMS key alias cannot be used in the bucket tag value. For security, this solution requires a specific KMS key.
✓ KMS key partial ARNs are a convenience feature of this solution, meant to simplify multi-region infrastructure-as-code templates. They are allowed only in the bucket tag value. In all other contexts, add the region to compose the KMS key full ARN.
✓ This solution requires that the KMS key be in the same AWS account as the S3 bucket, unless the KMS key's account number is in the bucket tag value.
✓ The AWS-managed aws/s3 KMS key can never be used outside its own
region and AWS account. If you put the KMS key full ARN or the KMS key ID of
the AWS-managed KMS key in the bucket tag, this solution requires that the S3
bucket, the AWS-managed KMS key, and all requesters be in the same region and
AWS account.
✓ The KMS key must be in the same region as the S3 bucket.
✓ IAM role permissions and/or the KMS key policy must allow usage of the KMS key.
✓ The S3 bucket's default encryption configuration, if set, must specify `aws:kms` and the same KMS key. For uniformity, this solution does not allow dual-layer KMS encryption (`aws:kms:dsse`). For security, this solution requires a specific KMS key and is not compatible with a KMS key alias in the default encryption configuration.
✓ The S3 Bucket Keys setting, if configured, must reference the same KMS key. Cost-saving recommendation: Set the S3 Bucket Key to reduce KMS API charges.
✓ SSE-C-encrypted objects can't be created if ABAC is enabled and the bucket is tagged. Also check that SSE-C encryption is blocked, especially if the S3 bucket was created before May 2026.
✓ After ABAC has been enabled, you can only set or change the bucket tag to a correctly-formatted value, and you must remove the bucket tag before you can disable ABAC.
Security recommendation: Default to blocking key usage if a KMS key is in an AWS account outside your organization. Consider a service control policy statement with an `aws:ResourceOrgID` condition.
Depending on the S3 bucket's default encryption configuration, users may need to specify `aws:kms` and the KMS key when creating objects.
If encryption details are needed...
| Command or API Method | Option, Parameter, or Header | Input Value |
|---|---|---|
| `aws s3 cp` | `--sse` <br> `--sse-kms-key-id` | `'aws:kms'` <br> ⇣ |
| `aws s3api put-object` | `--server-side-encryption` <br> `--ssekms-key-id` | `'aws:kms'` <br> ⇣ |
| `client("s3").put_object()` or equivalent in a different AWS SDK | `ServerSideEncryption=` <br> `SSEKMSKeyId=` | `"aws:kms"` <br> ⇣ |
| `PutObject` | `x-amz-server-side-encryption:` <br> `x-amz-server-side-encryption-aws-kms-key-id:` | `aws:kms` <br> ⇣ |
| KMS Key Identifier | Sample Input Value ⇣ |
|---|---|
| KMS key full ARN | `arn:aws:kms:us-east-1:112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd` * <br> `arn:aws:kms:us-east-1:112233445566:key/mrk-01ab23cd45ef67ab89cd01ef23ab45cd` * |
| KMS key ID | `0123abcd-45ef-67ab-89cd-012345efabcd` <br> `mrk-01ab23cd45ef67ab89cd01ef23ab45cd` |
| KMS key alias full ARN | `arn:aws:kms:us-east-1:112233445566:alias/alias_for_my_customer_managed_kms_key` <br> `arn:aws:kms:us-east-1:112233445566:alias/aws/s3` |
| KMS key alias name | `alias/alias_for_my_customer_managed_kms_key` <br> `alias/aws/s3` |
✓ The KMS key specified in the PutObject request or in the bucket's
default encryption configuration must be the KMS key designated by the
security-s3-require-encryption-kms-key-arn bucket tag value.
✓ The KMS key, the S3 bucket, and the requester must be in the same region.
✓ The requester must be in the same AWS account as the KMS key and the
S3 bucket if a KMS key ID (or a KMS alias name) with no account number is
specified in the PutObject request or in the bucket's default encryption
configuration.
✓ The AWS-managed aws/s3 KMS key can never be used outside its own
region and AWS account. If you put the KMS key full ARN or the KMS key ID of
the AWS-managed KMS key in the bucket tag, this solution requires that the S3
bucket, the AWS-managed KMS key, and the requester be in the same region and
AWS account.
* Security recommendation: Create KMS keys in a separate AWS account with no other resources. This is the only way, given the default "Enable IAM User Permissions" statement, to be certain that the KMS key policy controls all access.
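For the SDK row in the table above, the encryption parameters can be collected in one place. A minimal Python sketch, with a hypothetical helper name; the boto3 call itself is shown commented out because it needs real AWS credentials and an existing bucket:

```python
def encrypted_put_object_params(bucket, key, body, kms_key_arn):
    """Build PutObject parameters that satisfy the encryption requirement."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ServerSideEncryption": "aws:kms",  # required encryption type
        "SSEKMSKeyId": kms_key_arn,  # must match the bucket tag value
    }


# Usage (requires AWS credentials and an existing, tagged bucket):
# import boto3
# boto3.client("s3").put_object(
#     **encrypted_put_object_params(
#         "test-kms-encryption-required", "test.txt", b"Test data",
#         "arn:aws:kms:us-east-1:112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd",
#     )
# )
```

Centralizing the two encryption parameters in one helper makes it harder for individual call sites to drift from the bucket tag value.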
"AccessDenied" is the most common error when a user tries to create an object in a tagged S3 bucket.
✓ If specifying the same KMS key full ARN in the `security-s3-require-encryption-kms-key-arn` bucket tag value and in the `PutObject` request does not resolve the error, review the rules, check the IAM role's permissions (identity-based policies), and check the KMS key policy (resource-based policy).
If the user missed the "require-encryption-kms-key-arn" hint in the bucket tag key, or didn't check the bucket tag value to find the correct key, the error message tells a local administrator where to look: "explicit deny in a resource control policy", for example.
Sample error messages
- Creating objects

  The KMS key specified in the `PutObject` request or in the bucket's default encryption configuration is relevant here.

  The error for a non-existent KMS key is:

  `An error occurred (KMS.NotFoundException) when calling the PutObject operation: Key 'arn:aws:kms:us-east-1:112233445566:key/0123abcd-45ef-67ab-89cd-012345efabcd' does not exist`

  Most other error messages begin with...

  `An error occurred (AccessDenied) when calling the PutObject operation: User: arn:aws:sts::112233445566:assumed-role/AWSReservedSSO_PermSetName_0123456789abcdef/abcde is not authorized to perform:`

  ...and continue with...
  - Encryption not requested (or KMS key does not match bucket tag value)

    `s3:PutObject on resource: "arn:aws:s3:::test-kms-encryption-required/non-encrypted.txt" with an explicit deny in a resource control policy`

  - Insufficient KMS key usage permissions

    `kms:GenerateDataKey on this resource because no identity-based policy allows the kms:GenerateDataKey action`

  - Insufficient KMS key usage permissions (key is in a different AWS account)

    `kms:GenerateDataKey on this resource because the resource does not exist in this Region, no resource-based policies allow access, or a resource-based policy explicitly denies access`
- Tagging a bucket or changing the ABAC setting

  If a user tries to disable ABAC for an S3 bucket tagged with `security-s3-require-encryption-kms-key-arn`, the following error occurs:

  `An error occurred (AccessDenied) when calling the PutBucketAbac operation: User: arn:aws:sts::112233445566:assumed-role/AWSReservedSSO_PermSetName_0123456789abcdef/abcde is not authorized to perform: s3:PutBucketAbac on resource: "arn:aws:s3:::test-kms-encryption-required" with an explicit deny in a resource control policy`

  Remove the bucket tag first, then disable ABAC.

  If ABAC is enabled and a user tries to set or change the bucket tag to an incorrectly-formatted value, the error message is similar but the operation is "TagResource". Use one of the KMS key identifier formats listed in the rules, under "Different ways to designate the KMS key...".
  If the optional service control policy applies, a similar error occurs when a non-exempt user tries to add, update or delete the `security-s3-require-encryption-kms-key-arn` bucket tag (for an S3 bucket with ABAC enabled) or to enable or disable ABAC for any S3 bucket. The policy type is "service control policy" and the S3 operation is one of:

  - "TagResource"
  - "UntagResource"
  - "PutBucketAbac"

  Use an IAM role that is exempt from the SCP, or ask the local AWS administrator to temporarily detach the SCP from the AWS account.
List of potential causes of error
"AccessDenied" when a user tries to create an object in a tagged S3 bucket indicates that...
- The user...
  - tried to create a non-encrypted S3 object,
  - lacks sufficient key usage permissions for the KMS key designated by the `security-s3-require-encryption-kms-key-arn` bucket tag value, or
  - specified a different KMS key, or
- The S3 bucket's default encryption configuration specifies...
  - an inconsistent encryption type (`SSEAlgorithm` ≠ `aws:kms`) or
  - a different KMS key (in `KMSMasterKeyID`), or
- The KMS key specified by the user or by the bucket's default encryption configuration, or designated by the bucket tag value...
  - is not in the same region as the S3 bucket, or
  - is not in the same AWS account as the bucket (if a KMS key ID with no account number is specified), or
  - is not in the same AWS account as the requester (if a KMS key ID with no account number is specified), or
  - does not exist.
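The checklist above can be approximated in code when triaging support requests. This is a sketch only; the function and its message strings are mine, and real requests have more failure modes (permissions, region, account) than this covers:

```python
def likely_causes(tag_key_arn, request_key_arn=None,
                  default_algorithm=None, default_key_arn=None):
    """Return human-readable likely causes of AccessDenied on PutObject,
    comparing the bucket tag value with the request and the bucket's
    default encryption configuration."""
    causes = []
    # The request's key wins; otherwise the default configuration applies.
    effective_key = request_key_arn or default_key_arn
    if effective_key is None:
        causes.append("non-encrypted object creation was attempted")
    elif effective_key != tag_key_arn:
        causes.append("the specified KMS key differs from the bucket tag value")
    if default_algorithm not in (None, "aws:kms"):
        causes.append("the default encryption type is not aws:kms")
    return causes
```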
- Authenticate in your AWS Organizations management account. Choose an IAM role with administrative privileges. Choose the region where you manage infrastructure-as-code templates that create non-regional resources.
- Review AWS Organizations Settings. Make sure that the all features feature set is enabled.

  Review AWS Organizations Policies. Make sure that the resource control policy and service control policy types are both enabled.
- Install using CloudFormation or Terraform.
  - CloudFormation

    Easy ✓

    In the AWS Console, create a CloudFormation stack. Select "Upload a template file", then select "Choose file" and navigate to a locally-saved copy of cloudformation/aws-rcp-s3-require-encryption-kms.yaml [right-click to save as...].

    On the next page, set:

    - Stack name: `S3RequireEncryptionKms`
    - RCP root IDs, OU IDs, and/or AWS account ID numbers (`RcpTargetIds`): Enter the number of the account or the `ou-` ID of the organizational unit that you use for testing resource control policies.
  - Terraform

    Check that you have at least:

    Add the following child module to your existing root module:

    ```hcl
    module "s3_require_encryption" {
      source = "git::https://github.com/sqlxpert/aws-rcp-s3-require-encryption-kms.git//terraform?ref=v1.0.0"
      # Reference a specific version from
      # github.com/sqlxpert/aws-rcp-s3-require-encryption-kms/releases
      # Check that the release is immutable!

      rcp_target_ids = ["112233445566", "ou-abcd-efghijkl"]
    }
    ```

    Populate the `rcp_target_ids` list with a string for the number of the account or the `ou-` ID of the organizational unit that you use for testing resource control policies.

    Have Terraform download the module's source code. Review the plan before typing `yes` to allow Terraform to proceed with applying the changes.

    ```shell
    terraform init
    terraform apply
    ```
- Test the RCP as explained below.
- Add other AWS account numbers, `ou-` organizational unit IDs, or the `r-` root ID to apply the RCP broadly.
This project is all about scale. It's about getting a policy right once, then generalizing it across an entire organization. Successfully scaling our work as infrastructure engineers also requires transferring knowledge and control to our "customers" -- developers, data scientists, machine learning engineers, and so on. Now you can delegate the power to require encryption in S3 buckets, while keeping the mechanism consistent.
Instead of policing S3 encryption-related settings in disparate Terraform modules or CloudFormation stacks that your colleagues adopt, or having to help your colleagues write encryption statements for one S3 bucket policy after another, you can now offer them a universal solution. Choose a KMS key and tag a bucket! Tag an existing bucket with the ARN of the KMS key already in use, and delete "one-off" statements from the bucket policy!
If you decide to delegate, you can choose different levels of authority for different organizational units, and for different IAM roles.
About the optional service control policy...
I provide an optional service control policy that you can apply to
organizational units to prevent non-exempt IAM roles from enabling or disabling
ABAC for any S3 bucket. For buckets with ABAC enabled, the policy also prevents
non-exempt roles from adding/changing/removing the
security-s3-require-encryption-kms-key-arn bucket tag. The lack of such a
control undermines the security of most real-world ABAC applications.
Test the SCP before applying it, because it generally reduces existing S3 permissions. Human users or automated processes might rely on those permissions.
You will need at least one SCP-exempt role in every account, to manage S3
buckets. I recommend
IAM Identity Center permission sets.
You can customize `ScpPrincipalCondition` / `scp_principal_condition` to reference permission set roles.
SCPs do not affect roles or other IAM principals in the AWS Organizations management account.
The included SCP offers two-way protection: non-exempt roles can neither remove restrictions from S3 buckets nor place new restrictions on them. For one-way protection, that is, allowing non-exempt roles to enroll buckets but not to disenroll them, you could write an SCP that:
- does not deny use of `s3:TagResource` to add the `security-s3-require-encryption-kms-key-arn` bucket tag,
- does deny use of `s3:TagResource` to change the tag's value,
- still does deny use of `s3:UntagResource` to remove the tag, and
- does not deny `s3:PutBucketAbac`.
On the surface, it seems that this would allow enabling and disabling attribute-based access control. (ABAC is significant because it makes S3 bucket tags effective. When ABAC is disabled, S3 bucket tag IAM condition keys are not available.) But if the bucket tag can't be removed, ABAC can't be disabled, thanks to the RCP!
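The tag-removal denial at the heart of such a one-way SCP might look something like the statement below. This is my own sketch, not the SCP shipped with this project; verify the action and condition keys against current AWS documentation before relying on it:

```json
{
  "Sid": "DenyRemovingEncryptionTag",
  "Effect": "Deny",
  "Action": "s3:UntagResource",
  "Resource": "arn:aws:s3:::*",
  "Condition": {
    "ForAnyValue:StringEquals": {
      "aws:TagKeys": ["security-s3-require-encryption-kms-key-arn"]
    }
  }
}
```

The `ForAnyValue:StringEquals` condition on `aws:TagKeys` matches any untag request whose key list includes the protected tag key, while leaving other tag operations alone.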
You can learn more about my S3 bucket tag RCP design pattern in the ReadMe for the sister project, github.com/sqlxpert/aws-rcp-s3-require-intelligent-tiering .
Although automated testing is the only practical way to cover the many cases that the RCP was designed to handle, I also recommend that you try the manual test commands. Manual testing is a good way to learn about modern (2025 and 2026) S3 features like attribute-based access control and account-regional bucket namespaces.
Manual test commands...
- Authenticate in your test AWS account or an account in your test organizational unit. This AWS account number must be subject to the RCP and not subject to the optional SCP. (RCPs never affect resources in your AWS Organizations management account.) Choose a role with full S3 permissions.

  - I recommend using AWS CloudShell. The AWS CLI is pre-installed, AWS keeps it up-to-date for you, and there is no need to obtain AWS credentials, whether long- or, hopefully, short-lived, on your local computer.
- Populate a test file, confirm the name of a new S3 bucket, and store the bucket name.

  ```shell
  cd /tmp
  echo 'Test data' > test.txt
  DATE=$( date --utc --iso-8601 )
  AWS_ACCOUNT=$( aws sts get-caller-identity --query 'Account' --output text )
  read -p 'S3 bucket     : ' \
    -e -i "delete-after-${DATE}-${AWS_ACCOUNT}-${AWS_REGION:?'Set this first'}-an" -r S3_BUCKET_NAME
  ```
- Create the bucket.

  ```shell
  aws s3api create-bucket \
    --create-bucket-configuration "LocationConstraint=${AWS_REGION}" \
    --bucket-namespace 'account-regional' --bucket "${S3_BUCKET_NAME}"
  ```
- Create an object encrypted with the `alias/aws/s3` AWS-managed KMS key. AWS will create the key, if necessary.

  ```shell
  aws s3 cp test.txt "s3://${S3_BUCKET_NAME}" --sse 'aws:kms'
  ```
- Confirm the bucket tag key.

  ```shell
  read -p 'Bucket tag key: ' \
    -e -i 'security-s3-require-encryption-kms-key-arn' -r S3_BUCKET_TAG_KEY
  ```
- Get the ID of the AWS-managed KMS key. (`list-aliases` returns KMS key IDs, not full ARNs. The RCP accepts a KMS key ID as a bucket tag value. This shorthand works as long as the user, the KMS key and the S3 bucket are all in the same AWS account.)

  ```shell
  KMS_KEY_ID=$(
    aws kms list-aliases \
      --query $'Aliases[?AliasName == \'alias/aws/s3\'].TargetKeyId' \
      --output 'text'
  )
  read -p 'KMS key ID    : ' -e -i "${KMS_KEY_ID}" -r KMS_KEY_ID
  ```
- Enable ABAC for the bucket.

  ```shell
  aws s3api put-bucket-abac --bucket "${S3_BUCKET_NAME}" \
    --abac-status 'Status=Enabled'
  ```
- Tag the bucket.

  ```shell
  aws s3control tag-resource \
    --account-id "${AWS_ACCOUNT}" --resource-arn "arn:aws:s3:::${S3_BUCKET_NAME}" \
    --tags "Key=${S3_BUCKET_TAG_KEY},Value=${KMS_KEY_ID}"
  ```
- Try creating an encrypted object. This should succeed. Try creating a non-encrypted object. This should produce "AccessDenied".

  ```shell
  # Encrypted object
  aws s3 cp test.txt "s3://${S3_BUCKET_NAME}" --sse 'aws:kms'

  # Non-encrypted object
  aws s3 cp test.txt "s3://${S3_BUCKET_NAME}"
  ```
- Try disabling ABAC for the bucket. This should produce "AccessDenied".

  ```shell
  aws s3api put-bucket-abac --bucket "${S3_BUCKET_NAME}" \
    --abac-status 'Status=Disabled'
  ```
- Untag the bucket.

  ```shell
  aws s3control untag-resource \
    --account-id "${AWS_ACCOUNT}" --resource-arn "arn:aws:s3:::${S3_BUCKET_NAME}" \
    --tag-keys "${S3_BUCKET_TAG_KEY}"
  ```
- Repeat Step 10 of these manual testing instructions. Now that the bucket is untagged, disabling ABAC should be possible.
- Delete the bucket.

  ```shell
  aws s3 rb "s3://${S3_BUCKET_NAME}" --force
  ```

- Continue with Step 5 of the installation instructions.
Instructions for automated RCP testing...
- Authenticate to the AWS Console in your test AWS account or an account in your test organizational unit. This AWS account number must be subject to the RCP and not subject to the optional SCP. (RCPs never affect resources in your AWS Organizations management account.) Choose a role with full S3 permissions.
- Create a CloudFormation stack from test/test-s3-encryption-tag-rcp.yaml.

  - Copy and paste the suggested stack name. Do not change it. Creating more than one stack from this template is not supported.
  - Fill in the KMS key ARN. If KMS encryption has already been used with S3 in this AWS account and region, you can view the AWS-managed `aws/s3` key and copy the KMS key ARN. (This solution does not allow a KMS key alias in the bucket tag value.)
  - Because this is for temporary use during testing, I do not provide a Terraform alternative.
  - Trouble creating the stack usually signals a local permissions problem, such as insufficient permissions attached to your IAM role, or the effect of a hidden policy such as a permissions boundary or a service control policy. For example, make sure that the AWS account number is not subject to the optional SCP, or that your role is exempt from the SCP. If you cannot resolve the problem, check with your local AWS administrator.
- Open the TestDirector Lambda function's "Test" tab and click the orange "Test" button.

  - The "Event JSON" value will be ignored.
- Open the "All events" search page for the Test CloudWatch log group, and filter for `error`. Review any errors.

  - Uncaught exceptions are unexpected, and usually signal local permission problems.
  - Resource control policy tests cover a set of 8 (if the KMS key has a different AWS account number) or 10 (if it's in the same account) numbered S3 buckets with various combinations of ABAC, bucket tags, and KMS key identifiers. Each test result is a JSON object.
- Useful CloudWatch Logs filter patterns:

  | Filter Pattern | Scope |
  |---|---|
  | `error` | All errors |
  | `timeout` | Lambda function timeouts (unlikely) |
  | `%TEST-\d+%` | All tests |
  | `"TEST-5."` | Tests on S3 bucket 5 (for example) |
  | `%TEST-\d+\.0%` | Tests that create an unencrypted object (decimal 0) |
  | `%TEST-\d+\.1%` | Tests that create an encrypted object |
  | `%TEST-\d+\.[2-9]%` | Tests that change bucket tags or the ABAC setting |
- To re-test, open the list of log streams in the Test log group, check the topmost checkbox to select all of the log streams, then click "Delete". Return to Step 3 of these Lambda testing instructions.

  - If there were timeouts, or errors changing bucket tags or the ABAC setting (decimal 2 through 9 in the test number), check the Test CloudFormation stack for drift and correct any drift before re-testing ("Stack actions" → "Detect drift", then "Stack actions" → "View drift results").
- When you are finished, delete the Test CloudFormation stack.

  - If there was an unexpected error, you might first have to delete all objects from the S3 buckets listed in the stack's "Resources" tab.
- Continue with Step 5 of the installation instructions.
Instructions for automated SCP testing...
Testing the SCP with Lambda is similar to testing the RCP with Lambda. Differences to note:
- Test in an AWS account that is subject to both the RCP and the SCP.
- Before creating the SCP test CloudFormation stack, temporarily detach the SCP from the AWS account in which the stack will be created. (Make this change in your AWS Organizations management account.)
- The SCP test CloudFormation template is test/test-scp-protect-s3-encryption-tag.yaml. Set `ScpOn` to `false`.
- If you are an advanced user, you can re-attach the SCP after creating the SCP test CloudFormation stack but before testing. For the first round of testing, exempt `TestScpProtectS3EncryptionTag-TesterLambdaFnRole` from the SCP by customizing `ScpPrincipalCondition` / `scp_principal_condition` in the main CloudFormation stack or Terraform module. (Make these changes in your AWS Organizations management account.)
- The direct AWS Console links for SCP testing are:
- Only three S3 buckets are needed to test the SCP. These correspond to buckets 1, 3 (ABAC) and 5 (ABAC + bucket tag) in the RCP test stack. Because the SCP tests are simpler, decimal ranges identify similar operations: 0 through 4 for changing bucket tags and 5 through 7 for changing the ABAC setting. Gaps between SCP test numbers are intentional.
- After testing without the SCP, you must re-test with the SCP. Update the SCP test CloudFormation stack, changing `ScpOn` to `true`. Re-attach the SCP to the AWS account containing the CloudFormation stack. (Advanced users, revert to the original `ScpPrincipalCondition` / `scp_principal_condition` value, in the main CloudFormation stack or Terraform module.) Repeat the testing process.
Please report bugs. Thank you!
| Scope | Link | Included Copy |
|---|---|---|
| Source code, and source code in documentation | GNU General Public License (GPL) 3.0 | LICENSE-CODE.md |
| Documentation, including this ReadMe file | GNU Free Documentation License (FDL) 1.3 | LICENSE-DOC.md |
Copyright Paul Marcelin
Contact: marcelin at cmu.edu (replace "at" with @)