This repository contains the application and business logic for the CMK (Customer-Managed-Keys) layer of Key Management Service.
- CMK (Customer-Managed-Keys)
- Go v1.23.0+
- GORM
- Docker or Colima
- Docker-Compose -> Currently only used for integration test setup. TODO Remove
- Helm
- K3d
Note that not all of these programs may be required, depending on your environment.
CMK has external dependencies which require credentials. These are stored in `env/secret` and are created from `env/blueprints`.
Run the following command to generate the `env/secret` files to configure:

```shell
make create-empty-secrets
```
In order to run the full CMK workflow and correctly start the task-worker, one of the system information implementations has to be configured.
To select which plugin is used, specify `SIS_PLUGIN` in the Make target; the plugin must also be present in `values-dev.yaml`.
Event processing uses Orbital to send and process events. Orbital requires the target AMQP message brokers to be configured.
Additionally, if mTLS is used, certificate files need to be provided in the env/secret/event-processor directory.
These include a CA certificate to verify the server, a client certificate, and a private key.
We need to set up identity management to obtain information relating to identities (e.g. user groups).
To configure it, replace the values in `env/secret/identity-management/scim.json`.
To sign client data within HTTP requests, we need a private key and a public key. For local setup and testing, both can be found in the `env/secret/signing-keys` directory after running:

```shell
make generate-signing-keys
```

That key pair is used to secure the requests: the private key can be used to sign the requests for tests, and the related public key is used to verify the signatures. How to sign a header is shown in the function `Encode`.
Please also see section Debugging for details on how to debug these environments.
- Clean Namespace: Deletes all resources in the `cmk` namespace to ensure a clean environment.
- Install k3d: Checks if `k3d` is installed; if not, it is installed automatically.
- Create/Recreate Cluster: Creates or recreates a k3d cluster named `cmk`.
- Import Docker Image: Imports a Docker image into the k3d cluster's internal registry.
- Helm Release Management: Automatically installs or upgrades the Helm release.
- Namespace Creation: If the specified namespace does not exist, the command creates it automatically.
- Set up PostgreSQL database: Applies the PostgreSQL setup from the Bitnami repository.
- Import test data: Imports test data.
- Set up port forwarding: Sets up port forwarding so that the application is accessible on localhost.
```shell
make start-cmk
```

The application should be accessible on http://localhost:8080, for example http://localhost:8080/keys.
The Helm charts required for deployment are located in the ./chart directory.
Pull the Helm chart repository. The Helm charts required for deployment are located in the following repository: Update here with the cmk charts location.
Set the environment variable `CMK_HELM_CHART` to point to the `charts` directory of the helm-chart repository.
Example:

```shell
export CMK_HELM_CHART=/helm-charts/charts
```

Run:

```shell
make apply-kms-local-chart
```

Running the CMK application locally requires API requests to include signed client data headers for authentication.
Therefore, you need to generate these headers using the generate_client_headers.go utility.
A detailed guide can be found here.
If you encounter problems with Docker credentials (e.g., login or authentication
issues), you can modify the Docker configuration file to resolve them. The
credentials store used by Docker is specified in the ~/.docker/config.json file.
- Open the `~/.docker/config.json` file in a text editor.
- Locate the `credsStore` field. It should look like this:

```json
{
  "credsStore": "osxkeychain" // for macOS
}
```

The cmk application may take some time to fully start after deployment.
This is because it waits for the PostgreSQL database to become available.
If the application does not start as expected:
- Check the logs of the `cmk` application for messages about the database connection:

```shell
kubectl logs <cmk-pod-name> -n cmk
```

- If running with Colima, ensure that resources are sufficient. The following command has been deemed sufficient:

```shell
colima start --memory 4 --disk 150
```

Swagger UI allows you to visualize and interact with the API's resources. It is containerized and can be set up via:
```shell
make swagger-ui
```

This simply runs a Docker image which serves Swagger UI. It can be found at localhost:8087/swagger.
Building can be done via the following Make command:

```shell
make build
```

Running tests can be done through a Make command:

```shell
make test
```

Guidelines:
- Should test a small section of code, usually a function
- Should be idempotent and independent of other test input/outputs
- Shouldn't make calls to external services; where a dependency is needed, use mock clients
> [!NOTE]
> Currently there are tests that do not follow the guidelines mentioned above. Please fix them or create an enhancement ticket.
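The guidelines above can be sketched with a small, self-contained example: the unit under test talks only to an interface, so the external service is swapped for a mock in tests. All names here (`KeyClient`, `KeyLabel`, `mockClient`) are hypothetical and not part of the repository.

```go
package main

import (
	"errors"
	"fmt"
)

// KeyClient is the external dependency, expressed as an interface so
// tests never hit a real service.
type KeyClient interface {
	GetKey(id string) (string, error)
}

// KeyLabel is the small unit under test: a single function whose only
// side channel is the injected client.
func KeyLabel(c KeyClient, id string) (string, error) {
	k, err := c.GetKey(id)
	if err != nil {
		return "", fmt.Errorf("fetching key %q: %w", id, err)
	}
	return "key:" + k, nil
}

// mockClient replaces the external service; each test owns its own
// instance, keeping tests idempotent and independent.
type mockClient struct {
	keys map[string]string
}

func (m mockClient) GetKey(id string) (string, error) {
	k, ok := m.keys[id]
	if !ok {
		return "", errors.New("not found")
	}
	return k, nil
}

func main() {
	mock := mockClient{keys: map[string]string{"a": "rsa-2048"}}
	label, err := KeyLabel(mock, "a")
	fmt.Println(label, err)
}
```

Because the mock is plain data, the same pattern extends naturally to table-driven tests.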
To ensure consistency, testutils were created. Please use them, and enhance them if needed for your use case. Refer to the code documentation of the following functions for their usage and available options.
- `testutils.NewTestDB(tb testing.TB, cfg TestDBConfig, opts ...TestDBConfigOpt) (*multitenancy.DB, []string)`
- `testutils.NewAPIServer(tb testing.TB, db *multitenancy.DB, testCfg TestAPIServerConfig) *http.ServeMux`
- `testutils.MakeHTTPRequest(tb testing.TB, server *http.ServeMux, opt RequestOptions) *httptest.ResponseRecorder`
- `testutils.WithJSON(tb testing.TB, i any) io.Reader`
- `testutils.WithString(tb testing.TB, i any) io.Reader`
- `testutils.GetJSONBody[t any](tb testing.TB, w *httptest.ResponseRecorder)`
- `testutils.New<modelType>(m func(*model.<modelType>) *model.<modelType>)`
- `testutils.NewGRPCSuite(tb testing.TB, services ...systemsgrpc.ServiceServer)`
Running integration tests can be done through a Make command:
```shell
make integration_test
```

NOTE: Some integration tests require credentials. Refer to the Prerequisites chapter to set those up. If no credentials are provided, the tests are skipped!
Run the following command to get a list of your pods:
```shell
sudo kubectl get pod --all-namespaces
```

Then, using the relevant pod (usually of the form cmk-XXX-YYY):

```shell
sudo kubectl logs -n cmk cmk-XXX-YYY
```

This should display any logs from the cmk application.
The API clients required for CMK can be generated from the OpenAPI spec. We use oapi-codegen to generate Go code based on the OpenAPI spec.
To generate the clients, execute `make codegen` with one of the listed API flags.
CMK uses context-based logging via slogctx, injecting a logger onto the context.
On API requests, the logger is injected with default information by the logging middleware; in other scenarios, relevant information is also injected later.
- Static information can be added to all logs via values.yaml labels as documented (e.g. Target: CMK)
- Dynamic information that is repeatable in a certain context should be injected into the logger; otherwise, add it as an attribute on the specific log
Our error mapping system automatically converts internal errors to structured API responses with appropriate HTTP status codes and meaningful error messages. Each operation in our API has specific error mappings that are automatically selected based on the operation ID.
The core of our error mapping system is the ErrorMap struct which associates internal errors with standardized API responses:
```go
type ErrorMap struct {
	Error  []error              // Internal errors to match against
	Detail cmkapi.DetailedError // API response details
}
```

When an error occurs, the system:
- Finds the appropriate error mappings for that operation
- Matches the encountered error against all possible mappings
- Selects the best matching error response
- Returns a standardized error response to the client
To add new error mappings for your feature, follow these steps:
- Define Error Constants First, define your error constants in the apierrors package:
```go
var (
	ErrMyNewError = errors.New("description of the new error")
)
```

- Create Error Mappings: Add mappings to the appropriate entity's mapping slice (e.g., system, key, keyConfiguration):
```go
var system = []ErrorMap{
	// Existing mappings...
	{
		Error: []error{ErrMyNewError},
		Detail: cmkapi.DetailedError{
			Code:    "MY_NEW_ERROR_CODE",
			Message: "User-friendly error message",
			Status:  http.StatusBadRequest,
		},
	},
	// More specific mapping with multiple errors
	{
		Error: []error{ErrMyNewError, repo.ErrNotFound},
		Detail: cmkapi.DetailedError{
			Code:    "MY_NEW_ERROR_NOT_FOUND",
			Message: "Resource not found: detailed message",
			Status:  http.StatusNotFound,
		},
	},
}
```

How Errors Are Matched:
- If there is a high-priority API error on the error chain, that API error is selected
- Errors on the chain that do not appear in any mapping are ignored
- The mapping with the greatest number of matching errors is selected
- If no mapping matches, a default internal server error is returned
This allows for precise error handling when errors are wrapped or combined.
A command-line tool for managing tenants in the database.
```shell
go build -o tenant-manager-cli ./cmd/tenant-manager-cli/main.go
```

A config.yaml file containing the database configuration should be present in the same directory as the compiled binary. Example config.yaml:
```yaml
database:
  host:
    source: embedded
    value: localhost
  user:
    source: embedded
    value: postgres
  secret:
    source: embedded
    value: secret
  name: cmk
  port: "5432"
```

Usage:

```shell
./tenant-manager-cli <command> [flags]
```

Run:

```shell
./tenant-manager-cli --help
```

to see all available commands.
A Makefile target is prepared to run the CLI commands in the cluster:

```shell
make tenant-cli ARGS="<command>"
```

A command-line tool for managing asynchronous tasks. It can gather stats, list tasks, and invoke periodic tasks manually.
The tool should be run in the cluster, with task queues and task workers present. A Makefile target is prepared to run the CLI commands in the cluster:

```shell
make task-cli ARGS="<command>"
```

For example, to list all supported commands, run:

```shell
make task-cli ARGS="--help"
```

A command-line tool to trigger db-migrations. It can run schema and data migrations for public and tenant schemas.
This tool should be run as a K8s Job in the cluster, as a Helm pre-hook, to run schema migrations before other deployments.
To list all supported commands, run:
```shell
go build -o db-migrator ./cmd/db-migrator/
./db-migrator -h
```

KMS dev team 2
- 0.2
- Various bug fixes and optimizations
- See the commit changes or the release history
- 0.1
- Initial Release
This project is licensed under the [NAME HERE] License - see the LICENSE.md file for details