Cloud-native image library: users sign in with ASP.NET Core Identity, upload images, optionally apply processing (e.g. grayscale), and get AI-generated descriptions (Azure Computer Vision). Images and metadata live in Azure Blob Storage and PostgreSQL; processing is driven by Event Hubs workers.
- User registration and login with local ASP.NET Core Identity
- Secure user session management and logout
- Image upload with optional processing during upload
- Built-in grayscale image processing option
- AI-generated image description
- Email notifications after upload/processing completion (Azure Logic Apps)
- Personal image library management:
  - List uploaded images
  - View image details/content
  - Delete images
  - Upload additional images anytime
| Area | Choice |
|---|---|
| UI | React (Vite) SPA |
| API & workers | .NET 10, Clean Architecture |
| Containers | Docker images for API, workers, and frontend (build/push details in devops/README.md) |
| Local stack | Docker Compose — docker-compose.yml, docker-compose.backend.yml, docker-compose-infra.yml (see devops/README.md) |
| Production compute | Azure Kubernetes Service (AKS) — workloads deployed with Helm (devops/helm/cloud-native-image-processing) |
| Data | PostgreSQL (EF Core), Azure Managed Redis, Azure Blob Storage |
| Messaging | Azure Event Hubs (image-processing, ai-description; local emulator in Compose) |
| Production edge | Azure Front Door, WAF/DDoS (typical reference design) |
| Notifications | Azure Logic Apps (email after upload/processing) |
| Observability | Azure Monitor (Log Analytics, AKS diagnostics + Container Insights, optional Application Insights for app telemetry) |
| AI | Azure Computer Vision (image description) |
Use a single guide, devops/README.md: it covers the local Docker Compose setup and the production pipeline (Terraform → scripts → Helm; secrets come from Key Vault only, never set manually on the cluster).
For UI- or backend-specific development details, see the component READMEs linked in the table above.
- Prerequisites: Docker and Docker Compose v2 (`docker compose`)
- From the repository root: `docker compose up -d --build`
- Access the application at http://localhost:5173
- Go to Azure Portal https://portal.azure.com/
- Sign up for a new account
- Start using Azure for free
- Enter information: personal information, billing information, payment method, etc. (a new credit card is required to be eligible for the free trial)
- In Azure Portal, search for `Microsoft Entra ID`
- Go to `App registrations`
- Click on the add new button
- Enter information: application name (e.g. `Github Actions`)
- Proceed to create the app registration
- Go to the app registration you created in step 2
- Go to `Certificates & secrets`
- Switch tab to `Federated credentials`
- Click on the add new button
- Select scenario `Github Actions deploying Azure resources`
- Enter information:
  - Organization: `<your github username>` (e.g. `trannamtrung1st`)
  - Repository: `cloud-native-image-processing`
  - Entity type: `Environment`
  - Github environment name: `Production`
  - Credential name: `github-actions`
- Proceed to create the federated credential
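For reference, the portal scenario above creates a federated identity credential whose JSON shape looks roughly like this (a sketch following the standard GitHub OIDC pattern; the `subject` is derived from the organization, repository, and environment entered above):

```json
{
  "name": "github-actions",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<your github username>/cloud-native-image-processing:environment:Production",
  "audiences": ["api://AzureADTokenExchange"]
}
```

Tokens issued by GitHub Actions for the `Production` environment of your fork will match this `subject`, which is what lets the workflow sign in to Azure without a client secret.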
- In Azure Portal, click on `Subscriptions`
- Click on the add new button
- Enter information: subscription name, plan type, etc.
- Proceed to create the subscription
- Go to the newly created subscription
- Go to `Access control (IAM)`
- Click on the add new `Role assignment` button
- Switch tab to `Privileged administrator roles`, choose the `Owner` role
- Click `Next` to the `Members` step
- Choose `User, group, or service principal`
- Click `Select members`, search for `Github Actions`
- Click `Next` to the `Conditions` step
- Choose `Allow user to assign all roles except privileged administrator roles Owner, UAA, RBAC (Recommended)`
- Click `Next` to the `Review + assign` step
- Proceed to assign the role
Same as step 5, but choose the `Storage Blob Data Owner` role; no conditions are needed
- Go to the newly created subscription
- Go to `Settings` -> `Resource providers`
- Search for `Microsoft.Storage`
- Click on the `Register` button
- Go to this repository
- Click on the `Fork` button
- Choose your GitHub account (keep the repository name as is)
- Proceed to fork the repository
- Go to the forked repository
- Go to `Actions`
- Click on the `I understand my workflows, go ahead and enable them` button
- Go to `Settings` -> `Environments`
- Click on the `New environment` button
- Enter environment name: `Production`
- Proceed to create the environment
- In the environment `Production` page, find the `Environment variables` section
- Click on the add new button
- Add new variables:

| Variable Name | Variable Value |
|---|---|
| TERRAFORM_USE_REMOTE_STATE | true |
| USE_TERRAFORM_OUTPUTS | true |
- In the environment `Production` page, find the `Environment secrets` section
- Click on the add new button
- Add new secrets:

| Secret Name | Secret Value |
|---|---|
| AZURE_CLIENT_ID | `Microsoft Entra ID` -> `App Registrations` -> `All applications` -> click on your app -> copy `Application (client) ID` |
| AZURE_TENANT_ID | `Microsoft Entra ID` -> copy `Tenant ID` |
| KEYVAULT_ADMIN_PRINCIPAL_IDS | `Microsoft Entra ID` -> `Users` -> your user -> copy `Object ID` |
| AZURE_SUBSCRIPTION_ID | `Subscriptions` -> go to your newly created subscription -> copy `Subscription ID` |
| TERRAFORM_TFVARS | This repository -> copy `devops/terraform/terraform.tfvars.example` file content |
| TF_STATE_CONTAINER | `cnip-terraform` |
| TF_STATE_KEY | `production/tf.state` |
| TF_STATE_RESOURCE_GROUP | `cnip-terraform` |
| TF_STATE_STORAGE_ACCOUNT | `<random-storage-account-name>`, 3-24 lowercase letters and numbers (e.g. `cnipsatrungtran`) |
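The `TF_STATE_STORAGE_ACCOUNT` naming rule (3-24 lowercase letters and numbers) can be checked, and a candidate generated, with a few lines of Python. `is_valid_storage_account_name` and `random_storage_account_name` are hypothetical helper names for illustration, not part of the repository:

```python
import random
import re
import string

# Azure storage account names: 3-24 characters, lowercase letters and digits only.
NAME_RE = re.compile(r"^[a-z0-9]{3,24}$")

def is_valid_storage_account_name(name: str) -> bool:
    """Return True if the name satisfies Azure's storage account naming rule."""
    return NAME_RE.fullmatch(name) is not None

def random_storage_account_name(prefix: str = "cnipsa", length: int = 16) -> str:
    """Generate a random candidate name, e.g. for TF_STATE_STORAGE_ACCOUNT."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits,
                                    k=length - len(prefix)))
    return prefix + suffix

print(is_valid_storage_account_name("cnipsatrungtran"))  # True
print(is_valid_storage_account_name("Cnip-SA"))          # False (uppercase/hyphen)
```

Remember the name must also be globally unique across Azure, which the regex alone cannot guarantee.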
- Go to `Actions`
- Select the `Terraform (manual)` workflow
- Choose `Run workflow` to open the modal
- For `Run terraform plan only, or plan+apply`, choose `apply`
- Enter the `apply` confirmation
- (One time only) Click on `Bootstrap tfstate backend (resource group/storage account/container) before init`
- Click on the `Run workflow` button
- Wait for the workflow to finish
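The four `TF_STATE_*` secrets correspond to a standard `azurerm` Terraform backend. As a rough sketch of what the bootstrap option provisions and the workflow then initializes against (the repository's actual backend wiring may differ):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "cnip-terraform"                # TF_STATE_RESOURCE_GROUP
    storage_account_name = "<random-storage-account-name>" # TF_STATE_STORAGE_ACCOUNT
    container_name       = "cnip-terraform"                # TF_STATE_CONTAINER
    key                  = "production/tf.state"           # TF_STATE_KEY
  }
}
```

Keeping state in a blob container is what lets repeated workflow runs share one state file instead of each starting from scratch.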
- Go to `Actions`
- Select the `Deploy to Azure` workflow
- Choose `Run workflow` to open the modal
- Enter `<version-number>` (e.g. `1.0.0`)
- Click on the `Run workflow` button
- Wait for the workflow to finish
- In Azure Portal, search for `Public IP addresses`
- Click on the IP named `kubernetes-...`
- Copy the DNS label and paste it into your browser
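A public IP's DNS name label resolves under Azure's regional `cloudapp.azure.com` domain. A small sketch of how the final URL is composed (the label and region below are made-up examples; use the values shown on the `kubernetes-...` public IP):

```python
def public_ip_url(dns_label: str, region: str, scheme: str = "http") -> str:
    """Compose the FQDN Azure assigns to a public IP's DNS name label."""
    return f"{scheme}://{dns_label}.{region}.cloudapp.azure.com"

# Hypothetical label and region for illustration.
print(public_ip_url("cnip-demo", "southeastasia"))
# http://cnip-demo.southeastasia.cloudapp.azure.com
```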