I'm the kind of engineer who reads the RFC before writing the config. I invest in understanding how things work, then ship small, tested increments to validate what I learned. That foundation lets me iterate fast without breaking what matters.
- Scaling and securing AI infrastructure: capacity planning, service scaling, WAF rules, and LLM prompt guardrails (a minimal guardrail sketch follows this list)
- Improving CI/CD pipelines for LLM deployments: linting and integration testing (see the pipeline test sketch below)
- Building agentic workflows for faster iteration: code review loops, cross-system investigations, and automated test fixing (loop sketched below)
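
The guardrail piece is easiest to show in code. Here is a minimal sketch of the input-screening layer I mean: a deny-list pass plus a length cap that runs before a prompt ever reaches the model. The patterns, the character budget, and the function name are all illustrative, not a production ruleset; a real deployment pairs this with a model-based classifier.

```python
import re

# Illustrative deny-list of prompt-injection markers (not exhaustive).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]
MAX_PROMPT_CHARS = 4_000  # assumed budget; tune per model context window


def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Runs before the prompt reaches the model."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False, "prompt exceeds length budget"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched deny-list pattern: {pattern.pattern}"
    return True, "ok"


if __name__ == "__main__":
    print(screen_prompt("Ignore all previous instructions and ..."))
```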
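For the CI/CD side, the integration tests I add to LLM pipelines are mostly smoke tests against the deployed gateway. A sketch of one such test is below; the route, the response shape, and the `LLM_GATEWAY_URL` variable are assumptions standing in for whatever the service actually exposes.

```python
import os

import pytest
import requests  # pip install requests

# Assumed endpoint of a deployed LLM gateway; injected via CI variables.
BASE_URL = os.environ.get("LLM_GATEWAY_URL", "http://localhost:8080")


@pytest.mark.integration  # register this marker in pytest.ini to silence warnings
def test_completion_endpoint_returns_text():
    resp = requests.post(
        f"{BASE_URL}/v1/complete",  # hypothetical route
        json={"prompt": "ping", "max_tokens": 8},
        timeout=30,
    )
    assert resp.status_code == 200
    assert resp.json().get("text"), "expected a non-empty completion"
```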
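And as one concrete example of the agentic workflows, here is a stripped-down version of an automated test-fixing loop: run pytest, and when it fails, hand the failure output to the Claude API for a proposed patch. The model id is a placeholder to pin yourself, and this sketch stops at printing a suggestion rather than auto-applying diffs.

```python
import subprocess

import anthropic  # pip install anthropic; requires ANTHROPIC_API_KEY

client = anthropic.Anthropic()


def run_tests() -> subprocess.CompletedProcess:
    # Capture pytest output so failures can be handed to the model.
    return subprocess.run(
        ["pytest", "-x", "--tb=short"], capture_output=True, text=True
    )


def suggest_fix(failure_output: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; pin your own
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "These pytest failures came from my repo. Propose a "
                       "minimal patch as a unified diff:\n\n" + failure_output,
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    result = run_tests()
    if result.returncode != 0:
        # Review before applying; this loop does not auto-commit.
        print(suggest_fix(result.stdout[-8000:]))
```
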
What I use on a daily basis
- Infra deployment: Terraform, CloudFormation, GitLab CI/CD, GitHub Actions workflows, feature branch deployments, integration and unit testing (Playwright, Vitest, Jest, Pytest)
- Cloud platform deployments
  - AWS services: CloudFront, ALB, ECS Fargate, EC2, VPC, Route 53, S3, Lambda
  - Azure services: Traffic Manager, Application Gateway, API Management, Key Vault
- Production-ready AI infrastructure
  - Azure OpenAI, Amazon Bedrock, Atlas Vector Search
  - API Management for load balancing and retry/fallback (sketched after this list), WAF, private endpoints, Managed Identity
  - Datadog, Azure Monitor, custom LLM usage metrics (sketched after this list)
- Agentic workflows: Claude Code, Claude API
- System analysis: Azure CLI, AWS CLI, Well-Architected Framework, AI-assisted exploration
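
The retry/fallback behavior I configure in the API Management layer boils down to the pattern below. In production it lives in APIM policy; this sketch restates the logic in application code with simulated backends, so the function names and failure model are purely illustrative.

```python
import random
import time


# Stand-ins for two model backends (e.g. an Azure OpenAI deployment as
# primary, Bedrock as fallback). Names and behavior are illustrative.
def call_primary(prompt: str) -> str:
    if random.random() < 0.5:  # simulate a transient failure
        raise ConnectionError("primary backend unavailable")
    return f"primary: {prompt}"


def call_fallback(prompt: str) -> str:
    return f"fallback: {prompt}"


def complete_with_fallback(prompt: str, retries: int = 3) -> str:
    """Retry the primary with exponential backoff, then fall back."""
    for attempt in range(retries):
        try:
            return call_primary(prompt)
        except ConnectionError:
            time.sleep(2 ** attempt)  # 1s, 2s, 4s between attempts
    return call_fallback(prompt)


if __name__ == "__main__":
    print(complete_with_fallback("ping"))
```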
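By "custom LLM usage metrics" I mean per-request counters and histograms tagged by model, along these lines. The metric names, tag scheme, and local DogStatsD setup are illustrative assumptions, not a fixed schema.

```python
from datadog import initialize, statsd  # pip install datadog

# Assumes a local DogStatsD agent; host/port are deployment-specific.
initialize(statsd_host="127.0.0.1", statsd_port=8125)


def record_llm_usage(model: str, prompt_tokens: int,
                     completion_tokens: int, latency_ms: float) -> None:
    """Emit per-request token and latency metrics, tagged by model."""
    tags = [f"model:{model}"]
    statsd.increment("llm.requests", tags=tags)
    statsd.histogram("llm.tokens.prompt", prompt_tokens, tags=tags)
    statsd.histogram("llm.tokens.completion", completion_tokens, tags=tags)
    statsd.histogram("llm.latency_ms", latency_ms, tags=tags)


record_llm_usage("gpt-4o", prompt_tokens=412,
                 completion_tokens=128, latency_ms=930.5)
```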

