# Compute platform
This page covers how WardMitra workloads should run on AWS.
## EKS as the backend platform
Amazon EKS is the recommended control plane for backend services because it gives SPWHI a scalable and repeatable home for APIs, workers, and future AI workloads.
## Baseline node strategy
Use two layers of compute management:
- `aws_eks_node_group` for baseline on-demand capacity
- Karpenter for burst and spot-based scaling
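A minimal sketch of the baseline layer in Terraform. The cluster reference, IAM role, subnet variable, instance type, and sizes are all illustrative placeholders, not values from this document:

```hcl
# Baseline managed node group: small, stable, on-demand capacity.
# Names, sizes, and the instance type here are assumptions for illustration.
resource "aws_eks_node_group" "baseline" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "baseline"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids

  capacity_type  = "ON_DEMAND"
  instance_types = ["m6i.large"]

  scaling_config {
    desired_size = 2
    min_size     = 2
    max_size     = 4
  }
}
```

Keeping this group in Terraform (rather than eksctl) means the baseline capacity lives in the same source of truth as the rest of the infrastructure.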
### Why this model
- it is simpler for junior engineers than managing every node choice manually
- it avoids `eksctl`, which would create a second source of truth outside Terraform
- it keeps a stable minimum capacity while still allowing cost-aware scaling
## Workload placement
| Workload | Recommended location | Notes |
|---|---|---|
| Web frontend | S3 + CloudFront | do not run static frontend on EKS |
| API | EKS baseline nodes | main application path |
| Admin services | EKS baseline nodes | can share cluster, separate namespace/service accounts |
| Async workers | EKS + Karpenter scale-out | good fit for burst traffic |
| AI inference | isolated NodePool | taints/tolerations and separate scaling |
## AI isolation
Do not mix AI-heavy workloads with the core API path. Reserve a separate NodePool for AI if and when it becomes active. That keeps AI-driven complaint processing or model inference from starving the main citizen-facing workflows.
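The isolation described above can be sketched as a Karpenter NodePool with a taint, managed from Terraform so it stays in the same source of truth. The pool name, labels, taint key, and EC2NodeClass reference are hypothetical; AI pods would need a matching toleration (and node selector) to land here:

```hcl
# Hypothetical dedicated NodePool for AI inference (Karpenter v1 API).
# The NoSchedule taint keeps ordinary API/worker pods off these nodes;
# only pods that tolerate "spwhi.example/ai" can schedule onto them.
resource "kubernetes_manifest" "ai_nodepool" {
  manifest = {
    apiVersion = "karpenter.sh/v1"
    kind       = "NodePool"
    metadata = {
      name = "ai-inference" # illustrative name
    }
    spec = {
      template = {
        metadata = {
          labels = { "spwhi.example/pool" = "ai" }
        }
        spec = {
          taints = [{
            key    = "spwhi.example/ai"
            value  = "true"
            effect = "NoSchedule"
          }]
          nodeClassRef = {
            group = "karpenter.k8s.aws"
            kind  = "EC2NodeClass"
            name  = "ai" # assumes a separately defined EC2NodeClass
          }
        }
      }
      limits = { cpu = "64" } # cap spend for the AI pool
    }
  }
}
```

Scaling limits on the pool give a hard ceiling on AI spend independent of the core API capacity.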
## Frontend delivery architecture
The web app should be deployed as a static site:
- build with React/Vite
- upload artifacts to an S3 bucket
- serve through CloudFront
- route the API domain separately through ALB and EKS
This is one of the most important simplifications for SPWHI to adopt early. Running static frontend content on Kubernetes adds operational effort without delivering value.
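The S3 + CloudFront half of this layout can be sketched in Terraform as follows. The bucket name is a placeholder, and a real setup also needs a bucket policy granting CloudFront read access, which is omitted here:

```hcl
# Static frontend: private S3 bucket served through CloudFront.
resource "aws_s3_bucket" "frontend" {
  bucket = "spwhi-wardmitra-frontend" # placeholder name
}

# Origin access control so the bucket never needs to be public.
resource "aws_cloudfront_origin_access_control" "frontend" {
  name                              = "frontend-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "frontend" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    domain_name              = aws_s3_bucket.frontend.bucket_regional_domain_name
    origin_id                = "s3-frontend"
    origin_access_control_id = aws_cloudfront_origin_access_control.frontend.id
  }

  default_cache_behavior {
    target_origin_id       = "s3-frontend"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    cache_policy_id        = "658327ea-f89d-4fab-a63d-7e88639e58f6" # AWS managed CachingOptimized policy
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate {
    cloudfront_default_certificate = true # replace with an ACM cert for a custom domain
  }
}
```

The API domain stays entirely separate: DNS points it at the ALB in front of EKS, so frontend deploys are just an S3 upload plus a CloudFront invalidation.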
## Basic starting posture
For SPWHI's current team, the safest starting compute posture is:
- small but stable baseline node capacity
- no GPU path until it is actually needed
- separate namespaces and service accounts early
- Karpenter introduced only after the baseline cluster is well understood
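The namespace and service-account separation above is cheap to codify from day one. A minimal sketch, with illustrative names:

```hcl
# Separate namespaces for the API and admin services, each with its own
# service account, so IAM/RBAC boundaries exist before they are needed.
resource "kubernetes_namespace" "api" {
  metadata { name = "api" }
}

resource "kubernetes_namespace" "admin" {
  metadata { name = "admin" }
}

resource "kubernetes_service_account" "api" {
  metadata {
    name      = "api"
    namespace = kubernetes_namespace.api.metadata[0].name
  }
}

resource "kubernetes_service_account" "admin" {
  metadata {
    name      = "admin"
    namespace = kubernetes_namespace.admin.metadata[0].name
  }
}
```

Distinct service accounts also make it straightforward to attach per-workload IAM roles (IRSA or EKS Pod Identity) later without refactoring.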