How to Deploy
All environments use the same Helm chart at infrastructure/helm/luckyplans/ with per-environment values files. See Helm Deployment for full design decisions.
Continuous delivery is handled by ArgoCD (pull-based GitOps). See ArgoCD for the operational guide.
Architecture Overview
Traefik Ingress (k3d built-in)
  │          │                │           │
  │ /        │ /graphql       │ /auth/*   │ /uploads/*
  ▼          ▼                ▼           ▼
web:3000   api-gateway:3001 ──────────────┘
             │                │           │
             │ Redis          │ OIDC      │ S3 API
             ▼                ▼           ▼
        service-core       Keycloak    MinIO:9000
             │                │           │
             ▼                ▼           ▼
        Redis:6379        PostgreSQL   /data (PVC)
                              ▲
        prisma-migrate ───────┘ (Helm pre-upgrade Job)

monitoring namespace:
  OTel Collector ← api-gateway, service-core (OTLP)
    ├── Prometheus (metrics)
    ├── Loki (logs via Promtail)
    ├── Tempo (traces)
    └── Grafana (dashboards)
App services run in the luckyplans namespace. Observability services run in the monitoring namespace.
Prerequisites
| Tool | Version | Install |
|---|---|---|
| Docker | Latest | docker.com |
| k3d | Latest | k3d.io |
| kubectl | Latest | kubernetes.io |
| Helm | >= 3.0 | helm.sh |
| cert-manager | v1.17.1 | See Install cert-manager (prod only) |
| kubeseal | Latest | sealed-secrets releases (prod only) |
Install k3d
# Linux/Mac
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# Windows (via Chocolatey)
choco install k3d
Install cert-manager
Required for prod deployment (automatic TLS via Let’s Encrypt). Not needed for local development.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.1/cert-manager.yaml
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-webhook
kubectl -n cert-manager rollout status deploy/cert-manager-cainjector
See TLS Certificates for full documentation.
Environments
| | local | prod |
|---|---|---|
| Cluster | k3d on laptop | k3d on VPS / on-premises |
| CD method | Direct Helm | ArgoCD (auto-sync) |
| Values file | values.yaml | values.yaml + values.prod.yaml |
| Image registry | none (k3d import) | ghcr.io |
| Image tags | latest | sha-<commit> (CI) / semver (manual) |
| Replicas | 1 | 2 |
| TLS | off | on (cert-manager) |
Local Deployment (laptop)
Single command (full deploy)
pnpm deploy:local
This handles everything: cluster creation, image builds (app + observability), k3d import, and helm upgrade --install for both the app and observability charts.
After completion:
- Frontend: http://localhost
- GraphQL Playground: http://localhost/graphql
- Grafana: `kubectl -n monitoring port-forward svc/grafana 3002:3000` → http://localhost:3002
- Prometheus: `kubectl -n monitoring port-forward svc/prometheus 9090:9090` → http://localhost:9090
Targeted deploy (rebuild one or more services)
# Rebuild and redeploy only the web frontend:
./infrastructure/scripts/deploy-local.sh web
# Rebuild multiple services:
./infrastructure/scripts/deploy-local.sh api-gateway web
# Helm upgrade only — for config/secret changes (no image builds):
./infrastructure/scripts/deploy-local.sh --helm-only
# Skip observability stack (faster if you don't need monitoring):
./infrastructure/scripts/deploy-local.sh --no-observability
Targeted deploy builds only the specified service images, imports them into k3d, and does a kubectl rollout restart — much faster than a full deploy.
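For reference, a targeted redeploy of `web` is roughly equivalent to the following sequence (a sketch; defer to the script for the exact build args and ordering):

```shell
# Rough equivalent of: ./infrastructure/scripts/deploy-local.sh web
# (illustrative only — the script is the source of truth)
docker build \
  --build-arg NEXT_PUBLIC_GRAPHQL_URL="/graphql" \
  -t luckyplans/web:latest -f apps/web/Dockerfile .

# Push the fresh image into the k3d cluster's containerd store
k3d image import luckyplans/web:latest -c luckyplans-local

# Restart the Deployment so pods pick up the re-imported :latest image
kubectl -n luckyplans rollout restart deployment/web
kubectl -n luckyplans rollout status deployment/web
```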
Teardown
pnpm deploy:teardown
Status
pnpm deploy:status
Manual step-by-step
# 1. Create the k3d cluster
k3d cluster create luckyplans-local \
--port "80:80@loadbalancer" \
--port "443:443@loadbalancer" \
--agents 1
kubectl config use-context k3d-luckyplans-local
# 2. Build Docker images
docker build \
--build-arg NEXT_PUBLIC_GRAPHQL_URL="/graphql" \
-t luckyplans/web:latest -f apps/web/Dockerfile .
docker build -t luckyplans/api-gateway:latest -f apps/api-gateway/Dockerfile .
docker build -t luckyplans/service-core:latest -f apps/service-core/Dockerfile .
docker build -t luckyplans/prisma-migrate:latest -f packages/prisma/Dockerfile .
# 3. Import images into k3d
docker pull redis:7-alpine
docker pull postgres:17-alpine
k3d image import redis:7-alpine -c luckyplans-local
k3d image import postgres:17-alpine -c luckyplans-local
k3d image import luckyplans/web:latest -c luckyplans-local
k3d image import luckyplans/api-gateway:latest -c luckyplans-local
k3d image import luckyplans/service-core:latest -c luckyplans-local
k3d image import luckyplans/prisma-migrate:latest -c luckyplans-local
# 4. Deploy with Helm
helm upgrade --install luckyplans infrastructure/helm/luckyplans \
--namespace luckyplans \
--create-namespace \
--rollback-on-failure --timeout 3m
CI/CD with ArgoCD (recommended)
Prod deployments are handled by ArgoCD GitOps. See ArgoCD for the full operational guide.
How it works
Push/merge to main → CI → Docker Build & Push → Update Tags → ArgoCD auto-sync → smoke tests
Manual tag deployment
- Go to Actions → Update Image Tags → Run workflow
- Enter the image tag (e.g. `sha-abc1234`)
- Click Run workflow
ArgoCD will auto-sync the new tag.
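If the `argocd` CLI is installed and logged in, the sync can be watched from a terminal. The application name below is an assumption; list the real names first:

```shell
# List ArgoCD applications to find the actual app name
argocd app list

# "luckyplans-prod" is an assumed name — substitute the one from the list
argocd app get luckyplans-prod

# Block until the app reports Healthy (or time out after 5 minutes)
argocd app wait luckyplans-prod --health --timeout 300
```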
Prod Deployment — first-time setup
Prerequisites
- DNS A record: `yourdomain.xyz → <your-server-ip>`
- Ports open on firewall: 22 (SSH), 80 (HTTP/ACME), 443 (HTTPS)
- A GitHub PAT with `read:packages` and repo read access
- `CD_PUSH_TOKEN`: a fine-grained PAT with Contents: read+write scope (required with branch protection)
- `kubeseal` CLI installed locally
Setup steps
# SSH into the prod server
# 1. Create the k3d cluster
k3d cluster create luckyplans-prod \
--port "80:80@loadbalancer" \
--port "443:443@loadbalancer" \
--agents 1
# 2. Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.17.1/cert-manager.yaml
kubectl -n cert-manager rollout status deploy/cert-manager
# 3. Install Sealed Secrets controller
./infrastructure/scripts/install-sealed-secrets.sh
# 4. Install ArgoCD
git clone https://github.com/takeshi-su57/luckyplans.git
cd luckyplans
./infrastructure/scripts/install-argocd.sh --github-token <your-github-pat>
# 5. Generate and seal production secrets
./infrastructure/scripts/seal-secrets.sh
# IMPORTANT: Save the plain-text output! You'll need KEYCLOAK_CLIENT_SECRET
# and KEYCLOAK_ADMIN_PASSWORD for step 7.
# 6. Paste the sealedSecrets block into values.prod.yaml
# Commit and push — ArgoCD will auto-sync and deploy all services
# 7. Sync Keycloak client secret (after Keycloak is running)
# a. Log into Keycloak Admin at https://yourdomain.xyz/admin
# Username: admin | Password: the KEYCLOAK_ADMIN_PASSWORD from step 5
# b. Select realm: luckyplans (top-left dropdown)
# c. Go to: Clients → luckyplans-frontend → Credentials tab
# d. Set the Client Secret to the KEYCLOAK_CLIENT_SECRET from step 5
# (paste the plain-text value the script printed, then click Save)
# e. Both the gateway and Keycloak now use the same secret — auth works.
#
# Alternative: if you prefer to use Keycloak's auto-generated secret:
# a. Copy the secret from Keycloak Admin → Clients → Credentials
# b. Re-seal it: ./seal-secrets.sh --seal-only KEYCLOAK_CLIENT_SECRET=<copied-value>
# c. Update values.prod.yaml with the new sealed value, commit and push
Secrets Management (Sealed Secrets)
Production secrets are managed via Bitnami Sealed Secrets. Encrypted values are committed to git in values.prod.yaml — only the cluster controller can decrypt them. See ADR: Bitnami Sealed Secrets.
How it works
- The `sealed-secrets-controller` runs in the prod cluster (installed once via `install-sealed-secrets.sh`)
- `seal-secrets.sh` generates secrets and encrypts each one with the controller’s public key
- Encrypted values are stored in `values.prod.yaml` under `sealedSecrets.encryptedData`
- The Helm chart renders a `SealedSecret` resource (not a plain `Secret`) in production
- The controller decrypts the `SealedSecret` into a standard `luckyplans-secrets` K8s Secret
Required secrets
| Secret | Purpose |
|---|---|
| `JWT_SECRET` | API Gateway JWT token signing |
| `SESSION_SECRET` | API Gateway session encryption |
| `KEYCLOAK_CLIENT_SECRET` | Keycloak OIDC client authentication |
| `POSTGRES_PASSWORD` | PostgreSQL database (used by PostgreSQL + Keycloak) |
| `KEYCLOAK_ADMIN_PASSWORD` | Keycloak admin console login |
| `MINIO_ACCESS_KEY` | MinIO root user (S3 access key for file uploads) |
| `MINIO_SECRET_KEY` | MinIO root password (S3 secret key for file uploads) |
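`seal-secrets.sh` generates all of these for you. If you ever need to produce a single value by hand, a strong random string can be generated like this (a sketch; the script's exact method may differ):

```shell
# Generate 32 random bytes, base64-encoded (yields a 44-character string)
JWT_SECRET="$(openssl rand -base64 32)"
echo "${#JWT_SECRET}"   # prints 44
```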
Rotating secrets
# Re-generate and re-seal all secrets
./infrastructure/scripts/seal-secrets.sh
# Paste the new encryptedData into values.prod.yaml
# Commit and push — ArgoCD auto-syncs, pods restart with new secrets
After rotating KEYCLOAK_CLIENT_SECRET, update Keycloak Admin Console:
Clients → luckyplans-frontend → Credentials → set the new value.
Important: All secrets in the table above must be present in `values.prod.yaml` under `sealedSecrets.encryptedData`. If any are missing, the corresponding Pod will fail to start because its `secretKeyRef` cannot resolve, and any PVCs it mounts will stay in `WaitForFirstConsumer` indefinitely.
Local dev secrets
Local development uses plain-text dev defaults in values.yaml — no sealed secrets needed. The realm export pre-configures dev-client-secret for Keycloak.
Key backup
The controller’s signing key is backed up during installation to .sealed-secrets-backup/. Store this backup securely offline — if lost, all sealed secrets must be re-created.
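A backup can also be taken manually at any time. Sealed Secrets stores its keys as labelled Secrets in the controller's namespace; `kube-system` and the Deployment name below are assumptions, so match them to your install:

```shell
# Export the controller's signing keys (kube-system assumed here)
kubectl -n kube-system get secret \
  -l sealedsecrets.bitnami.com/sealed-secrets-key \
  -o yaml > sealed-secrets-key-backup.yaml

# To restore on a rebuilt cluster: apply the backup, then restart the
# controller so it picks the keys up:
# kubectl apply -f sealed-secrets-key-backup.yaml
# kubectl -n kube-system rollout restart deployment/sealed-secrets-controller
```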
Scaling
# values.prod.yaml
apiGateway:
replicas: 3
web:
replicas: 2
Commit and push — ArgoCD auto-syncs.
Warning: Do not use `kubectl scale` on ArgoCD-managed clusters; ArgoCD self-heal will revert the change immediately.
Observability (K8s)
The observability stack is deployed as a separate Helm chart (infrastructure/helm/observability/) in the monitoring namespace. In production, it’s managed by its own ArgoCD Application (infrastructure/argocd/apps/observability-prod.yaml).
Components: OTel Collector, Prometheus, Grafana, Loki, Tempo, Promtail, Redis Exporter.
# Check observability pods:
kubectl -n monitoring get pods
# Port-forward Grafana:
kubectl -n monitoring port-forward svc/grafana 3002:3000
# Port-forward Prometheus:
kubectl -n monitoring port-forward svc/prometheus 9090:9090
NestJS services push traces and metrics via OTLP to the OTel Collector. Promtail ships pod logs to Loki. Grafana provides unified dashboards with trace↔log correlation.
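The collector endpoint is usually wired into the services via the standard OpenTelemetry SDK environment variables. The values below are illustrative assumptions; check the chart's templates for the actual configuration:

```shell
# Standard OTel SDK environment variables (illustrative values)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.monitoring.svc.cluster.local:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_SERVICE_NAME="api-gateway"
```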
Viewing Logs
kubectl -n luckyplans logs -f deployment/api-gateway
kubectl -n luckyplans logs -f deployment/web
kubectl -n luckyplans logs <pod-name> --previous # after crash
# Observability stack logs:
kubectl -n monitoring logs -f deployment/otel-collector
kubectl -n monitoring logs -f deployment/grafana
Rollback
ArgoCD-managed (prod)
Git revert (recommended):
git log --oneline -5
git revert <commit-sha>
git push origin main
Local (no ArgoCD)
helm -n luckyplans rollback luckyplans
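With no revision argument, `helm rollback` targets the previous release. To inspect the history and pick a specific revision instead:

```shell
# List past releases with their revision numbers and statuses
helm -n luckyplans history luckyplans

# Roll back to a specific revision rather than just the previous one
helm -n luckyplans rollback luckyplans <revision>
```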