How to Develop Locally
Prerequisites
| Tool | Version | Install |
|---|---|---|
| Node.js | >= 20.0.0 | nodejs.org |
| pnpm | >= 9.0.0 | npm install -g pnpm@9.15.4 |
| Docker Compose | Latest | docker.com |
Quick Start
# First time — installs deps, sets up .env, builds shared packages:
pnpm setup
# Start infrastructure (Redis, Keycloak, observability stack):
docker compose up -d
# Start all services with hot reload:
pnpm dev
Open http://localhost:3000 — Next.js rewrites proxy /auth/* and /graphql to the gateway.
Step-by-Step Setup
1. Install dependencies
pnpm install
2. Set up environment variables
cp .env.example .env
Default values work for local development — including MinIO credentials (`MINIO_ACCESS_KEY=minioadmin`, `MINIO_SECRET_KEY=minioadmin`). The only value you must fill in is `SESSION_SECRET`:
# Generate a session secret:
openssl rand -base64 32
Paste the output into SESSION_SECRET= in your .env file.
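If you don't have `openssl` available, the same kind of secret can be generated with Node's built-in `crypto` module (a sketch; the variable name is ours):

```typescript
import { randomBytes } from 'node:crypto';

// Equivalent to `openssl rand -base64 32`: 32 random bytes, base64-encoded.
const sessionSecret = randomBytes(32).toString('base64');
console.log(sessionSecret); // paste into SESSION_SECRET= in .env
```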
3. Start infrastructure
Docker Compose provides the infrastructure and observability services needed for local dev:
docker compose up -d
| Container | Port | Purpose |
|---|---|---|
| redis | 6379 | Inter-service communication + sessions |
| postgresql-keycloak | 5433 | Keycloak database |
| postgresql-app | 5434 | Application database (Prisma / service-core) |
| keycloak | 8080 | Identity provider |
| minio | 9000 (API), 9001 (Console) | Object storage for file uploads |
| otel-collector | 4317 | OpenTelemetry Collector (gRPC), also 4318 (HTTP), 8889 (Prometheus) |
| prometheus | 9090 | Metrics storage |
| grafana | 3002 | Dashboards (admin/admin) |
| loki | 3100 | Log aggregation |
| tempo | 3200 | Distributed tracing |
Verify everything is healthy:
docker compose ps
Wait for Keycloak to show healthy status (takes ~30 seconds on first start).
To stop infrastructure:
docker compose down # stop containers (PostgreSQL data preserved)
docker compose down -v # stop and remove volumes (reset all data)
4. Build shared packages and run database migrations
pnpm --filter @luckyplans/shared build
pnpm --filter @luckyplans/prisma build
pnpm --filter @luckyplans/prisma db:migrate:deploy
Note: The `postgresql-app` container auto-creates the `luckyplans` database via its `POSTGRES_DB` env var on first start. If you run into connection issues, try `docker compose down -v && docker compose up -d` to reset all volumes.
5. Start all services
pnpm dev
Turborepo starts everything in parallel with hot reload:
| Service | Internal Port | Access URL | Description |
|---|---|---|---|
| Frontend | 3000 | http://localhost:3000 | Next.js app (rewrites proxy to gateway) |
| API Gateway | 3001 | http://localhost:3000/graphql | GraphQL (via Next.js rewrite) |
| service-core | — | — (Redis transport) | Core domain microservice |
Note: The browser talks to `localhost:3000`. Next.js rewrites in `next.config.ts` proxy `/auth/*` and `/graphql` to the API Gateway (port 3001). This same-domain routing is what allows the `session_id` cookie to be shared between the frontend and the gateway.
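The rewrites described above look roughly like this (a plausible sketch, not the project's actual config — the real version lives in `next.config.ts`, and `GATEWAY_URL` is our name):

```typescript
// Sketch of same-domain proxying: /auth/* and /graphql are forwarded to the
// gateway so the session_id cookie stays on localhost:3000.
const GATEWAY_URL = process.env.GATEWAY_URL ?? 'http://localhost:3001';

function rewrites() {
  return [
    { source: '/auth/:path*', destination: `${GATEWAY_URL}/auth/:path*` },
    { source: '/graphql', destination: `${GATEWAY_URL}/graphql` },
  ];
}
```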
6. Verify everything works
- Open http://localhost:3000 — you should see the landing page
- Navigate to any protected route (e.g., `/dashboard`) — you should be redirected to `/login`
- Register a new account or log in with the test user:
  - Email: `testuser@luckyplans.xyz`
  - Password: `password`
- After login, you should be redirected back to the app with a valid session
You can also verify the GraphQL endpoint directly:
# Health check (no auth required):
curl http://localhost:3001/graphql -H 'Content-Type: application/json' \
-d '{"query":"{ health }"}'
7. MinIO console (optional)
Access the MinIO Console at http://localhost:9001:
- Username: `minioadmin`
- Password: `minioadmin`
MinIO provides S3-compatible object storage for file uploads (avatars, project images). The luckyplans-uploads bucket is auto-created on first upload.
8. Keycloak admin console (optional)
Access the Keycloak admin console at http://localhost:8080:
- Admin user: `admin`
- Admin password: `admin`
- Realm: `luckyplans`
The luckyplans realm is auto-imported from infrastructure/keycloak/realm-export.json on first start. It includes the luckyplans-frontend client (ROPC enabled, service account with manage-users role) and a test user. Keycloak data is persisted in PostgreSQL — restarts preserve users and realm configuration.
How Authentication Works Locally
The auth flow uses gateway-managed sessions — no tokens are exposed to the browser:
- Browser navigates to a protected route
- Next.js middleware checks for a `session_id` cookie — if absent, redirects to `/login`
- User enters credentials on the custom login page (`/login`)
- Form submits `POST /auth/login` with `{ email, password }` (proxied to the gateway via a Next.js rewrite)
- Gateway authenticates with Keycloak using the ROPC grant, creates a Redis session, and sets an HttpOnly `session_id` cookie
- Frontend redirects to the original protected route
- All subsequent GraphQL requests include the cookie automatically (`credentials: 'include'`)
- The gateway's `SessionGuard` validates the session and injects the user into the request context
Registration works similarly: the custom /register page submits to POST /auth/register, which creates the user via Keycloak Admin API, then auto-logs in.
To log out, click the “Log out” button — it sends POST /auth/logout, which clears the session and cookie.
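The login submit above can be sketched as a pure helper that builds the request (the endpoint and `{ email, password }` body come from the flow; the helper name is ours):

```typescript
// Builds the fetch options for POST /auth/login. credentials: 'include'
// ensures the browser stores and sends the HttpOnly session_id cookie.
type Credentials = { email: string; password: string };

function loginRequest(creds: Credentials) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include' as const,
    body: JSON.stringify(creds),
  };
}

// Usage on the login page (sketch):
//   const res = await fetch('/auth/login', loginRequest({ email, password }));
//   if (res.ok) window.location.assign(returnTo);
```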
Logging
All NestJS services use Pino (nestjs-pino) for structured JSON logging with automatic OpenTelemetry trace correlation. Every log line includes traceId and spanId so you can jump from a log to its trace in Grafana.
How to log in services
Use the NestJS Logger from @nestjs/common — it routes through Pino automatically:
import { Injectable, Logger } from '@nestjs/common';

@Injectable()
export class MyService {
  private readonly logger = new Logger(MyService.name);

  doSomething() {
    this.logger.log('Processing request');          // info level
    this.logger.warn('Cache miss, falling back');   // warn level
    this.logger.error('Failed to connect', error);  // error level
    this.logger.debug('Detailed debug info');       // debug level (dev only)
  }
}
What NOT to do
// ❌ Never use console — it bypasses structured logging and trace correlation
console.log('something');
console.warn('something');
console.error('something');
// ✅ Always use NestJS Logger
this.logger.log('something');
this.logger.warn('something');
this.logger.error('something', error);
Log output
In development (LOG_LEVEL=debug), Pino pretty-prints with colors:
[10:23:45] INFO (api-gateway): Processing request
traceId: "abc123..."
spanId: "def456..."
req: {"method":"POST","url":"/graphql"}
In production (LOG_LEVEL=info), raw JSON goes to stdout for Loki to collect:
{"level":"info","time":1710000000,"msg":"Processing request","traceId":"abc123...","spanId":"def456...","context":"MyService"}
Trace context in logs
The traceId and spanId are injected automatically by the Pino mixin() configured in AppModule. You don’t need to do anything — just use Logger and trace IDs appear in every log line within an active request.
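A Pino `mixin()` of the kind described above looks roughly like this (a sketch under assumptions, written as a factory so the span-context source is pluggable — in the real `AppModule` the getter would wrap `trace.getActiveSpan()` from `@opentelemetry/api`):

```typescript
// Fields returned by the mixin are merged into every log line by Pino.
type SpanContext = { traceId: string; spanId: string };

function makeTraceMixin(getSpanContext: () => SpanContext | undefined) {
  return (): Partial<SpanContext> => {
    const ctx = getSpanContext();
    // Outside an active request/span there is nothing to inject.
    return ctx ? { traceId: ctx.traceId, spanId: ctx.spanId } : {};
  };
}
```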
Working on Specific Services
Run a single service
pnpm --filter @luckyplans/web dev
pnpm --filter @luckyplans/api-gateway dev
pnpm --filter @luckyplans/service-core dev
Infrastructure (Redis, Keycloak) must be running via `docker compose up -d` regardless of which app services you start.
Build a single service
pnpm --filter @luckyplans/web build
pnpm --filter @luckyplans/api-gateway build
Working with GraphQL
Regenerate frontend types
After modifying a `graphql()` call in any hook file or updating the schema:
pnpm --filter @luckyplans/web codegen
This reads apps/api-gateway/schema.graphql (single source of truth) and regenerates typed helpers in apps/web/src/generated/.
Schema location
The frontend codegen reads the schema directly from apps/api-gateway/schema.graphql — there is no duplicate schema file in the web app. The gateway auto-generates this file from code-first decorators on startup (dev mode). When the gateway schema changes:
- Start the gateway (or run `pnpm dev`) so it regenerates `apps/api-gateway/schema.graphql`
- Run codegen: `pnpm --filter @luckyplans/web codegen`
- Commit the updated schema and generated types
Working with Prisma Migrations
Prisma schema lives at packages/prisma/prisma/schema.prisma. Migrations are auto-generated SQL files committed to git.
Creating a migration
pnpm --filter @luckyplans/prisma db:migrate:dev -- --name describe_the_change
This generates a SQL file at packages/prisma/prisma/migrations/<timestamp>_<name>/migration.sql, applies it locally, and regenerates the Prisma Client.
Migration safety rules
Prisma generates DDL (schema changes) but never backfills data. Adding a non-nullable column without a default to a table with existing rows will fail.
| Scenario | Safe? | What to do |
|---|---|---|
| New table with required columns | Yes | No existing rows — safe |
| New nullable column (`String?`) on existing table | Yes | Existing rows get `NULL` |
| New column with `@default()` on existing table | Yes | Existing rows get the default |
| New required column (no `?`, no `@default`) on existing table | No | Migration will fail if the table has rows |
When you need a required column on an existing table
Use the two-step migration pattern:
- Migration 1: Add column as nullable, deploy
- Backfill: Update existing rows via script or SQL (`UPDATE ... SET column = value WHERE column IS NULL`)
- Migration 2: Alter the column to required
Editing generated migration SQL
You can edit the generated migration.sql before applying it. This is useful for:
- Adding `DEFAULT` values Prisma didn't generate
- Adding `UPDATE` statements to backfill data before a `SET NOT NULL`
-- Example: add column with temp default, backfill, keep as required
ALTER TABLE "profiles" ADD COLUMN "username" TEXT NOT NULL DEFAULT '';
UPDATE "profiles" SET "username" = CONCAT('user_', LEFT("id", 8)) WHERE "username" = '';
The edited SQL is committed to git — it is the migration. Never edit SQL after it’s been applied to any environment.
Production migrations
Migrations run automatically via a Helm pre-upgrade Job (prisma migrate deploy). This means:
- Migration SQL must be correct before merging — no manual intervention during deploy
- Test migrations locally against a database with realistic data
- If a migration fails, the Helm upgrade fails and ArgoCD will not roll forward
See .claude/rules/architecture.md for the full migration safety rules.
Working with Shared Packages
Shared packages are in packages/ and used as workspace dependencies (workspace:* in package.json).
@luckyplans/shared — Types and utilities
Exports:
- Types: `User`, `ServiceResponse`, `PaginatedResponse`, `UserProfileData`, `ProjectData`, `SkillData`, `ExperienceData`, `PublicProfileData`, etc.
- Enums: `CoreMessagePattern` (Redis message patterns), `Proficiency` (skill levels)
- Utilities: `generateId()`, `getRedisConfig()`, `getEnvVar()`
- Telemetry: `bootstrapTelemetry()`, `injectTraceContext()`, `TraceContextExtractor`
import { CoreMessagePattern, ServiceResponse, getRedisConfig } from '@luckyplans/shared';
After making changes to shared packages, Turborepo picks them up automatically in dev mode.
@luckyplans/config — ESLint and TypeScript configs
Shared presets that apps extend:
- `tsconfig.nextjs.json` — for Next.js apps
- `tsconfig.nestjs.json` — for NestJS services
- `tsconfig.base.json` — for shared packages
- `eslint-preset.js` — base ESLint rules
Adding a New Microservice
1. Create the service directory
mkdir -p apps/service-<name>/src
2. Copy boilerplate from an existing service
Use service-core as a template.
3. Update package.json
Change the name to @luckyplans/service-<name>.
4. Define message patterns
Add patterns in packages/shared/src/types/index.ts:
export enum NewServiceMessagePattern {
  GET_SOMETHING = 'new-service.getSomething',
  CREATE_SOMETHING = 'new-service.createSomething',
}
5. Create the service files
- `src/instrument.ts` — OTel SDK bootstrap (must be imported first in `main.ts`)
- `src/main.ts` — Bootstrap with Redis transport, Pino logger, SIGTERM shutdown
- `src/app.module.ts` — Module definition with `LoggerModule` and `TraceContextExtractor`
- `src/<name>.controller.ts` — `@MessagePattern` handlers
- `src/<name>.service.ts` — Business logic (use `Logger` from `@nestjs/common`, never `console`)
Follow the patterns in apps/service-core/src/ — it has all of these set up:
// src/instrument.ts — MUST be the first import in main.ts
import { bootstrapTelemetry } from '@luckyplans/shared';
export const otelSdk = bootstrapTelemetry({ serviceName: 'service-<name>' });
// src/main.ts — import instrument.ts BEFORE anything else
import { otelSdk } from './instrument';
import { NestFactory } from '@nestjs/core';
// ... rest of bootstrap
process.on('SIGTERM', async () => { await otelSdk.shutdown(); });
// src/app.module.ts — include LoggerModule and TraceContextExtractor
import { LoggerModule } from 'nestjs-pino';
import { APP_INTERCEPTOR } from '@nestjs/core';
import { TraceContextExtractor } from '@luckyplans/shared';
// ... add LoggerModule.forRoot({...}) to imports
// ... add { provide: APP_INTERCEPTOR, useClass: TraceContextExtractor } to providers
6. Register in the API Gateway
Create a new module in apps/api-gateway/src/<name>/:
- `<name>.module.ts` — Register `ClientsModule` with Redis transport
- `<name>.resolver.ts` — GraphQL resolvers (use `injectTraceContext()` when calling `ClientProxy.send()`)
Import the module in apps/api-gateway/src/app.module.ts.
// In resolvers — wrap payloads with injectTraceContext() for end-to-end tracing
import { injectTraceContext } from '@luckyplans/shared';
return firstValueFrom(
  this.client.send(Pattern.DO_SOMETHING, injectTraceContext({ id })),
);
7. Add a Helm template
Add a Deployment template at infrastructure/helm/luckyplans/templates/service-<name>/deployment.yaml.
8. Install dependencies and test
pnpm install
pnpm build
pnpm dev
All Commands
| Command | Description |
|---|---|
| `pnpm setup` | First-time project setup (deps, .env, build shared pkgs) |
| `pnpm dev` | Start all services with hot reload |
| `pnpm build` | Build all packages and apps |
| `pnpm lint` | Lint all packages |
| `pnpm type-check` | Type-check all packages |
| `pnpm format` | Format all files with Prettier |
| `pnpm format:check` | Check formatting without modifying |
| `pnpm clean` | Remove all build artifacts |
| `pnpm deploy:local` | Full deploy to local k3d cluster (build all + Helm) |
| `deploy-local.sh web` | Rebuild and redeploy a single service |
| `deploy-local.sh --helm-only` | Helm upgrade only (config/secret changes, no builds) |
| `pnpm deploy:teardown` | Destroy the local k3d cluster |
| `pnpm deploy:status` | Show cluster, Helm release, and pod status |
Observability (Local Dev)
The local dev stack includes a full observability pipeline. NestJS services automatically emit traces, metrics, and structured logs via OpenTelemetry.
Grafana
Open http://localhost:3002 (auto-login as admin). Pre-configured datasources:
- Prometheus — metrics from OTel Collector, scraped every 15s
- Loki — structured JSON logs from services (via OTel Collector)
- Tempo — distributed traces with service map
Pre-built dashboards are in the LuckyPlans folder:
- RED Metrics — request rate, error rate, latency (p50/p95/p99) per service
- Infrastructure — OTel Collector throughput, dropped telemetry
Viewing traces
- Make a GraphQL request (e.g., visit http://localhost:3000/dashboard )
- Open Grafana → Explore → select Tempo datasource
- Search by service name (`api-gateway` or `service-core`)
- Click a trace to see the full span tree: HTTP request → GraphQL resolver → Redis pub/sub → service-core handler
Viewing logs
- Open Grafana → Explore → select Loki datasource
- Query: `{service_name="api-gateway"}` or use the Log Browser
- Each log line includes `traceId` and `spanId` for correlation — click the TraceID link to jump to the associated trace in Tempo
How instrumentation works
- OTel SDK is bootstrapped in `instrument.ts` (first import in `main.ts`) — auto-instruments HTTP, Express, ioredis, GraphQL
- Pino (`nestjs-pino`) outputs structured JSON logs with trace context (`traceId`, `spanId`)
- Trace propagation across Redis: `injectTraceContext()` on the gateway side, `TraceContextExtractor` interceptor on the service side
- Config: `OTEL_EXPORTER_OTLP_ENDPOINT` (default `http://localhost:4317`), `LOG_LEVEL` (default `debug` in dev)
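Trace propagation over Redis boils down to the pattern below (a plausible sketch, not the real implementation — the actual helpers are `injectTraceContext()` and `TraceContextExtractor` in `@luckyplans/shared`, and the field names here are illustrative):

```typescript
// Sender attaches the active trace context to the payload; the receiver
// restores it before handling the message, so spans link across processes.
type TraceContext = { traceId: string; spanId: string };

function attachTrace<T extends object>(payload: T, ctx: TraceContext): T & { _trace: TraceContext } {
  return { ...payload, _trace: ctx };
}

function readTrace(msg: { _trace?: TraceContext }): TraceContext | undefined {
  return msg._trace;
}
```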
Troubleshooting
Infrastructure not starting
docker compose ps # check container status
docker compose logs keycloak # check Keycloak logs
Redis connection refused
Make sure infrastructure is running:
docker compose up -d
docker compose exec redis redis-cli ping
# Should return: PONG
Keycloak not ready
Keycloak takes ~30 seconds to start and import the realm. Check status:
docker compose ps keycloak # wait for "healthy"
curl -sf http://localhost:8080/realms/luckyplans # should return realm JSON
Login redirect loop
- Ensure the gateway is running (`pnpm --filter @luckyplans/api-gateway dev`)
- Check that `SESSION_SECRET` is set in `.env`
- Check browser DevTools > Application > Cookies for `session_id`
Port already in use
# Linux/Mac
lsof -i :3000
# Windows
netstat -ano | findstr :3000
Turborepo cache issues
pnpm clean
pnpm install
pnpm build
Shared package changes not reflected
pnpm --filter @luckyplans/shared build
pnpm dev
No traces/metrics in Grafana
- Verify OTel Collector is running: `docker compose ps otel-collector`
- Check collector logs: `docker compose logs otel-collector`
- Verify `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317` is in your `.env`
- Check Prometheus targets: http://localhost:9090/targets — `otel-collector` should show as UP