How to Develop Locally

Prerequisites

| Tool | Version | Install |
| --- | --- | --- |
| Node.js | >= 20.0.0 | nodejs.org |
| pnpm | >= 9.0.0 | npm install -g pnpm@9.15.4 |
| Docker Compose | Latest | docker.com |

Quick Start

# First time — installs deps, sets up .env, builds shared packages:
pnpm setup
 
# Start infrastructure (Redis, Keycloak, observability stack):
docker compose up -d
 
# Start all services with hot reload:
pnpm dev

Open http://localhost:3000  — Next.js rewrites proxy /auth/* and /graphql to the gateway.

Step-by-Step Setup

1. Install dependencies

pnpm install

2. Set up environment variables

cp .env.example .env

Default values work for local development — including MinIO credentials (MINIO_ACCESS_KEY=minioadmin, MINIO_SECRET_KEY=minioadmin). The only value you must fill in is SESSION_SECRET:

# Generate a session secret:
openssl rand -base64 32

Paste the output into SESSION_SECRET= in your .env file.
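
If you prefer a one-liner, the same two steps can be scripted. This sketch assumes GNU sed (on macOS use sed -i '') and that .env already contains a SESSION_SECRET= line:

```shell
# Generate a 32-byte secret and write it into .env in place.
# Assumes GNU sed and an existing SESSION_SECRET= line in .env.
SECRET="$(openssl rand -base64 32)"
sed -i "s|^SESSION_SECRET=.*|SESSION_SECRET=${SECRET}|" .env
grep '^SESSION_SECRET=' .env   # verify it was written
```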

3. Start infrastructure

Docker Compose provides the infrastructure and observability services needed for local dev:

docker compose up -d

| Container | Port | Purpose |
| --- | --- | --- |
| redis | 6379 | Inter-service communication + sessions |
| postgresql-keycloak | 5433 | Keycloak database |
| postgresql-app | 5434 | Application database (Prisma / service-core) |
| keycloak | 8080 | Identity provider |
| minio | 9000 (API), 9001 (Console) | Object storage for file uploads |
| otel-collector | 4317 (gRPC), 4318 (HTTP), 8889 (Prometheus) | OpenTelemetry Collector |
| prometheus | 9090 | Metrics storage |
| grafana | 3002 | Dashboards (admin/admin) |
| loki | 3100 | Log aggregation |
| tempo | 3200 | Distributed tracing |

Verify everything is healthy:

docker compose ps

Wait for Keycloak to show healthy status (takes ~30 seconds on first start).
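
If you script your setup, a small polling loop avoids re-running docker compose ps by hand. This is a convenience sketch, not a project script:

```shell
# Poll until the keycloak container reports healthy (checks the status
# column of `docker compose ps`); gives up after about 60 seconds.
for i in $(seq 1 30); do
  docker compose ps keycloak | grep -q healthy && break
  sleep 2
done
docker compose ps keycloak
```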

To stop infrastructure:

docker compose down        # stop containers (PostgreSQL data preserved)
docker compose down -v     # stop and remove volumes (reset all data)

4. Build shared packages and run database migrations

pnpm --filter @luckyplans/shared build
pnpm --filter @luckyplans/prisma build
pnpm --filter @luckyplans/prisma db:migrate:deploy

Note: The postgresql-app container auto-creates the luckyplans database via its POSTGRES_DB env var on first start. If you run into connection issues, try docker compose down -v && docker compose up -d to reset all volumes.
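
To confirm the migrations actually applied, you can query Prisma's internal bookkeeping table. The postgres username below is an assumption based on common local-dev defaults; adjust it to your compose file and .env:

```shell
# List applied migrations from Prisma's _prisma_migrations table
# (username is assumed; the database name comes from POSTGRES_DB).
docker compose exec postgresql-app psql -U postgres -d luckyplans \
  -c 'SELECT migration_name, finished_at FROM "_prisma_migrations" ORDER BY finished_at;'
```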

5. Start all services

pnpm dev

Turborepo starts everything in parallel with hot reload:

| Service | Internal Port | Access URL | Description |
| --- | --- | --- | --- |
| Frontend | 3000 | http://localhost:3000 | Next.js app (rewrites proxy to gateway) |
| API Gateway | 3001 | http://localhost:3000/graphql | GraphQL (via Next.js rewrite) |
| service-core | — (Redis transport) | — | Core domain microservice |

Note: The browser talks to localhost:3000. Next.js rewrites in next.config.ts proxy /auth/* and /graphql to the API Gateway (port 3001). This same-domain routing is what allows the session_id cookie to be shared between the frontend and the gateway.

6. Verify everything works

  1. Open http://localhost:3000  — you should see the landing page
  2. Navigate to any protected route (e.g., /dashboard) — you should be redirected to /login
  3. Register a new account or log in with the test user:
    • Email: testuser@luckyplans.xyz
    • Password: password
  4. After login, you should be redirected back to the app with a valid session

You can also verify the GraphQL endpoint directly:

# Health check (no auth required):
curl http://localhost:3001/graphql -H 'Content-Type: application/json' \
  -d '{"query":"{ health }"}'

7. MinIO console (optional)

Access the MinIO Console at http://localhost:9001 :

  • Username: minioadmin
  • Password: minioadmin

MinIO provides S3-compatible object storage for file uploads (avatars, project images). The luckyplans-uploads bucket is auto-created on first upload.

8. Keycloak admin console (optional)

Access the Keycloak admin console at http://localhost:8080 :

  • Admin user: admin
  • Admin password: admin
  • Realm: luckyplans

The luckyplans realm is auto-imported from infrastructure/keycloak/realm-export.json on first start. It includes the luckyplans-frontend client (ROPC enabled, service account with manage-users role) and a test user. Keycloak data is persisted in PostgreSQL — restarts preserve users and realm configuration.

How Authentication Works Locally

The auth flow uses gateway-managed sessions — no tokens are exposed to the browser:

  1. Browser navigates to a protected route
  2. Next.js middleware checks for a session_id cookie — if absent, redirects to /login
  3. User enters credentials on the custom login page (/login)
  4. Form submits POST /auth/login with { email, password } (proxied to gateway via Next.js rewrite)
  5. Gateway authenticates with Keycloak using ROPC grant, creates a Redis session, sets an HttpOnly session_id cookie
  6. Frontend redirects to the original protected route
  7. All subsequent GraphQL requests include the cookie automatically (credentials: 'include')
  8. The gateway’s SessionGuard validates the session and injects the user into the request context

Registration works similarly: the custom /register page submits to POST /auth/register, which creates the user via Keycloak Admin API, then auto-logs in.

To log out, click the “Log out” button — it sends POST /auth/logout, which clears the session and cookie.
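
The flow above can also be exercised from the CLI by talking to the gateway directly on port 3001 (bypassing the Next.js rewrite). Response bodies are omitted here; this only demonstrates the cookie mechanics:

```shell
# Log in as the test user; -c stores the HttpOnly session_id cookie.
curl -s -c /tmp/lp-cookies.txt http://localhost:3001/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"email":"testuser@luckyplans.xyz","password":"password"}'

# Authenticated GraphQL request; -b replays the stored cookie.
curl -s -b /tmp/lp-cookies.txt http://localhost:3001/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ health }"}'

# Log out, destroying the Redis session and clearing the cookie.
curl -s -b /tmp/lp-cookies.txt -X POST http://localhost:3001/auth/logout
```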

Logging

All NestJS services use Pino (nestjs-pino) for structured JSON logging with automatic OpenTelemetry trace correlation. Every log line includes traceId and spanId so you can jump from a log to its trace in Grafana.

How to log in services

Use the NestJS Logger from @nestjs/common — it routes through Pino automatically:

import { Injectable, Logger } from '@nestjs/common';
 
@Injectable()
export class MyService {
  private readonly logger = new Logger(MyService.name);
 
  doSomething() {
    this.logger.log('Processing request');          // info level
    this.logger.warn('Cache miss, falling back');   // warn level
    this.logger.error('Failed to connect', error);  // error level
    this.logger.debug('Detailed debug info');        // debug level (dev only)
  }
}

What NOT to do

// ❌ Never use console — it bypasses structured logging and trace correlation
console.log('something');
console.warn('something');
console.error('something');
 
// ✅ Always use NestJS Logger
this.logger.log('something');
this.logger.warn('something');
this.logger.error('something', error);
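
To catch stray console.* calls before review, a rough grep over service code works (no substitute for an ESLint no-console rule, which this sketch does not assume exists):

```shell
# Find console.log/warn/error in service source; prints matches,
# or "clean" if there are none.
grep -rn --include='*.ts' -E 'console\.(log|warn|error)' apps/ || echo 'clean'
```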

Log output

In development (LOG_LEVEL=debug), Pino pretty-prints with colors:

[10:23:45] INFO (api-gateway): Processing request
    traceId: "abc123..."
    spanId: "def456..."
    req: {"method":"POST","url":"/graphql"}

In production (LOG_LEVEL=info), raw JSON goes to stdout for Loki to collect:

{"level":"info","time":1710000000,"msg":"Processing request","traceId":"abc123...","spanId":"def456...","context":"MyService"}
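
Because production logs are raw JSON, standard CLI tools apply. For example, pulling the traceId out of a captured log line with jq (assuming jq is installed):

```shell
# Extract the traceId from a raw Pino log line, handy for jumping from
# stdout logs to the matching trace in Tempo.
echo '{"level":"info","msg":"Processing request","traceId":"abc123","spanId":"def456"}' \
  | jq -r '.traceId'
# → abc123
```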

Trace context in logs

The traceId and spanId are injected automatically by the Pino mixin() configured in AppModule. You don’t need to do anything — just use Logger and trace IDs appear in every log line within an active request.

Working on Specific Services

Run a single service

pnpm --filter @luckyplans/web dev
pnpm --filter @luckyplans/api-gateway dev
pnpm --filter @luckyplans/service-core dev

Infrastructure (Redis, Keycloak) must be running via docker compose up -d regardless of which app services you start.

Build a single service

pnpm --filter @luckyplans/web build
pnpm --filter @luckyplans/api-gateway build

Working with GraphQL

Regenerate frontend types

After modifying a graphql() call in any hook file or updating the schema:

pnpm --filter @luckyplans/web codegen

This reads apps/api-gateway/schema.graphql (single source of truth) and regenerates typed helpers in apps/web/src/generated/.

Schema location

The frontend codegen reads the schema directly from apps/api-gateway/schema.graphql — there is no duplicate schema file in the web app. The gateway auto-generates this file from code-first decorators on startup (dev mode). When the gateway schema changes:

  1. Start the gateway (or run pnpm dev) so it regenerates apps/api-gateway/schema.graphql
  2. Run codegen: pnpm --filter @luckyplans/web codegen
  3. Commit the updated schema and generated types

Working with Prisma Migrations

Prisma schema lives at packages/prisma/prisma/schema.prisma. Migrations are auto-generated SQL files committed to git.

Creating a migration

pnpm --filter @luckyplans/prisma db:migrate:dev -- --name describe_the_change

This generates a SQL file at packages/prisma/prisma/migrations/<timestamp>_<name>/migration.sql, applies it locally, and regenerates the Prisma Client.

Migration safety rules

Prisma generates DDL (schema changes) but never backfills data. Adding a non-nullable column without a default to a table with existing rows will fail.

| Scenario | Safe? | What to do |
| --- | --- | --- |
| New table with required columns | Yes | No existing rows — safe |
| New nullable column (String?) on existing table | Yes | Existing rows get NULL |
| New column with @default() on existing table | Yes | Existing rows get the default |
| New required column (no ?, no @default) on existing table | No | Migration will fail if table has rows |

When you need a required column on an existing table

Use the two-step migration pattern:

  1. Migration 1: Add column as nullable, deploy
  2. Backfill: Update existing rows via script or SQL (UPDATE ... SET column = value WHERE column IS NULL)
  3. Migration 2: Alter column to required

Editing generated migration SQL

You can edit the generated migration.sql before applying it. This is useful for:

  • Adding DEFAULT values Prisma didn’t generate
  • Adding UPDATE statements to backfill data before a SET NOT NULL

-- Example: add column with temp default, backfill, keep as required
ALTER TABLE "profiles" ADD COLUMN "username" TEXT NOT NULL DEFAULT '';
UPDATE "profiles" SET "username" = CONCAT('user_', LEFT("id", 8)) WHERE "username" = '';

The edited SQL is committed to git — it is the migration. Never edit SQL after it’s been applied to any environment.

Production migrations

Migrations run automatically via a Helm pre-upgrade Job (prisma migrate deploy). This means:

  • Migration SQL must be correct before merging — no manual intervention during deploy
  • Test migrations locally against a database with realistic data
  • If a migration fails, the Helm upgrade fails and ArgoCD will not roll forward

See .claude/rules/architecture.md for the full migration safety rules.

Working with Shared Packages

Shared packages are in packages/ and used as workspace dependencies (workspace:* in package.json).

@luckyplans/shared — Types and utilities

Exports:

  • Types: User, ServiceResponse, PaginatedResponse, UserProfileData, ProjectData, SkillData, ExperienceData, PublicProfileData, etc.
  • Enums: CoreMessagePattern (Redis message patterns), Proficiency (skill levels)
  • Utilities: generateId(), getRedisConfig(), getEnvVar()
  • Telemetry: bootstrapTelemetry(), injectTraceContext(), TraceContextExtractor

import { CoreMessagePattern, ServiceResponse, getRedisConfig } from '@luckyplans/shared';

After making changes to shared packages, Turborepo picks them up automatically in dev mode.

@luckyplans/config — ESLint and TypeScript configs

Shared presets that apps extend:

  • tsconfig.nextjs.json — for Next.js apps
  • tsconfig.nestjs.json — for NestJS services
  • tsconfig.base.json — for shared packages
  • eslint-preset.js — base ESLint rules

Adding a New Microservice

1. Create the service directory

mkdir -p apps/service-<name>/src

2. Copy boilerplate from an existing service

Use service-core as a template.

3. Update package.json

Change the name to @luckyplans/service-<name>.

4. Define message patterns

Add patterns in packages/shared/src/types/index.ts:

export enum NewServiceMessagePattern {
  GET_SOMETHING = 'new-service.getSomething',
  CREATE_SOMETHING = 'new-service.createSomething',
}

5. Create the service files

  • src/instrument.ts — OTel SDK bootstrap (must be imported first in main.ts)
  • src/main.ts — Bootstrap with Redis transport, Pino logger, SIGTERM shutdown
  • src/app.module.ts — Module definition with LoggerModule and TraceContextExtractor
  • src/<name>.controller.ts — @MessagePattern handlers
  • src/<name>.service.ts — Business logic (use Logger from @nestjs/common, never console)

Follow the patterns in apps/service-core/src/ — it has all of these set up:

// src/instrument.ts — MUST be the first import in main.ts
import { bootstrapTelemetry } from '@luckyplans/shared';
export const otelSdk = bootstrapTelemetry({ serviceName: 'service-<name>' });
// src/main.ts — import instrument.ts BEFORE anything else
import { otelSdk } from './instrument';
import { NestFactory } from '@nestjs/core';
// ... rest of bootstrap
process.on('SIGTERM', async () => { await otelSdk.shutdown(); });
// src/app.module.ts — include LoggerModule and TraceContextExtractor
import { LoggerModule } from 'nestjs-pino';
import { APP_INTERCEPTOR } from '@nestjs/core';
import { TraceContextExtractor } from '@luckyplans/shared';
// ... add LoggerModule.forRoot({...}) to imports
// ... add { provide: APP_INTERCEPTOR, useClass: TraceContextExtractor } to providers

6. Register in the API Gateway

Create a new module in apps/api-gateway/src/<name>/:

  • <name>.module.ts — Register ClientsModule with Redis transport
  • <name>.resolver.ts — GraphQL resolvers (use injectTraceContext() when calling ClientProxy.send())

Import the module in apps/api-gateway/src/app.module.ts.

// In resolvers — wrap payloads with injectTraceContext() for end-to-end tracing
import { injectTraceContext } from '@luckyplans/shared';
 
return firstValueFrom(
  this.client.send(Pattern.DO_SOMETHING, injectTraceContext({ id })),
);

7. Add a Helm template

Add a Deployment template at infrastructure/helm/luckyplans/templates/service-<name>/deployment.yaml.

8. Install dependencies and test

pnpm install
pnpm build
pnpm dev

All Commands

| Command | Description |
| --- | --- |
| pnpm setup | First-time project setup (deps, .env, build shared pkgs) |
| pnpm dev | Start all services with hot reload |
| pnpm build | Build all packages and apps |
| pnpm lint | Lint all packages |
| pnpm type-check | Type-check all packages |
| pnpm format | Format all files with Prettier |
| pnpm format:check | Check formatting without modifying |
| pnpm clean | Remove all build artifacts |
| pnpm deploy:local | Full deploy to local k3d cluster (build all + Helm) |
| deploy-local.sh web | Rebuild and redeploy a single service |
| deploy-local.sh --helm-only | Helm upgrade only (config/secret changes, no builds) |
| pnpm deploy:teardown | Destroy the local k3d cluster |
| pnpm deploy:status | Show cluster, Helm release, and pod status |

Observability (Local Dev)

The local dev stack includes a full observability pipeline. NestJS services automatically emit traces, metrics, and structured logs via OpenTelemetry.

Grafana

Open http://localhost:3002  (auto-login as admin). Pre-configured datasources:

  • Prometheus — metrics from OTel Collector, scraped every 15s
  • Loki — structured JSON logs from services (via OTel Collector)
  • Tempo — distributed traces with service map

Pre-built dashboards are in the LuckyPlans folder:

  • RED Metrics — request rate, error rate, latency (p50/p95/p99) per service
  • Infrastructure — OTel Collector throughput, dropped telemetry

Viewing traces

  1. Make a GraphQL request (e.g., visit http://localhost:3000/dashboard )
  2. Open Grafana → Explore → select Tempo datasource
  3. Search by service name (api-gateway or service-core)
  4. Click a trace to see the full span tree: HTTP request → GraphQL resolver → Redis pub/sub → service-core handler

Viewing logs

  1. Open Grafana → Explore → select Loki datasource
  2. Query: {service_name="api-gateway"} or use the Log Browser
  3. Each log line includes traceId and spanId for correlation — click the TraceID link to jump to the associated trace in Tempo

How instrumentation works

  • OTel SDK is bootstrapped in instrument.ts (first import in main.ts) — auto-instruments HTTP, Express, ioredis, GraphQL
  • Pino (nestjs-pino) outputs structured JSON logs with trace context (traceId, spanId)
  • Trace propagation across Redis: injectTraceContext() on the gateway side, TraceContextExtractor interceptor on the service side
  • Config: OTEL_EXPORTER_OTLP_ENDPOINT (default http://localhost:4317), LOG_LEVEL (default debug in dev)

Troubleshooting

Infrastructure not starting

docker compose ps          # check container status
docker compose logs keycloak  # check Keycloak logs

Redis connection refused

Make sure infrastructure is running:

docker compose up -d
docker compose exec redis redis-cli ping
# Should return: PONG

Keycloak not ready

Keycloak takes ~30 seconds to start and import the realm. Check status:

docker compose ps keycloak   # wait for "healthy"
curl -sf http://localhost:8080/realms/luckyplans  # should return realm JSON
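
If the realm responds but logins still fail, test the ROPC grant against Keycloak directly, cutting the gateway out of the loop. The client id and test user come from the realm export described earlier:

```shell
# Request a token via the password (ROPC) grant; a JSON body containing
# access_token means Keycloak and the test user are fine.
curl -s http://localhost:8080/realms/luckyplans/protocol/openid-connect/token \
  -d 'grant_type=password' \
  -d 'client_id=luckyplans-frontend' \
  -d 'username=testuser@luckyplans.xyz' \
  -d 'password=password'
```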

Login redirect loop

  • Ensure the gateway is running (pnpm --filter @luckyplans/api-gateway dev)
  • Check that SESSION_SECRET is set in .env
  • Check browser DevTools > Application > Cookies for session_id

Port already in use

# Linux/Mac
lsof -i :3000
 
# Windows
netstat -ano | findstr :3000

Turborepo cache issues

pnpm clean
pnpm install
pnpm build

Shared package changes not reflected

pnpm --filter @luckyplans/shared build
pnpm dev

No traces/metrics in Grafana

  • Verify OTel Collector is running: docker compose ps otel-collector
  • Check collector logs: docker compose logs otel-collector
  • Verify OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 is in your .env
  • Check Prometheus targets at http://localhost:9090/targets — otel-collector should show as UP
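
You can also query target health from the CLI via Prometheus's standard HTTP API (assumes jq is installed):

```shell
# List scrape targets and their health; otel-collector should report "up".
curl -s 'http://localhost:9090/api/v1/targets' \
  | jq -r '.data.activeTargets[] | "\(.labels.job): \(.health)"'
```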