Ludo API Node.js — Production Backend with Express, JWT & Docker
A Ludo game backend serving thousands of concurrent players isn't a simple Express app — it's a layered system of authentication middleware, request validation, structured logging, error tracking, background job queues, and Redis-backed pub/sub for real-time state. This guide covers building that system from project initialization through production deployment: a complete middleware stack, JWT authentication with short-lived access tokens and long-lived refresh tokens, Pino-structured logging for production observability, Sentry error tracking, Jest unit and integration tests, Docker Compose for local development, and a production readiness checklist before you flip the switch.
Project Initialization and Dependencies
Start by scaffolding a Node.js project with ES modules (the modern standard) and TypeScript for production reliability. TypeScript catches mismatched types at build time rather than runtime, which is critical for a game backend where incorrect data shapes cause silent failures. The essential dependencies break into five groups: Express and middleware (express, cors, helmet, express-rate-limit), real-time (socket.io), security and auth (jsonwebtoken, bcryptjs), database and cache (ioredis, better-sqlite3 or prisma), and observability (pino, pino-http, @sentry/node).
# Initialize project with ES modules
mkdir ludo-api-server && cd ludo-api-server
npm init -y
npm pkg set type="module"
npm pkg set scripts.dev="tsx watch src/server.ts"
npm pkg set scripts.build="tsc"
npm pkg set scripts.start="node dist/server.js"
npm pkg set scripts.test="NODE_OPTIONS='--experimental-vm-modules' jest"
# Production dependencies
npm install express socket.io jsonwebtoken bcryptjs
npm install ioredis better-sqlite3 express-rate-limit
npm install express-validator uuid cors helmet
npm install pino pino-http @sentry/node
# Development dependencies
npm install --save-dev typescript tsx jest ts-jest supertest
npm install --save-dev @types/node @types/express @types/jest @types/supertest @types/jsonwebtoken
Express Middleware Stack
The middleware stack processes every request in order, from the outside in (inbound) and inside out (error handlers). The sequence matters: trust proxies first (so req.ip is correct for rate limiting behind a load balancer), then security headers, CORS, body parsing, rate limiting, request logging, and finally the route handlers. Getting the order wrong — for instance, parsing the body before rate limiting — causes hard-to-debug behavior when malformed payloads crash parsers before the rate limiter can reject them.
import express from 'express';
import http from 'http';
import { Server } from 'socket.io';
import cors from 'cors';
import helmet from 'helmet';
import rateLimit from 'express-rate-limit';
import pinoHttp from 'pino-http';
import { errorHandler } from './middleware/errorHandler';
import { authenticate } from './middleware/auth';
import { validateRequest } from './middleware/validateRequest';
import { redis } from './lib/redis';
import logger from './lib/logger';
import authRouter from './routes/auth';
import roomsRouter from './routes/rooms';
import gamesRouter from './routes/games';
const app = express();
const server = http.createServer(app);
// Trust proxies first — required for correct IP behind load balancer
app.set('trust proxy', 1);
// Security headers (HSTS, CSP, X-Frame-Options, etc.)
app.use(helmet({
contentSecurityPolicy: false // Safe to disable for a JSON-only API; keep it on if you serve HTML
}));
// CORS — restrict to your frontend origin in production
app.use(cors({
origin: process.env.CLIENT_URL || 'http://localhost:3000',
credentials: true,
methods: ['GET', 'POST', 'PUT', 'DELETE'],
allowedHeaders: ['Content-Type', 'Authorization', 'X-API-Key', 'Idempotency-Key', 'X-Request-ID']
}));
// Body parsing — limit prevents large payload DoS
app.use(express.json({ limit: '10kb' }));
app.use(express.urlencoded({ extended: true, limit: '10kb' }));
// Structured request/response logging
app.use(pinoHttp({ logger }));
// Global rate limiter — 100 requests / 15 minutes per IP
app.use(rateLimit({
windowMs: 15 * 60 * 1000,
max: 100,
standardHeaders: true,
legacyHeaders: false,
keyGenerator: (req) => req.ip, // trust proxy is set, so req.ip is already the real client IP; never key on the raw X-Forwarded-For header, which clients can spoof
handler: (req, res) => {
res.status(429).json({
success: false,
error: { code: 'RATE_LIMIT_EXCEEDED', status: 429, message: 'Too many requests' }
});
}
}));
// Stricter per-player limiter for move submissions. In practice, define this
// in the games router and apply it after `authenticate` so req.user is set;
// unauthenticated requests fall back to per-IP limiting.
const moveLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 30,
  keyGenerator: (req) => req.user?.sub || req.ip
});
// Routes
app.get('/health', (req, res) => res.json({ status: 'ok', uptime: process.uptime() }));
app.use('/api/auth', authRouter);
app.use('/api/rooms', authenticate, roomsRouter);
app.use('/api/games', authenticate, gamesRouter);
// 404 handler
app.use((req, res) => {
res.status(404).json({
success: false,
error: { code: 'NOT_FOUND', status: 404, message: 'Endpoint not found' }
});
});
// Global error handler — MUST have 4 parameters (err, req, res, next)
app.use(errorHandler);
const PORT = process.env.PORT || 3001;
server.listen(PORT, () => {
logger.info({ port: PORT }, 'Ludo API server started');
});
export { app, server };
JWT Authentication with Access and Refresh Tokens
Access tokens are short-lived (1 hour) so a leaked token has a limited window of abuse. Refresh tokens are long-lived (7 days) and stored server-side in Redis — if a player's account is compromised, you can revoke all their refresh tokens instantly by deleting the Redis key, forcing re-authentication on all devices. The access token carries minimal claims (player ID, name, role) so verification is fast and requires no database lookup. The refresh token is opaque — the server looks it up in Redis to validate it and retrieve the associated player data.
// src/middleware/auth.ts
import { Request, Response, NextFunction } from 'express';
import jwt from 'jsonwebtoken';
import { redis } from '../lib/redis';
import logger from '../lib/logger';
export interface PlayerPayload {
sub: string; // playerId
name: string;
rank: string;
role: 'player' | 'admin';
iat: number;
exp: number;
}
// Augment Express's global Request type so req.user is typed everywhere
declare global {
  namespace Express {
    interface Request {
      user?: PlayerPayload;
    }
  }
}
const JWT_SECRET = process.env.JWT_SECRET!;
export const authenticate = (req: Request, res: Response, next: NextFunction) => {
const authHeader = req.headers.authorization;
if (!authHeader?.startsWith('Bearer ')) {
return res.status(401).json({
success: false,
error: { code: 'UNAUTHORIZED', status: 401, message: 'Missing or malformed Authorization header' }
});
}
const token = authHeader.split(' ')[1];
try {
req.user = jwt.verify(token, JWT_SECRET) as PlayerPayload;
next();
} catch (err) {
if (err instanceof jwt.TokenExpiredError) {
return res.status(401).json({
success: false,
error: { code: 'TOKEN_EXPIRED', status: 401, message: 'Access token expired. Use refresh token to get a new one.' }
});
}
return res.status(401).json({
success: false,
error: { code: 'INVALID_TOKEN', status: 401, message: 'Invalid access token' }
});
}
};
export const requireRole = (role: PlayerPayload['role']) => {
return (req: Request, res: Response, next: NextFunction) => {
if (!req.user || req.user.role !== role) {
return res.status(403).json({
success: false,
error: { code: 'FORBIDDEN', status: 403, message: 'Insufficient permissions for this action' }
});
}
next();
};
};
// src/services/authService.ts
import jwt from 'jsonwebtoken';
import { v4 as uuidv4 } from 'uuid';
import { redis } from '../lib/redis';
import { db } from '../lib/db';
import logger from '../lib/logger';

const JWT_SECRET = process.env.JWT_SECRET!;

export const generateTokens = async (player: { id: string; name: string; rank: string; role: 'player' | 'admin' }) => {
  const ACCESS_TOKEN_TTL = '1h';
  const REFRESH_TOKEN_TTL = 7 * 24 * 60 * 60; // 7 days in seconds
  const accessToken = jwt.sign(
    { sub: player.id, name: player.name, rank: player.rank, role: player.role },
    JWT_SECRET,
    { expiresIn: ACCESS_TOKEN_TTL, issuer: 'ludokingapi.site' }
  );
  const refreshToken = uuidv4();
  // ioredis: SET with EX applies the TTL atomically with the write
  await redis.set(`refresh:${player.id}:${refreshToken}`, JSON.stringify({
    playerId: player.id, issuedAt: Date.now()
  }), 'EX', REFRESH_TOKEN_TTL);
  logger.info({ playerId: player.id }, 'Tokens issued');
  return { accessToken, refreshToken, expiresIn: 3600 };
};
export const refreshAccessToken = async (playerId: string, refreshToken: string) => {
  const stored = await redis.get(`refresh:${playerId}:${refreshToken}`);
  if (!stored) {
    throw new Error('Invalid or expired refresh token');
  }
  const playerData = JSON.parse(stored);
  const player = await db.getPlayer(playerData.playerId);
  if (!player) throw new Error('Player not found');
  return generateTokens(player);
};
export const revokeRefreshTokens = async (playerId: string) => {
  // KEYS is acceptable at this key cardinality; prefer SCAN on large keyspaces
  const keys = await redis.keys(`refresh:${playerId}:*`);
  if (keys.length > 0) await redis.del(...keys);
  logger.info({ playerId, revokedCount: keys.length }, 'All refresh tokens revoked');
};
Structured Logging with Pino
Pino produces JSON logs by default — machine-readable, easily ingested by Logstash/Splunk/Datadog, and fast: benchmarks show it substantially outperforming heavier loggers like Winston. Each log entry includes level, time, pid, hostname, and a msg field. Add structured context fields for queryability: log the player ID on every auth event, the game ID on every move, the room code on every room event. You can then filter logs in your aggregator by playerId=plr_xyz or gameId=gme_abc without parsing free text.
// src/lib/logger.ts
import pino from 'pino';
const isProduction = process.env.NODE_ENV === 'production';
const logger = pino({
level: process.env.LOG_LEVEL || 'info',
formatters: {
level: (label) => ({ level: label }),
bindings: () => ({
service: 'ludo-api-server',
version: process.env.npm_package_version
})
},
timestamp: pino.stdTimeFunctions.isoTime,
// In production: JSON to stdout for container log collection
// In development: pretty-print for human readability
transport: isProduction ? undefined : {
target: 'pino-pretty',
options: { colorize: true, translateTime: 'SYS:standard' }
}
});
// Game-specific logger helpers: child loggers carry context on every entry
export const gameLogger = (gameId: string, roomCode: string) =>
  logger.child({ gameId, roomCode });
export const playerLogger = (playerId: string) =>
  logger.child({ playerId });
export default logger;
// Usage examples:
// logger.info({ gameId, roomCode, playerId }, 'Player joined room');
// logger.warn({ gameId, reason: 'turn_timeout' }, 'Auto-forfeiting turn');
// logger.error({ gameId, error: err.message, stack: err.stack }, 'Game engine crashed');
Error Handling Middleware with Sentry
Express catches synchronously thrown errors automatically, but in Express 4 a rejected promise from an async handler never reaches the error middleware unless you forward it with next() — which is exactly what the asyncHandler wrapper at the end of this file does. The global error handler distinguishes between operational errors (expected: validation failures, not-found, unauthorized — return a 4xx with a clear message) and programmer errors (unexpected: null pointer, database corruption — return 500 and trigger Sentry so you get a full stack trace with request context). This distinction is critical: operational errors don't need Sentry alerts (they're expected flow control), but programmer errors always need investigation.
// src/middleware/errorHandler.ts
import { Request, Response, NextFunction } from 'express';
import * as Sentry from '@sentry/node';
import logger from '../lib/logger';
export class AppError extends Error {
constructor(
public code: string,
public statusCode: number,
message: string,
public details?: Record<string, unknown>[]
) {
super(message);
this.name = 'AppError';
}
}
export const errorHandler = (
err: Error | AppError,
req: Request,
res: Response,
_next: NextFunction
) => {
const requestId = (req.headers['x-request-id'] as string) || crypto.randomUUID(); // global WebCrypto in modern Node
const context = {
requestId,
method: req.method,
url: req.url,
playerId: req.user?.sub,
ip: req.ip,
userAgent: req.headers['user-agent']
};
if (err instanceof AppError) {
// Operational error — expected, user-facing, no Sentry alert
logger.warn({ ...context, err: err.message, code: err.code }, 'Operational error');
return res.status(err.statusCode).json({
success: false,
error: {
code: err.code,
status: err.statusCode,
message: err.message,
details: err.details,
requestId
}
});
}
// Programmer error — unexpected, log + Sentry + generic response
logger.error({ ...context, stack: err.stack }, 'Unhandled error');
Sentry.captureException(err, { extra: context });
return res.status(500).json({
success: false,
error: {
code: 'INTERNAL_ERROR',
status: 500,
message: 'An unexpected error occurred. Our team has been notified.',
requestId
}
});
};
// Async route wrapper — eliminates try/catch in every route handler
export const asyncHandler = (fn: (req: Request, res: Response, next: NextFunction) => Promise<unknown>) => {
return (req: Request, res: Response, next: NextFunction) => {
Promise.resolve(fn(req, res, next)).catch(next);
};
};
Jest Testing with Supertest
Unit tests cover individual functions — game engine logic, token generation, validation schemas. Integration tests cover the full request/response cycle: make an HTTP request to the Express app (via Supertest), verify the status code, response body, and database side effects. Never mock the game engine in integration tests — the whole point is to catch interaction bugs between components. Use a test database (SQLite in-memory or a dedicated test PostgreSQL instance) and a mock Redis client for isolation.
// __tests__/games.integration.test.ts
import { describe, test, expect, beforeAll, afterAll, beforeEach } from '@jest/globals';
import request from 'supertest';
import { app } from '../src/server';
import { db } from '../src/lib/db';
import { generateTokens } from '../src/services/authService';
describe('Games API', () => {
let accessToken: string;
let playerId: string;
beforeAll(async () => {
const player = await db.createPlayer({ name: 'TestPlayer', rank: 'Bronze' });
playerId = player.id;
const tokens = await generateTokens(player);
accessToken = tokens.accessToken;
});
test('POST /api/games creates a new game room', async () => {
const response = await request(app)
.post('/api/games')
.set('Authorization', `Bearer ${accessToken}`)
.send({ mode: 'classic', maxPlayers: 4, turnTimeLimit: 30 });
expect(response.status).toBe(201);
expect(response.body.success).toBe(true);
expect(response.body.data.roomCode).toBeDefined();
expect(response.body.data.status).toBe('waiting');
});
test('POST /api/games/:id/moves rejects invalid dice value', async () => {
const game = await db.createGame({ hostId: playerId, mode: 'classic' });
const response = await request(app)
.post(`/api/games/${game.id}/moves`)
.set('Authorization', `Bearer ${accessToken}`)
.send({ pieceId: 'piece_1', targetPosition: 5, diceValue: 7 }); // Invalid: max 6
expect(response.status).toBe(422);
expect(response.body.success).toBe(false);
expect(response.body.error.code).toBe('VALIDATION_ERROR');
});
test('Returns 401 when token is missing', async () => {
const response = await request(app).get('/api/games/any-id');
expect(response.status).toBe(401);
expect(response.body.error.code).toBe('UNAUTHORIZED');
});
afterAll(async () => {
await db.close();
});
});
// jest.config.ts
export default {
  testEnvironment: 'node',
  // ts-jest compiles TypeScript for Jest; tsx is a runner, not a Jest transformer
  transform: { '^.+\\.tsx?$': ['ts-jest', { useESM: true }] },
  extensionsToTreatAsEsm: ['.ts'],
  testMatch: ['**/__tests__/**/*.test.ts'],
  collectCoverageFrom: ['src/**/*.ts', '!src/**/*.d.ts'],
  coverageThreshold: { global: { branches: 70, functions: 70, lines: 70 } }
};
Docker Compose for Local Development
Docker Compose lets you spin up the entire stack — Express API, Redis, PostgreSQL, and a Redis Commander UI for debugging — with a single command. The docker-compose.dev.yml mounts the source directory as a volume so code changes are reflected immediately without rebuilding images. Use non-root database users, health checks for service dependencies, and resource limits to catch memory leaks early.
# docker-compose.dev.yml
version: '3.9'
services:
  # Main API server
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3001:3001"
    environment:
      NODE_ENV: development
      DATABASE_URL: postgresql://ludo:ludo123@postgres:5432/ludo_dev
      REDIS_URL: redis://redis:6379
      JWT_SECRET: dev-secret-change-in-production
      CLIENT_URL: http://localhost:3000
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    command: "npm run dev"
    restart: "no"
  # PostgreSQL database
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: ludo_dev
      POSTGRES_USER: ludo
      POSTGRES_PASSWORD: ludo123
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ludo"]
      interval: 5s
      timeout: 5s
      retries: 5
  # Redis for sessions, cache, and pub/sub
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  # Redis Commander — web UI for Redis inspection
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      REDIS_HOSTS: local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
volumes:
  postgres_data:
  redis_data:

# Dockerfile.dev
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["npm", "run", "dev"]
Run the stack with docker compose -f docker-compose.dev.yml up and tear it down with docker compose -f docker-compose.dev.yml down -v. The -v flag destroys persistent volumes, giving you a clean database on every fresh start. See the Docker deployment guide for production Docker configurations with multi-stage builds, non-root containers, and health check endpoints.
Production Deployment Checklist
Before going live, work through this checklist systematically. Each item represents a real incident that has taken down game APIs in production.
Environment & Secrets
- JWT_SECRET — Use openssl rand -base64 64 to generate a cryptographically random value. Never commit it to git or use a default.
- DATABASE_URL — Use SSL connections to PostgreSQL in production. Set ?sslmode=require in the connection string.
- REDIS_URL — Use Redis AUTH and TLS in production. Redis 7+ supports ACL users for fine-grained permissions.
- NODE_ENV=production — Enables Express optimizations and disables development-only middleware.
- SENTRY_DSN — Configure Sentry with the DSN so errors are captured before deployment.
Security Hardening
- Helmet middleware configured with strict CSP, HSTS (max-age 1 year), and X-Frame-Options DENY.
- Rate limiting enabled on all endpoints — global, per-route, and per-user tiers.
- API keys hashed with bcrypt (cost factor 12) before storage.
- JWT access token expiry set to 1 hour or less — 15 minutes for high-security operations.
- Refresh tokens stored in Redis with a 7-day TTL and revocation capability.
- CORS restricted to the exact frontend origin, not *.
- Request body size limited to 10kb to prevent large payload attacks.
- Database queries use parameterized statements — never string concatenation with user input.
Observability
- Pino logging configured with JSON output in production.
- All log entries include requestId for distributed trace correlation.
- Sentry initialized before all other middleware for complete error capture.
- Health endpoint returns database and Redis connectivity status, not just "ok".
- Metrics endpoint (/metrics) exports request latency histograms, error rates, and active Socket.IO connections in Prometheus format.
- Structured logs piped to stdout — let the container runtime (Docker, Kubernetes) handle log aggregation.
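The "deep" health endpoint called for above can be sketched as an aggregator over injected dependency ping functions. The names, timeout, and report shape here are illustrative, not part of the stack shown earlier:

```typescript
// Deep health check: aggregates dependency pings into a single report instead
// of a bare { status: 'ok' }. Ping functions are injected so the handler stays
// testable; each probe fails if it doesn't answer within the timeout.
type Ping = () => Promise<void>;

export interface HealthReport {
  status: 'ok' | 'degraded';
  uptime: number;
  services: Record<string, 'up' | 'down'>;
}

export const checkHealth = async (
  pings: Record<string, Ping>,
  timeoutMs = 1000
): Promise<HealthReport> => {
  const services: Record<string, 'up' | 'down'> = {};
  await Promise.all(
    Object.entries(pings).map(async ([name, ping]) => {
      try {
        await Promise.race([
          ping(),
          new Promise((_, reject) => setTimeout(() => reject(new Error('timeout')), timeoutMs))
        ]);
        services[name] = 'up';
      } catch {
        services[name] = 'down';
      }
    })
  );
  const status = Object.values(services).every((s) => s === 'up') ? 'ok' : 'degraded';
  return { status, uptime: process.uptime(), services };
};

// Route wiring sketch (redis and db are the clients from the server file):
// app.get('/health', asyncHandler(async (_req, res) => {
//   const report = await checkHealth({
//     redis: async () => { await redis.ping(); },
//     postgres: async () => { await db.query('SELECT 1'); }
//   });
//   res.status(report.status === 'ok' ? 200 : 503).json(report);
// }));
```

Returning 503 when any dependency is down lets the load balancer rotate the instance out before players hit it.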
Database & Caching
- All tables have appropriate indexes — particularly on player_id, game_id, and room_code, plus composite indexes for cursor pagination queries.
- Connection pool sized appropriately: pool.min=2, pool.max=20 for PostgreSQL.
- Redis used for: rate limit counters, session data, refresh tokens, idempotency records, Socket.IO adapter pub/sub, and game state cache for completed games.
- Database migrations tested in staging before applying to production.
Infrastructure
- Graceful shutdown handling: drain Socket.IO connections, finish in-flight requests, close database pools before terminating. Node.js doesn't do this by default — implement it explicitly.
- Process manager: use PM2 or Kubernetes with proper restart policies. Don't rely on node server.js &.
- Horizontal scaling with the Redis adapter for Socket.IO — events must fan out to all instances.
- Sticky sessions configured at the load balancer if not using the Redis adapter.
- CDN in front of the API for leaderboard endpoints and static assets with appropriate cache headers.
How It Works: The Request Lifecycle in Node.js
A player's move starts as an HTTP POST from the mobile app. The request hits the Express middleware stack in sequence: trust proxy normalization, Helmet security headers, CORS validation, body parsing with the 10kb limit, Pino HTTP logging, the global rate limiter (backed by a shared Redis store in production so counts span instances), and finally the route handler. The route handler's authentication middleware decodes the JWT Bearer token and attaches req.user. The validation middleware (express-validator) checks the request body against the schema — if it fails, AppError is thrown with a VALIDATION_ERROR code and 422 status.
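That validation step can be sketched as a tiny middleware factory. This is a hand-rolled stand-in, not express-validator's API, and the field rules and error shape are illustrative:

```typescript
// Minimal validation middleware factory (a sketch, not express-validator).
// A validator returns a list of error details; an empty list means valid.
type ValidationError = { field: string; message: string };
type Validator = (body: Record<string, unknown>) => ValidationError[];

export const validateRequest = (validate: Validator) => {
  return (req: any, res: any, next: (err?: unknown) => void) => {
    const errors = validate(req.body ?? {});
    if (errors.length > 0) {
      // Mirror the 422 VALIDATION_ERROR shape used elsewhere in this guide
      return res.status(422).json({
        success: false,
        error: { code: 'VALIDATION_ERROR', status: 422, message: 'Invalid request body', details: errors }
      });
    }
    next();
  };
};

// Example validator for a move payload (fields are illustrative)
export const validateMoveBody: Validator = (body) => {
  const errors: ValidationError[] = [];
  const dice = body.diceValue;
  if (typeof dice !== 'number' || dice < 1 || dice > 6) {
    errors.push({ field: 'diceValue', message: 'diceValue must be an integer between 1 and 6' });
  }
  if (typeof body.pieceId !== 'string') {
    errors.push({ field: 'pieceId', message: 'pieceId is required' });
  }
  return errors;
};
```

In the real stack, express-validator chains plus a shared error-formatting middleware play this role; the shape of the 422 response is what matters.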
If validation passes, the game engine processes the move: it acquires a Redis distributed lock on the game to prevent race conditions when two moves arrive simultaneously, validates the turn order, applies the move, updates the database, publishes the event to the Socket.IO adapter for cross-instance broadcast, and releases the lock. The route handler returns 201 with the move details. If any step throws an error, the asyncHandler wrapper catches the rejection and passes it to errorHandler, which logs it (and sends it to Sentry if it's a programmer error) before returning the structured JSON error response.
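The per-game lock in that sequence follows Redis's SET NX PX pattern. A minimal sketch, with the client injected behind a small interface so the logic is independent of ioredis (the interface, key names, and TTL are illustrative):

```typescript
// Per-game mutex via SET key value NX PX: only one instance may process a move
// for a given game at a time. The random token guards against releasing a lock
// that already expired and was re-acquired by another instance.
import { randomUUID } from 'node:crypto';

export interface LockClient {
  // ioredis-compatible shape: set returns 'OK' when the key was created,
  // null when it already exists
  set(key: string, value: string, px: 'PX', ttl: number, nx: 'NX'): Promise<'OK' | null>;
  get(key: string): Promise<string | null>;
  del(key: string): Promise<number>;
}

export const acquireLock = async (client: LockClient, gameId: string, ttlMs = 5000) => {
  const token = randomUUID();
  const ok = await client.set(`lock:game:${gameId}`, token, 'PX', ttlMs, 'NX');
  return ok === 'OK' ? token : null;
};

export const releaseLock = async (client: LockClient, gameId: string, token: string) => {
  const key = `lock:game:${gameId}`;
  // Check-and-delete. In production this pair should be one Lua script (or a
  // library such as redlock) so the GET and DEL are atomic; it is split here
  // only to keep the sketch dependency-free.
  if ((await client.get(key)) === token) {
    await client.del(key);
    return true;
  }
  return false;
};
```

A second move arriving while the lock is held gets null back from acquireLock and can be retried after a short backoff or rejected with a 409.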
Common Mistakes
Missing async error handling. Express route handlers that don't await anything are fine, but any async function that throws will silently swallow the error and hang the request. Use the asyncHandler wrapper on every async route — it wraps Promise.resolve(fn(req, res, next)).catch(next) so thrown errors reach the error middleware.
No graceful shutdown. When Kubernetes sends SIGTERM to your container, Node.js doesn't automatically close connections or flush logs. Implement a shutdown handler that stops accepting new connections, waits for in-flight requests to complete (with a 30-second timeout), closes Redis and database connections, and exits cleanly. Otherwise, players mid-game get abruptly disconnected with no cleanup.
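The shutdown sequence just described can be sketched as a small orchestrator: stop accepting connections, give each cleanup step a bounded window, then exit. The closer list and timeout are illustrative:

```typescript
// Graceful shutdown orchestrator: a sketch of the SIGTERM sequence described
// above. Each closer is an async cleanup step (HTTP drain, Socket.IO, DB pool,
// Redis quit) and the whole sequence races one overall deadline.
type Closer = { name: string; close: () => Promise<void> };

export const shutdown = async (closers: Closer[], timeoutMs = 30_000): Promise<string[]> => {
  const closed: string[] = [];
  // One shared deadline: once it fires, remaining steps are abandoned
  const deadline = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
  for (const { name, close } of closers) {
    await Promise.race([close().then(() => { closed.push(name); }), deadline]);
  }
  return closed;
};

// Wiring sketch (server, io, redis are the instances from the server file):
// process.on('SIGTERM', async () => {
//   await shutdown([
//     { name: 'http', close: () => new Promise<void>((r) => server.close(() => r())) },
//     { name: 'sockets', close: async () => { io.disconnectSockets(true); } },
//     { name: 'redis', close: async () => { await redis.quit(); } }
//   ]);
//   process.exit(0);
// });
```

Returning the list of completed steps makes the final shutdown log entry useful: you can see exactly which resource was still hanging when the deadline hit.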
Logging sensitive data. Pino's structured logging makes it easy to accidentally log JWT tokens, passwords, or player personal data. Use a redaction function: pino({ redact: ['req.headers.authorization', 'body.password'] }) to strip sensitive fields before they reach the log output.
Single Redis instance in production. A single Redis instance is a single point of failure. Use Redis Sentinel for automatic failover or Redis Cluster for horizontal sharding. For Socket.IO pub/sub specifically, even a brief Redis outage causes players on different server instances to stop seeing each other's moves.
Skipping database transaction isolation. When a game ends, multiple writes happen atomically: updating player ranks, recording the game in history, awarding points. Wrap these in a PostgreSQL transaction with SERIALIZABLE or READ COMMITTED isolation. Without transactions, a server crash between writes leaves the database in an inconsistent state.
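Assuming a pg-style query client, the game-end writes might look like the following sketch; the table and column names are illustrative:

```typescript
// End-of-game writes wrapped in a single transaction: either all three
// statements commit together or none of them do.
export interface TxClient {
  query(sql: string, params?: unknown[]): Promise<unknown>;
}

export const finalizeGame = async (client: TxClient, gameId: string, winnerId: string, points: number) => {
  await client.query('BEGIN');
  try {
    await client.query('UPDATE players SET rank_points = rank_points + $1 WHERE id = $2', [points, winnerId]);
    await client.query('INSERT INTO game_history (game_id, winner_id) VALUES ($1, $2)', [gameId, winnerId]);
    await client.query("UPDATE games SET status = 'completed' WHERE id = $1", [gameId]);
    await client.query('COMMIT');
  } catch (err) {
    // Roll back every write if any statement fails, so a crash or constraint
    // violation never leaves half-finished game records behind
    await client.query('ROLLBACK');
    throw err;
  }
};
```

With node-postgres, the client here must be a single checked-out pool client, not the pool itself, or the BEGIN and COMMIT may run on different connections.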
Frequently Asked Questions
Why store refresh tokens in Redis instead of using stateless JWTs?
Refresh tokens are opaque, long-lived tokens (7 days) that can be exchanged for new short-lived access tokens (1 hour). Storing them in Redis means you can revoke them instantly — if a player's device is stolen, delete the refresh token from Redis and the attacker can't get new access tokens even if they have the old refresh token. If refresh tokens were stateless JWTs, you'd have to wait for them to expire. Redis also gives you a natural TTL (they auto-delete after 7 days) and lets you track how many devices a player is logged into.
How should the server handle player disconnects and reconnections?
On disconnect, store the player's session state in Redis (current room, last action timestamp) and start a grace period timer (30 seconds). Don't immediately forfeit their turn or remove them from the room. On reconnect, the client sends the stored session token — if it exists and the grace period hasn't expired, restore the player to their seat and emit a player:reconnected event to the room. If the grace period expires, mark them as "away" and set a timer for their turn. If they still don't reconnect by their turn, auto-forfeit and notify the room. See the real-time API guide for the full reconnection protocol.
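The grace-period bookkeeping above can be sketched deterministically with an injected clock. Redis is replaced by an in-memory map for the sketch, and the field names and 30-second window follow the description:

```typescript
// Disconnect/reconnect grace-period tracking. The clock is injected so the
// logic is deterministic; in production the map would live in Redis with a TTL.
type Session = { roomCode: string; disconnectedAt: number };

export class ReconnectTracker {
  private sessions = new Map<string, Session>();

  constructor(private graceMs = 30_000, private now: () => number = Date.now) {}

  onDisconnect(playerId: string, roomCode: string) {
    this.sessions.set(playerId, { roomCode, disconnectedAt: this.now() });
  }

  // Returns the room to restore the player into, or null if no session exists
  // or the grace period already expired
  onReconnect(playerId: string): string | null {
    const session = this.sessions.get(playerId);
    if (!session) return null;
    this.sessions.delete(playerId);
    const withinGrace = this.now() - session.disconnectedAt <= this.graceMs;
    return withinGrace ? session.roomCode : null;
  }
}
```

The null return is where the "mark as away" branch from the answer above hooks in.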
How do you prevent memory leaks in a long-running Node.js game server?
Node.js game servers are prone to three common leak patterns. First, Socket.IO event listeners that aren't cleaned up when players leave — always call socket.removeAllListeners() on disconnect. Second, Redis subscriptions that accumulate — unsubscribe from channels when a game ends. Third, in-memory game state objects that reference event emitters — clear the game state from memory, not just from the database, when a game completes. Run --expose-gc and call global.gc() periodically in staging to detect leaks before they hit production. Set a max heap size with --max-old-space-size=1536 and let the process crash and restart (via PM2/Kubernetes) if it exceeds that limit — a restart is far better than an OOM crash mid-game.
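The first leak pattern — listeners that outlive the game — can be demonstrated with Node's built-in EventEmitter standing in for a socket (event names and the handler bodies are illustrative):

```typescript
// Listener cleanup on disconnect: EventEmitter stands in for a Socket.IO
// socket. Without teardown (or socket.removeAllListeners()), every game a
// socket joins leaves handlers attached for the lifetime of the connection.
import { EventEmitter } from 'node:events';

export const attachGameHandlers = (socket: EventEmitter, gameId: string) => {
  const onMove = (move: unknown) => {
    // apply `move` to the game identified by gameId (elided in this sketch)
    void move; void gameId;
  };
  const onChat = (msg: unknown) => {
    void msg; // relay chat to the room (elided)
  };
  socket.on('game:move', onMove);
  socket.on('game:chat', onChat);

  // Call the returned teardown on disconnect so handlers don't accumulate
  return () => {
    socket.off('game:move', onMove);
    socket.off('game:chat', onChat);
  };
};
```

Keeping named handler references (rather than inline arrows) is what makes the targeted socket.off calls possible.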
Should you build the backend in TypeScript or plain JavaScript?
TypeScript for production. Game backends deal with complex data structures — game state, player profiles, move payloads, WebSocket events — where a field rename in one file can silently break another. TypeScript catches these at compile time. The initial setup cost is small: a tsconfig.json, @types/* packages, and optionally a migration path where you add types incrementally rather than rewriting everything. The compile step also runs as part of your CI/CD pipeline, catching type errors before they reach staging.
How do you stop clients from cheating?
Server-authoritative game logic is non-negotiable for competitive games. The client can render and predict, but every move is validated server-side before being applied. The validation sequence: check if it's the player's turn (from the authoritative game state in Redis), generate the dice roll server-side (never trust the client's dice value — Math.floor(Math.random() * 6) + 1 on the server, stored in the game state), compute the target position from the piece ID and dice value, check if the path is clear (no friendly pieces blocking at the target), check if a capture is possible, and only then apply the move and broadcast. A client that sends a move with an incorrect dice value gets a 422 validation error. See the REST API architecture guide for the full error schema.
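A minimal sketch of that server-authoritative sequence, on a simplified number-line board (the state shape and error codes are illustrative, and crypto.randomInt replaces Math.random for an unbiased roll):

```typescript
// Server-authoritative move validation mirroring the sequence above.
// Positions are simplified to a number line for the sketch.
import { randomInt } from 'node:crypto';

export interface GameState {
  currentTurn: string;                              // playerId whose turn it is
  dice: number | null;                              // server-rolled value for this turn
  pieces: Record<string, { owner: string; position: number }>;
}

// Dice are always rolled server-side, never read from the client payload
export const rollDice = (): number => randomInt(1, 7); // 1..6, upper bound exclusive

export const validateMove = (
  state: GameState,
  playerId: string,
  pieceId: string
): { ok: true; target: number } | { ok: false; code: string } => {
  if (state.currentTurn !== playerId) return { ok: false, code: 'NOT_YOUR_TURN' };
  if (state.dice === null) return { ok: false, code: 'DICE_NOT_ROLLED' };
  const piece = state.pieces[pieceId];
  if (!piece || piece.owner !== playerId) return { ok: false, code: 'INVALID_PIECE' };
  const target = piece.position + state.dice;
  // Blocked if one of the mover's own pieces already occupies the target square
  const blocked = Object.values(state.pieces).some(
    (p) => p.owner === playerId && p.position === target
  );
  if (blocked) return { ok: false, code: 'SQUARE_BLOCKED' };
  return { ok: true, target };
};
```

The real engine adds capture detection, safe squares, and home-column rules on top, but the turn check and the server-held dice value are the anti-cheat core.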
What's the fastest way to deploy this stack to production?
For a quick production deployment, use a managed platform that handles Docker containers, Redis, and PostgreSQL out of the box. Set up a multi-stage Dockerfile: a build stage with node:20-alpine compiles TypeScript, and a lean runtime stage runs the compiled JavaScript with a non-root user. Configure environment variables for secrets (never bake them into the image), set NODE_ENV=production, and use a health check endpoint (GET /health) that your orchestrator can query. If you need pre-configured infrastructure, reach out via WhatsApp for managed Ludo backend hosting with everything pre-configured.
Get a Production-Ready Node.js Ludo Backend
Full Node.js backend with Express, Socket.IO, JWT auth, Redis, Pino logging, Jest tests, and Docker deployment — pre-configured and ready to ship.
Chat on WhatsApp