PostgreSQL + Redis Database Schema for Ludo Games
Building a production Ludo game platform requires a dual-database strategy: PostgreSQL as the authoritative, ACID-compliant store for player data, game history, and financial transactions; Redis for low-latency game state caching, real-time leaderboards, session affinity, and pub/sub-based event broadcasting across horizontally-scaled game servers. This guide delivers a complete, production-ready schema covering every entity in a Ludo platform — from players and games to tournaments and leaderboards — with constraints, indexes, partitioning, migrations, and Redis patterns that handle thousands of concurrent matches.
Architecture Overview
The Ludo game database architecture separates concerns across three layers:
- PostgreSQL (persistent layer) — Player accounts, game sessions, move audit logs, ELO ratings, tournament brackets, room metadata, and leaderboard snapshots. All writes flow here. ACID guarantees ensure consistent game outcomes and rating calculations.
- Redis (hot layer) — Live game state cache (JSON), real-time leaderboards (sorted sets), player session affinity, matchmaking queues, pub/sub channels for cross-server event broadcasting, and rate-limiting counters.
- Object Storage (cold layer) — Archived move logs beyond 90 days, compressed game replay blobs (Parquet format), and CDN-served static assets. PostgreSQL references URLs in object storage rather than storing binary blobs.
Every design decision below follows this principle: PostgreSQL is the source of truth, Redis is the performance accelerator, and the two are kept in sync through a write-through cache pattern on game completion.
Players Table — Account and Profile Data
The players table is the root entity of the entire system. Every other
table references it through foreign keys. The schema enforces referential integrity at the database level
and uses appropriate indexes for the most common access patterns: lookup by username or email for login,
range scans on rating for matchmaking, and sort by last_seen for friend lists.
CREATE TABLE players (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
username VARCHAR(32) NOT NULL UNIQUE,
email VARCHAR(255) NOT NULL UNIQUE,
password_hash VARCHAR(255) NOT NULL,
display_name VARCHAR(64) NOT NULL, -- set from username at signup; a DEFAULT cannot reference another column
avatar_url TEXT,
bio VARCHAR(500) DEFAULT '',
-- Statistics counters (denormalized for O(1) reads)
total_games INTEGER NOT NULL DEFAULT 0 CHECK (total_games >= 0),
wins INTEGER NOT NULL DEFAULT 0 CHECK (wins >= 0),
losses INTEGER NOT NULL DEFAULT 0 CHECK (losses >= 0),
draws INTEGER NOT NULL DEFAULT 0 CHECK (draws >= 0),
-- ELO rating (starts at 1000, updated per game)
rating INTEGER NOT NULL DEFAULT 1000 CHECK (rating >= 100 AND rating <= 5000),
peak_rating INTEGER NOT NULL DEFAULT 1000,
rank_tier VARCHAR(16) NOT NULL DEFAULT 'Bronze'
CHECK (rank_tier IN ('Bronze', 'Silver', 'Gold', 'Platinum', 'Diamond', 'Master')),
-- Tokens earned in tournaments / in-app currency
coins BIGINT NOT NULL DEFAULT 1000 CHECK (coins >= 0),
gems INTEGER NOT NULL DEFAULT 0 CHECK (gems >= 0),
-- Account lifecycle
status VARCHAR(16) NOT NULL DEFAULT 'active'
CHECK (status IN ('active', 'suspended', 'banned', 'deleted')),
email_verified BOOLEAN NOT NULL DEFAULT FALSE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
last_seen_at TIMESTAMPTZ,
deleted_at TIMESTAMPTZ
);
-- Performance indexes
CREATE INDEX idx_players_rating ON players (rating DESC);
CREATE INDEX idx_players_username ON players (LOWER(username));
CREATE INDEX idx_players_last_seen ON players (last_seen_at DESC);
CREATE INDEX idx_players_rank_tier ON players (rank_tier, rating DESC);
CREATE INDEX idx_players_status ON players (status) WHERE status != 'active';
-- Auto-update updated_at
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_players_updated_at
BEFORE UPDATE ON players
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
-- Keep wins + losses + draws in sync with total_games
CREATE OR REPLACE FUNCTION validate_games_count()
RETURNS TRIGGER AS $$
BEGIN
IF (NEW.total_games != NEW.wins + NEW.losses + NEW.draws) THEN
RAISE EXCEPTION 'total_games must equal wins + losses + draws';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_players_validate_counts
BEFORE UPDATE ON players
FOR EACH ROW EXECUTE FUNCTION validate_games_count();
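The rank_tier CHECK lists six tiers, but the schema does not say which rating ranges map into them — that is application logic. A minimal Python sketch; the cutoff values below are illustrative assumptions, only the tier names come from the CHECK constraint:

```python
# Map an ELO rating to a rank tier. Cutoffs are assumptions for illustration;
# only the tier names come from the rank_tier CHECK constraint.
TIER_CUTOFFS = [
    (2400, "Master"),
    (2000, "Diamond"),
    (1700, "Platinum"),
    (1400, "Gold"),
    (1200, "Silver"),
    (0,    "Bronze"),
]

def rank_tier(rating: int) -> str:
    for cutoff, tier in TIER_CUTOFFS:
        if rating >= cutoff:
            return tier
    return "Bronze"
```

The application would recompute the tier after every rating write and persist it to players.rank_tier.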
Rooms Table — Private Matchmaking
Private rooms let players create invite-only game sessions with a shareable room code. The room lifecycle
goes from waiting to filling to
ready to in_progress to completed.
A scheduled job (pg_cron) cleans up rooms left in waiting or
filling state for more than 30 minutes.
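Room codes must be short, shareable, and hard to mistype. A sketch of a generator matching the VARCHAR(8) column; the look-alike-free alphabet is an assumed convention, not something the schema mandates:

```python
import secrets

# Alphabet without look-alike characters (0/O, 1/I/L) — an assumption
# for readability; the schema only requires VARCHAR(8) UNIQUE.
ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"

def generate_room_code(length: int = 8) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

On insert, the application retries on a UNIQUE violation until an unused code is found.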
CREATE TABLE rooms (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
room_code VARCHAR(8) NOT NULL UNIQUE,
host_player_id UUID NOT NULL REFERENCES players(id),
-- Room configuration
game_variant VARCHAR(32) NOT NULL DEFAULT 'classic'
CHECK (game_variant IN ('classic', 'quick', 'tournament', 'custom')),
max_players INTEGER NOT NULL DEFAULT 4
CHECK (max_players IN (2, 3, 4)),
turn_time_limit INTEGER NOT NULL DEFAULT 30,
entry_fee BIGINT NOT NULL DEFAULT 0,
prize_pool BIGINT NOT NULL DEFAULT 0,
-- Lifecycle state
status VARCHAR(16) NOT NULL DEFAULT 'waiting'
CHECK (status IN ('waiting', 'filling', 'ready',
'in_progress', 'completed', 'cancelled', 'expired')),
game_id UUID, -- FK added after games exists: ALTER TABLE rooms ADD FOREIGN KEY (game_id) REFERENCES games(id);
current_players INTEGER NOT NULL DEFAULT 1
CHECK (current_players >= 1 AND current_players <= max_players),
password_hash VARCHAR(255),
expires_at TIMESTAMPTZ NOT NULL DEFAULT (NOW() + INTERVAL '30 minutes'),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- room_code lookups use the index created implicitly by the UNIQUE constraint
CREATE INDEX idx_rooms_host ON rooms (host_player_id);
CREATE INDEX idx_rooms_status ON rooms (status) WHERE status IN ('waiting', 'filling');
CREATE INDEX idx_rooms_expires ON rooms (expires_at) WHERE status NOT IN ('completed', 'cancelled');
CREATE INDEX idx_rooms_variant ON rooms (game_variant, status) WHERE status = 'waiting';
Games Table — Session Management
The games table records every game session. The variant column supports
multiple Ludo modes — classic (standard 52-square track + home columns),
quick (shortened board), tournament, and custom — mirroring the game_variant CHECK on rooms; adding the same CHECK here keeps the two columns consistent.
The game_players join table resolves the many-to-many relationship because a player
participates in many games and a game has 2–4 players.
CREATE TABLE games (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
room_id UUID REFERENCES rooms(id),
tournament_id UUID, -- FK added after tournaments exists: ALTER TABLE games ADD FOREIGN KEY (tournament_id) REFERENCES tournaments(id);
variant VARCHAR(32) NOT NULL DEFAULT 'classic',
board_size INTEGER NOT NULL DEFAULT 52,
max_players INTEGER NOT NULL DEFAULT 4
CHECK (max_players IN (2, 3, 4)),
status VARCHAR(16) NOT NULL DEFAULT 'created'
CHECK (status IN ('created', 'waiting', 'in_progress',
'paused', 'finished', 'abandoned', 'draw')),
winner_id UUID REFERENCES players(id),
started_at TIMESTAMPTZ,
finished_at TIMESTAMPTZ,
duration_secs INTEGER GENERATED ALWAYS AS (
EXTRACT(EPOCH FROM (finished_at - started_at))::INTEGER
) STORED,
turn_count INTEGER NOT NULL DEFAULT 0,
current_turn INTEGER DEFAULT 0,
metadata JSONB NOT NULL DEFAULT '{}',
seed BIGINT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_games_room ON games (room_id);
CREATE INDEX idx_games_tournament ON games (tournament_id);
CREATE INDEX idx_games_status ON games (status);
CREATE INDEX idx_games_winner ON games (winner_id) WHERE winner_id IS NOT NULL;
CREATE INDEX idx_games_created ON games (created_at DESC);
CREATE INDEX idx_games_duration ON games (duration_secs DESC) WHERE status = 'finished';
CREATE INDEX idx_games_variant ON games (variant, status, created_at DESC);
-- game_players: join table for players in a game
CREATE TABLE game_players (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
game_id UUID NOT NULL REFERENCES games(id) ON DELETE CASCADE,
player_id UUID NOT NULL REFERENCES players(id) ON DELETE CASCADE,
player_color VARCHAR(16) NOT NULL
CHECK (player_color IN ('red', 'green', 'yellow', 'blue')),
join_order INTEGER NOT NULL,
final_position INTEGER,
score INTEGER NOT NULL DEFAULT 0,
is_winner BOOLEAN NOT NULL DEFAULT FALSE,
tokens_finished INTEGER NOT NULL DEFAULT 0 CHECK (tokens_finished BETWEEN 0 AND 4),
joined_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
disconnected_at TIMESTAMPTZ,
is_afk BOOLEAN NOT NULL DEFAULT FALSE,
UNIQUE (game_id, player_id),
UNIQUE (game_id, player_color)
);
CREATE INDEX idx_game_players_game ON game_players (game_id);
CREATE INDEX idx_game_players_player ON game_players (player_id, joined_at DESC);
CREATE INDEX idx_game_players_color ON game_players (game_id, player_color);
Moves Table — Complete Audit Log
The moves table is the highest-volume table in the system. A single
competitive Ludo game generates 300–600 move records. At 10,000 concurrent games averaging 400 moves each, every full turnover of those games writes roughly
4 million rows — tens of millions per hour at a 10-minute average game length. This table is the primary candidate for range partitioning and
aggressive archival.
Position encoding: -1 = home base (safe), 0–51 = main track,
52–56 = home column (finish stretch), 57 = finished.
A token is marked finished only when an exact roll lands it on 57.
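The encoding can be exercised with a small move-resolution sketch. Positions here are already player-relative (the per-player rotation of the shared track is omitted), and the function names are illustrative:

```python
from typing import Optional

HOME_BASE, FINISHED = -1, 57
HOME_COLUMN_START = 52  # player-relative home column: positions 52-56

def apply_move(position: int, dice: int) -> Optional[int]:
    """Resolve a dice roll against the player-relative position encoding.

    Returns the new position, or None if the move is illegal.
    """
    if position == HOME_BASE:
        return 0 if dice == 6 else None  # a six is required to leave home base
    if position == FINISHED:
        return None                      # finished tokens never move
    new_pos = position + dice
    if new_pos > FINISHED:
        return None                      # must land on 57 exactly
    return new_pos
```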
CREATE TABLE moves (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
game_id UUID NOT NULL REFERENCES games(id) ON DELETE CASCADE,
player_id UUID NOT NULL REFERENCES players(id),
turn_number INTEGER NOT NULL,
phase VARCHAR(16) NOT NULL DEFAULT 'move'
CHECK (phase IN ('roll', 'move', 'capture', 'extra_turn')),
dice_value INTEGER CHECK (dice_value BETWEEN 1 AND 6),
dice_rolls JSONB DEFAULT '[]',
got_six BOOLEAN NOT NULL DEFAULT FALSE,
token_id INTEGER NOT NULL CHECK (token_id BETWEEN 0 AND 3),
from_position INTEGER NOT NULL,
to_position INTEGER NOT NULL,
move_type VARCHAR(24) NOT NULL
CHECK (move_type IN ('home_run', 'move', 'capture',
'safe_move', 'star_jump', 'moon_jump',
'finished', 'skipped', 'no_valid_move')),
captured_token_id INTEGER CHECK (captured_token_id BETWEEN 0 AND 3),
captured_player_id UUID REFERENCES players(id),
earned_extra_turn BOOLEAN NOT NULL DEFAULT FALSE,
game_state_snapshot JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Moves table will be partitioned — see Partitioning Strategy section below
Tournaments Table — Bracket Management
Tournaments require a dedicated schema for bracket management, match scheduling, and prize distribution.
The tournaments table stores tournament metadata, while
tournament_participants tracks enrollment and final rankings.
CREATE TABLE tournaments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name VARCHAR(128) NOT NULL,
description TEXT,
format VARCHAR(32) NOT NULL
CHECK (format IN ('single_elimination', 'double_elimination',
'round_robin', 'swiss_system')),
max_participants INTEGER NOT NULL
CHECK (max_participants IN (4, 8, 16, 32, 64, 128)),
entry_fee BIGINT NOT NULL DEFAULT 0,
prize_pool BIGINT NOT NULL DEFAULT 0,
status VARCHAR(16) NOT NULL DEFAULT 'draft'
CHECK (status IN ('draft', 'registration_open', 'registration_closed',
'in_progress', 'completed', 'cancelled')),
starts_at TIMESTAMPTZ NOT NULL,
ends_at TIMESTAMPTZ,
check_in_opens_at TIMESTAMPTZ, -- set by the application to starts_at - 15 minutes; a DEFAULT cannot reference another column
prize_distribution JSONB NOT NULL DEFAULT '[]',
rules JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_tournaments_status ON tournaments (status, starts_at ASC);
CREATE INDEX idx_tournaments_format ON tournaments (format, status);
CREATE TABLE tournament_participants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tournament_id UUID NOT NULL REFERENCES tournaments(id) ON DELETE CASCADE,
player_id UUID NOT NULL REFERENCES players(id) ON DELETE CASCADE,
seed_number INTEGER,
bracket_position INTEGER,
checked_in BOOLEAN NOT NULL DEFAULT FALSE,
checked_in_at TIMESTAMPTZ,
wins INTEGER NOT NULL DEFAULT 0,
losses INTEGER NOT NULL DEFAULT 0,
final_rank INTEGER,
UNIQUE (tournament_id, player_id)
);
CREATE INDEX idx_tp_tournament ON tournament_participants (tournament_id);
CREATE INDEX idx_tp_player ON tournament_participants (player_id);
CREATE INDEX idx_tp_rank ON tournament_participants (tournament_id, final_rank)
WHERE final_rank IS NOT NULL;
ELO Rating System — Rating History
The player_ratings table maintains a complete rating history per player,
enabling audit trails, anti-cheat analysis (rating volatility patterns), and historical leaderboard
reconstruction. The ELO formula with K-factor adjustment ensures fair matchmaking over time.
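The expected_score, actual_score, and rating_change columns follow the standard ELO formula. A sketch of the arithmetic the application runs before inserting a player_ratings row (the K-factor default of 32 matches the query example later in this guide):

```python
def expected_score(own: int, opponent: int) -> float:
    """Standard ELO expectation: E = 1 / (1 + 10**((R_opp - R_own) / 400))."""
    return 1.0 / (1.0 + 10 ** ((opponent - own) / 400.0))

def rating_change(own: int, avg_opponent: int, actual: float, k: int = 32) -> int:
    """rating_change column: K * (actual_score - expected_score), rounded."""
    return round(k * (actual - expected_score(own, avg_opponent)))
```

For a win (actual = 1.0) against an equally rated opponent, the expectation is 0.5 and the change is +K/2.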
CREATE TABLE player_ratings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
player_id UUID NOT NULL REFERENCES players(id) ON DELETE CASCADE,
game_id UUID REFERENCES games(id) ON DELETE SET NULL,
tournament_id UUID REFERENCES tournaments(id) ON DELETE SET NULL,
variant VARCHAR(32) NOT NULL DEFAULT 'classic',
rating_before INTEGER NOT NULL,
rating_after INTEGER NOT NULL,
rating_change INTEGER NOT NULL,
k_factor INTEGER NOT NULL,
expected_score NUMERIC(4,3) NOT NULL,
actual_score NUMERIC(4,3) NOT NULL,
match_result VARCHAR(8) NOT NULL
CHECK (match_result IN ('win', 'loss', 'draw')),
opponent_count INTEGER NOT NULL,
avg_opponent_rating INTEGER NOT NULL,
volatility NUMERIC(6,4),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_ratings_player ON player_ratings (player_id, created_at DESC);
CREATE INDEX idx_ratings_variant ON player_ratings (variant, rating_after DESC);
CREATE INDEX idx_ratings_game ON player_ratings (game_id) WHERE game_id IS NOT NULL;
CREATE INDEX idx_ratings_tournament ON player_ratings (tournament_id, rating_after DESC)
WHERE tournament_id IS NOT NULL;
-- Materialized view: current ratings snapshot (refreshed every 5 minutes)
CREATE MATERIALIZED VIEW mv_current_ratings AS
SELECT DISTINCT ON (player_id, variant)
player_id,
rating_after AS current_rating,
variant,
created_at
FROM player_ratings
ORDER BY player_id, variant, created_at DESC;
CREATE UNIQUE INDEX idx_mv_ratings_player ON mv_current_ratings (player_id, variant);
CREATE INDEX idx_mv_ratings_rating ON mv_current_ratings (current_rating DESC);
Redis Patterns for Ludo Game State
Redis serves three critical roles in the Ludo game stack: state cache for sub-10ms game state reads, pub/sub for broadcasting moves across server instances, and sorted sets for real-time leaderboards. Every Redis pattern below complements the PostgreSQL schema — PostgreSQL writes, Redis reads (with async sync).
Live Game State Cache
Game state is serialized as JSON and stored in Redis with a TTL. Every move write goes to PostgreSQL first (ACID guarantee), then asynchronously updates the Redis cache. On game end, the final state is persisted to PostgreSQL and the Redis key is deleted.
const Redis = require('ioredis');
const redis = new Redis({ host: process.env.REDIS_HOST, port: 6379, lazyConnect: true });
// ── Live game state ──────────────────────────────────────
// Key: ludo:game:{gameId}
// Type: String (JSON) | TTL: 3600s (refreshed on every move)
async function setGameState(gameId, state, ttl = 3600) {
await redis.setex(`ludo:game:${gameId}`, ttl, JSON.stringify(state));
}
async function getGameState(gameId) {
const raw = await redis.get(`ludo:game:${gameId}`);
return raw ? JSON.parse(raw) : null;
}
async function updateTokenPosition(gameId, playerColor, tokenId, newPosition) {
const key = `ludo:game:${gameId}`;
const [state, ttl] = await Promise.all([redis.get(key), redis.ttl(key)]);
if (!state) return null;
const game = JSON.parse(state);
const player = game.players.find(p => p.color === playerColor);
if (!player) return null;
player.tokens[tokenId].position = newPosition;
await redis.setex(key, Math.max(ttl, 3600), JSON.stringify(game));
return game;
}
// ── Player session affinity ───────────────────────────────
// Key: ludo:player:{playerId}:server | Type: String
async function bindPlayerToServer(playerId, serverId, ttlSeconds = 300) {
await redis.setex(`ludo:player:${playerId}:server`, ttlSeconds, serverId);
await redis.sadd(`ludo:server:${serverId}:players`, playerId); // ioredis uses lower-case command names
}
async function getPlayerServer(playerId) {
return redis.get(`ludo:player:${playerId}:server`);
}
// ── Real-time leaderboards ────────────────────────────────
// Key: ludo:leaderboard:{variant} | Type: Sorted Set
async function updateLeaderboardScore(variant, playerId, newRating) {
await redis.zadd(`ludo:leaderboard:${variant}`, newRating, playerId);
}
async function getTopLeaderboard(variant, count = 10) {
return redis.zrevrange(`ludo:leaderboard:${variant}`, 0, count - 1, 'WITHSCORES');
}
async function getPlayerRank(variant, playerId) {
const rank = await redis.zrevrank(`ludo:leaderboard:${variant}`, playerId);
return rank !== null ? rank + 1 : null;
}
Redis Pub/Sub for Cross-Server Event Broadcasting
When a game server processes a move, it must notify all other server instances whose players are in that
game. Redis pub/sub bridges this gap: the handling server publishes to ludo:game:{id}:events,
and all other servers subscribed to that channel relay the event to their local WebSocket clients.
// Publisher (on the server that handles the move)
const publisher = new Redis({ host: process.env.REDIS_HOST, port: 6379 });
async function broadcastMove(gameId, moveEvent) {
const channel = `ludo:game:${gameId}:events`;
await publisher.publish(channel, JSON.stringify({ type: 'MOVE', payload: moveEvent, ts: Date.now() }));
}
async function broadcastGameEnd(gameId, result) {
const channel = `ludo:game:${gameId}:events`;
await publisher.publish(channel, JSON.stringify({ type: 'GAME_END', payload: result, ts: Date.now() }));
await redis.del(`ludo:game:${gameId}`);
}
// Subscriber (on every game server instance)
const subscriber = new Redis({ host: process.env.REDIS_HOST, port: 6379 });
function subscribeToGame(gameId, wsHandler) {
const channel = `ludo:game:${gameId}:events`;
subscriber.subscribe(channel);
// NOTE: this registers one 'message' listener per game; in production,
// register a single shared listener and dispatch by channel to avoid leaks.
subscriber.on('message', (ch, msg) => {
if (ch !== channel) return;
wsHandler(JSON.parse(msg)); // Relay to WebSocket clients on this server
});
}
Matchmaking Queue with Redis
Before a game starts, players enter a matchmaking pool. Redis lists implement a FIFO queue where players waiting for a match are pushed in, and opponents are popped out in order. Variant-based key partitioning keeps queues isolated.
// Key: ludo:mm:queue:{variant} | Type: List
async function enqueueMatchmaking(playerId, variant = 'classic', rating = 1000) {
const key = `ludo:mm:queue:${variant}`;
const payload = JSON.stringify({ playerId, rating, queuedAt: Date.now() });
await redis.lpush(key, payload);
await redis.expire(key, 3600);
}
async function findMatch(variant, playerCount = 4) {
const key = `ludo:mm:queue:${variant}`;
const players = [];
for (let i = 0; i < playerCount; i++) {
const raw = await redis.rpop(key);
if (!raw) break;
players.push(JSON.parse(raw));
}
return players.length === playerCount ? players : null;
}
// Rate-limit matchmaking joins: max 1 join per 10 seconds per player
async function canJoinMatchmaking(playerId) {
const key = `ludo:mm:cooldown:${playerId}`;
return (await redis.set(key, '1', 'EX', 10, 'NX')) === 'OK';
}
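The FIFO queue above ignores rating entirely. A common refinement is to match only players within a rating band and widen the band the longer a player waits. A sketch of that pairing logic (the band sizes and widening rate are assumptions, and the function name is illustrative):

```python
import time

def select_match(queue, player_count=4, base_band=100, widen_per_sec=10):
    """Pick player_count queued entries whose ratings fit one band.

    Entries are dicts like {"playerId", "rating", "queuedAt"} (epoch seconds).
    The band widens with the anchor player's wait time, so long-waiting
    players eventually match across larger rating gaps.
    """
    if len(queue) < player_count:
        return None
    now = time.time()
    ordered = sorted(queue, key=lambda e: e["queuedAt"])  # oldest waits first
    for anchor in ordered:
        band = base_band + widen_per_sec * int(now - anchor["queuedAt"])
        group = [e for e in ordered if abs(e["rating"] - anchor["rating"]) <= band]
        if len(group) >= player_count:
            return group[:player_count]
    return None
```

With Redis, this would run over entries popped from the list and requeue the unmatched ones.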
Table Partitioning Strategy
The moves table will grow beyond hundreds of millions of rows in a
production Ludo platform. PostgreSQL declarative partitioning splits it into monthly partitions, enabling
partition pruning for time-range queries, cheap archival of old partitions, and parallel query execution
across partitions.
-- Partitioned definition of moves — same columns as the plain DDL above
-- (use this version instead; the partition key must be part of the primary key)
CREATE TABLE moves (
id UUID NOT NULL DEFAULT gen_random_uuid(),
game_id UUID NOT NULL REFERENCES games(id) ON DELETE CASCADE,
player_id UUID NOT NULL REFERENCES players(id),
turn_number INTEGER NOT NULL, phase VARCHAR(16) NOT NULL DEFAULT 'move',
dice_value INTEGER CHECK (dice_value BETWEEN 1 AND 6), dice_rolls JSONB DEFAULT '[]',
got_six BOOLEAN NOT NULL DEFAULT FALSE, token_id INTEGER NOT NULL CHECK (token_id BETWEEN 0 AND 3),
from_position INTEGER NOT NULL, to_position INTEGER NOT NULL,
move_type VARCHAR(24) NOT NULL,
captured_token_id INTEGER, captured_player_id UUID,
earned_extra_turn BOOLEAN NOT NULL DEFAULT FALSE,
game_state_snapshot JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Create monthly partitions for 2026
CREATE TABLE moves_2026_01 PARTITION OF moves FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');
CREATE TABLE moves_2026_02 PARTITION OF moves FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
CREATE TABLE moves_2026_03 PARTITION OF moves FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');
CREATE TABLE moves_2026_04 PARTITION OF moves FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
CREATE TABLE moves_2026_05 PARTITION OF moves FOR VALUES FROM ('2026-05-01') TO ('2026-06-01');
CREATE TABLE moves_2026_06 PARTITION OF moves FOR VALUES FROM ('2026-06-01') TO ('2026-07-01');
CREATE TABLE moves_2026_07 PARTITION OF moves FOR VALUES FROM ('2026-07-01') TO ('2026-08-01');
CREATE TABLE moves_2026_08 PARTITION OF moves FOR VALUES FROM ('2026-08-01') TO ('2026-09-01');
CREATE TABLE moves_2026_09 PARTITION OF moves FOR VALUES FROM ('2026-09-01') TO ('2026-10-01');
CREATE TABLE moves_2026_10 PARTITION OF moves FOR VALUES FROM ('2026-10-01') TO ('2026-11-01');
CREATE TABLE moves_2026_11 PARTITION OF moves FOR VALUES FROM ('2026-11-01') TO ('2026-12-01');
CREATE TABLE moves_2026_12 PARTITION OF moves FOR VALUES FROM ('2026-12-01') TO ('2027-01-01');
-- Indexes on the partitioned table (inherited by all partitions)
CREATE INDEX idx_moves_game ON moves (game_id);
CREATE INDEX idx_moves_player ON moves (player_id);
CREATE INDEX idx_moves_game_turn ON moves (game_id, turn_number);
CREATE INDEX idx_moves_created ON moves (created_at DESC);
-- pg_cron job to auto-create next month's partition
-- SELECT cron.schedule('create_monthly_partition', '0 0 25 * *', $$
-- SELECT create_next_month_partition('moves');
-- $$);
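The create_next_month_partition helper referenced in the cron comment is not defined in this guide. A Python sketch that generates equivalent DDL strings (the helper name, and doing this in Python rather than PL/pgSQL, are assumptions):

```python
from datetime import date

def next_month_partition_ddl(table: str, today: date) -> str:
    """Generate the CREATE TABLE ... PARTITION OF statement for the month
    after `today`, matching the moves_YYYY_MM naming used above."""
    year, month = today.year, today.month
    start_y, start_m = (year + 1, 1) if month == 12 else (year, month + 1)
    end_y, end_m = (start_y + 1, 1) if start_m == 12 else (start_y, start_m + 1)
    return (
        f"CREATE TABLE {table}_{start_y}_{start_m:02d} PARTITION OF {table} "
        f"FOR VALUES FROM ('{start_y}-{start_m:02d}-01') "
        f"TO ('{end_y}-{end_m:02d}-01');"
    )
```

A scheduled job would execute the generated statement on the 25th of each month, mirroring the cron schedule above.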
Connection Pooling with PgBouncer
Horizontal scaling multiplies game server instances, each maintaining its own database connection pool. Without PgBouncer, 20 server instances times 20 connections each equals 400 connections — exhausting PostgreSQL's default limit of 100. PgBouncer sits in front of PostgreSQL, multiplexing hundreds of application connections onto a smaller pool of real database connections using transaction-mode pooling.
# pgbouncer.ini
[databases]
ludo_prod = host=postgres-primary.internal port=5432 dbname=ludo_prod
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
# Connection pool settings
pool_mode = transaction # Connections borrowed per transaction
max_client_conn = 5000 # Max client-side connections
default_pool_size = 50 # Real server connections per user/database pair
min_pool_size = 10
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
# Query performance
server_reset_query = DISCARD ALL
server_check_delay = 30
server_lifetime = 3600
# Logging
log_connections = 1
log_disconnections = 1
log_pooler_errors = 1
Alembic Migration Setup
Alembic manages schema migrations in a Python/PostgreSQL project. The migration below creates all the tables defined above in a single revision. For incremental schema changes, create separate migrations per table modification.
# alembic/versions/001_initial_ludo_schema.py
"""Initial Ludo game schema: players, rooms, games, moves, tournaments, ratings."""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import UUID, JSONB

revision = '001_initial_ludo_schema'
down_revision = None
branch_labels = None
depends_on = None

def upgrade() -> None:
    # players — note the postgresql dialect exposes no TIMESTAMPTZ type;
    # use the generic sa.TIMESTAMP(timezone=True) instead
    op.create_table('players',
        sa.Column('id', UUID(), primary_key=True,
                  server_default=sa.text('gen_random_uuid()')),
        sa.Column('username', sa.String(32), nullable=False, unique=True),
        sa.Column('email', sa.String(255), nullable=False, unique=True),
        sa.Column('password_hash', sa.String(255), nullable=False),
        sa.Column('display_name', sa.String(64), nullable=False),
        sa.Column('avatar_url', sa.Text(), nullable=True),
        sa.Column('bio', sa.String(500), nullable=True),
        sa.Column('total_games', sa.Integer(), nullable=False, server_default='0'),
        sa.Column('wins', sa.Integer(), nullable=False, server_default='0'),
        sa.Column('losses', sa.Integer(), nullable=False, server_default='0'),
        sa.Column('draws', sa.Integer(), nullable=False, server_default='0'),
        sa.Column('rating', sa.Integer(), nullable=False, server_default='1000'),
        sa.Column('peak_rating', sa.Integer(), nullable=False, server_default='1000'),
        sa.Column('rank_tier', sa.String(16), nullable=False, server_default='Bronze'),
        sa.Column('coins', sa.BigInteger(), nullable=False, server_default='1000'),
        sa.Column('gems', sa.Integer(), nullable=False, server_default='0'),
        sa.Column('status', sa.String(16), nullable=False, server_default='active'),
        sa.Column('email_verified', sa.Boolean(), nullable=False, server_default='false'),
        sa.Column('created_at', sa.TIMESTAMP(timezone=True), nullable=False,
                  server_default=sa.text('NOW()')),
        sa.Column('updated_at', sa.TIMESTAMP(timezone=True), nullable=False,
                  server_default=sa.text('NOW()')),
        sa.Column('last_seen_at', sa.TIMESTAMP(timezone=True), nullable=True),
        sa.Column('deleted_at', sa.TIMESTAMP(timezone=True), nullable=True),
    )
    op.create_index('idx_players_rating', 'players', [sa.text('rating DESC')])
    op.create_index('idx_players_username', 'players', [sa.text('LOWER(username)')])
    op.create_index('idx_players_last_seen', 'players', [sa.text('last_seen_at DESC')])

    # rooms, games, game_players, moves, tournaments, player_ratings
    # (see full SQL DDL in the PostgreSQL sections above)

    # Trigger for auto-updating updated_at
    op.execute("""
        CREATE OR REPLACE FUNCTION update_updated_at()
        RETURNS TRIGGER AS $$
        BEGIN NEW.updated_at = NOW(); RETURN NEW; END;
        $$ LANGUAGE plpgsql;
    """)
    op.execute("CREATE TRIGGER trg_players_updated_at BEFORE UPDATE ON players "
               "FOR EACH ROW EXECUTE FUNCTION update_updated_at();")

def downgrade() -> None:
    op.drop_table('player_ratings')
    op.drop_table('tournament_participants')
    op.drop_table('tournaments')
    op.drop_table('moves')
    op.drop_table('game_players')
    op.drop_table('games')
    op.drop_table('rooms')
    op.drop_table('players')
Essential Query Examples
These production-grade SQL queries cover the most common operations in a Ludo game backend. Each is designed to be fast (index-friendly) and returns structured data suitable for API responses.
Player Game History with Stats
-- Get player's last 20 games with opponent info and outcome
SELECT
g.id AS game_id,
g.variant,
g.status,
g.created_at,
g.duration_secs,
gp.final_position,
gp.is_winner,
gp.score AS player_score,
gp.tokens_finished,
json_agg(
json_build_object(
'username', p2.username,
'color', gp2.player_color,
'position', gp2.final_position,
'is_winner', gp2.is_winner
)
) FILTER (WHERE gp2.player_id != $1) AS opponents
FROM games g
JOIN game_players gp ON gp.game_id = g.id AND gp.player_id = $1
JOIN game_players gp2 ON gp2.game_id = g.id
JOIN players p2 ON p2.id = gp2.player_id
WHERE g.status = 'finished'
GROUP BY g.id, g.variant, g.status, g.created_at, g.duration_secs,
gp.final_position, gp.is_winner, gp.score, gp.tokens_finished
ORDER BY g.created_at DESC
LIMIT 20;
Global Leaderboard Query
-- Top 100 global leaderboard (uses materialized view for speed)
SELECT
ROW_NUMBER() OVER (ORDER BY mv.current_rating DESC) AS rank,
p.id, p.username, p.display_name, p.avatar_url, p.rank_tier,
mv.current_rating, p.total_games, p.wins,
ROUND(p.wins::NUMERIC / NULLIF(p.total_games, 0) * 100, 1) AS win_rate
FROM mv_current_ratings mv
JOIN players p ON p.id = mv.player_id
WHERE p.status = 'active' AND p.total_games >= 10
ORDER BY mv.current_rating DESC
LIMIT 100;
-- Variant-specific leaderboard
SELECT *
FROM mv_current_ratings
WHERE variant = 'classic'
ORDER BY current_rating DESC
LIMIT 50;
Active Waiting Rooms Query
-- Find public waiting rooms by variant, sorted by earliest expiration
SELECT
r.id, r.room_code, r.game_variant, r.max_players, r.current_players,
r.turn_time_limit, r.entry_fee, r.expires_at,
(r.max_players - r.current_players) AS slots_remaining,
p.username AS host_username
FROM rooms r
JOIN players p ON p.id = r.host_player_id
WHERE r.status IN ('waiting', 'filling')
AND r.password_hash IS NULL
AND r.expires_at > NOW()
AND r.current_players < r.max_players
ORDER BY r.expires_at ASC
LIMIT 20;
ELO Rating Update Query
-- Update ELO ratings atomically after a game ends
WITH game_result AS (
SELECT gp.game_id, gp.player_id, gp.is_winner,
COALESCE(pr.rating_after, 1000) AS current_rating
FROM game_players gp
LEFT JOIN LATERAL (
SELECT rating_after FROM player_ratings
WHERE player_id = gp.player_id
ORDER BY created_at DESC LIMIT 1
) pr ON TRUE
WHERE gp.game_id = $1
),
opponent_ratings AS (
SELECT gr.player_id, gr.current_rating,
ROUND(AVG(gr2.current_rating))::INTEGER AS avg_opponent_rating,
COUNT(gr2.player_id)::INTEGER AS opponent_count,
-- Standard ELO expectation: E = 1 / (1 + 10^((R_opp - R_own) / 400))
1 / (1 + POWER(10, (AVG(gr2.current_rating) - gr.current_rating) / 400.0)) AS expected
FROM game_result gr
JOIN game_result gr2 ON gr2.game_id = gr.game_id
AND gr2.player_id != gr.player_id
GROUP BY gr.player_id, gr.current_rating
)
INSERT INTO player_ratings
(player_id, game_id, variant, rating_before, rating_after, rating_change,
k_factor, expected_score, actual_score, match_result, opponent_count, avg_opponent_rating)
SELECT
opp.player_id, $1, 'classic',
opp.current_rating,
-- New rating = old + K * (actual - expected); $2 is the K-factor
opp.current_rating
+ ROUND($2 * ((CASE WHEN gp.is_winner THEN 1.0 ELSE 0.0 END) - opp.expected))::INTEGER,
ROUND($2 * ((CASE WHEN gp.is_winner THEN 1.0 ELSE 0.0 END) - opp.expected))::INTEGER,
$2,
ROUND(opp.expected::NUMERIC, 3),
CASE WHEN gp.is_winner THEN 1.0 ELSE 0.0 END,
CASE WHEN gp.is_winner THEN 'win' ELSE 'loss' END,
opp.opponent_count, opp.avg_opponent_rating
FROM opponent_ratings opp
JOIN game_players gp ON gp.game_id = $1 AND gp.player_id = opp.player_id
ON CONFLICT DO NOTHING;
Frequently Asked Questions
Why denormalize player statistics instead of computing them on the fly?
Denormalized counters (total_games, wins,
losses) on the players table enable O(1) profile reads
without expensive COUNT(*) queries on millions of game_players rows. The
rating history table (player_ratings) is append-only and powers analytics:
rating volatility analysis for anti-cheat detection, historical rank reconstruction, and
skill-progression graphs. If you computed stats on the fly, fetching a player profile would require
joining game_players, aggregating win/loss totals, and computing ELO — three sequential scans on
large tables that would produce 500ms+ response times under load.
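Keeping the denormalized counters consistent with the validate_games_count trigger's invariant (total_games = wins + losses + draws) is the application's job on game completion. A minimal sketch of that update step, with illustrative function names:

```python
def apply_result(stats: dict, result: str) -> dict:
    """Update denormalized counters so total_games == wins + losses + draws,
    matching the invariant enforced by the validate_games_count trigger."""
    if result not in ("win", "loss", "draw"):
        raise ValueError(f"unknown result: {result}")
    updated = dict(stats)
    key = {"win": "wins", "loss": "losses", "draw": "draws"}[result]
    updated[key] += 1
    updated["total_games"] += 1
    return updated
```

In SQL this is a single UPDATE incrementing both columns in one statement, so the trigger never sees an inconsistent row.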
How does the schema prevent rating manipulation?
The schema enforces integrity at multiple layers. First, player_ratings
is INSERT-only — there are no UPDATE or DELETE operations on rating records. Each row stores
rating_before and rating_after, making it
impossible to retroactively alter ratings without creating a visible anomaly (consecutive rows with
non-matching before/after values). Second, the expected_score and
actual_score columns enable post-hoc verification: if a player's actual
win rate deviates significantly from their expected score over 50+ games, the anti-cheat system flags
them. Third, the Glicko-2 volatility column tracks rating reliability —
sudden rating swings beyond 2 standard deviations trigger a manual review queue.
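The expected-versus-actual check described above can be sketched as a pure function. The 50-game minimum and the 2-standard-deviation threshold come from the text; modeling each game outcome as an independent Bernoulli trial with variance e·(1−e) is an assumption:

```python
import math

def flag_suspicious(expected_scores, actual_scores, min_games=50, z_threshold=2.0):
    """Flag a player whose cumulative actual score deviates from the
    cumulative expected score by more than z_threshold standard deviations."""
    n = len(actual_scores)
    if n < min_games:
        return False
    exp_total = sum(expected_scores)
    act_total = sum(actual_scores)
    # Variance of a sum of independent Bernoulli-like outcomes: sum(e * (1 - e))
    variance = sum(e * (1 - e) for e in expected_scores)
    if variance == 0:
        return False
    z = abs(act_total - exp_total) / math.sqrt(variance)
    return z > z_threshold
```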
When should data live in JSONB versus normalized columns?
JSONB is appropriate for genuinely unstructured or schema-flexible data — game rules variants,
client-side metadata, and game_state_snapshot in the moves table
(where the exact board layout varies by variant). It is not appropriate for entities that
participate in foreign-key relationships, require indexed lookups, or need CHECK constraints. For example, storing player colors as JSONB within a
game document would prevent you from enforcing UNIQUE (game_id,
player_color) — the constraint that stops two players from claiming the same color. Keep
core domain entities (players, games, game_players) in proper normalized tables, and use JSONB for
configuration blobs, metadata dictionaries, and state snapshots that don't need querying.
How are old moves archived, and how are replays reconstructed?
The moves table's game_state_snapshot
column already contains a full board state at every move, making replay reconstruction possible
without replaying all previous moves. The archival workflow: monthly partitions older than 90 days
are detached from the main table (ALTER TABLE moves DETACH PARTITION),
exported to Parquet files using COPY ... TO PROGRAM with a compression
pipeline, and the Parquet files are uploaded to S3. The partition is then dropped from PostgreSQL
while keeping a metadata entry in an archived_moves_manifest table that
stores the S3 URI and date range. On-demand replay retrieval checks this manifest first — if the
data is archived, it's fetched from S3 and reconstructed from the snapshots column.
How many concurrent games can this architecture support?
With proper horizontal scaling — PgBouncer handling connection multiplexing, Redis caching hot game state, and read replicas for leaderboard queries — the limiting factor becomes CPU and memory on the application servers rather than the database. PostgreSQL with a well-tuned config (shared_buffers = 25% of RAM, work_mem sized to the workload, effective_cache_size = 75% of RAM) and NVMe SSD storage handles 50,000+ TPS for simple transactional writes. At an average of 400 moves per completed game, that's 125 game completions per second — roughly 75,000 concurrent in-progress games at a 10-minute average game duration. With table partitioning, the moves table can scale to billions of rows without query degradation. For the leaderboard specifically, Redis sorted sets impose essentially no practical ceiling — 10 million players with O(log N) ZADD and ZRANGE operations is well within Redis 7's capabilities.
How are disconnects and AFK players handled?
The game_players.disconnected_at and is_afk columns track connection state per player per game. On WebSocket
disconnect, a background job (or Redis key expiration) sets disconnected_at
and starts a grace timer (configurable, default 60 seconds). If the player reconnects within the
grace period, disconnected_at is cleared and their turn is held. If
the timer expires, the server marks is_afk = TRUE and either assigns
an AI bot (if the game mode supports it) or auto-forfeits after two consecutive AFK marks. The
player_ratings entry for a forfeited game records it as a loss for the
AFK player, preventing AFK farming.
How do PostgreSQL and Redis stay in sync?
The pattern is write-through on game completion and write-behind on individual moves. Individual
moves go directly to PostgreSQL (the authoritative record) and asynchronously update Redis via a
background worker or Lua script that batches Redis writes every 100ms or 10 moves, whichever comes
first. This reduces Redis write pressure during peak move velocity while keeping latency at
sub-millisecond for reads. On game completion, the server fetches the final state from PostgreSQL,
writes it to Redis with a short TTL (300s), broadcasts via pub/sub to notify other servers, and
then persists to the game_state_snapshot column in the moves table. If
a Redis key is missing when a player requests game state, the server falls back to PostgreSQL
(reconstructing from the last move's snapshot) — the cache miss is transparent to the client.
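The "every 100ms or 10 moves, whichever comes first" flush rule can be sketched as a small batcher. The class and method names are illustrative, and the flush callback stands in for the actual Redis write:

```python
import time

class MoveBatcher:
    """Buffer move updates and flush when either threshold is hit:
    max_moves buffered, or max_age_ms since the first buffered move."""

    def __init__(self, flush_fn, max_moves=10, max_age_ms=100):
        self.flush_fn = flush_fn
        self.max_moves = max_moves
        self.max_age_ms = max_age_ms
        self.buffer = []
        self.first_at = None

    def add(self, move, now_ms=None):
        now_ms = time.monotonic() * 1000 if now_ms is None else now_ms
        if self.first_at is None:
            self.first_at = now_ms
        self.buffer.append(move)
        if (len(self.buffer) >= self.max_moves
                or now_ms - self.first_at >= self.max_age_ms):
            self.flush_fn(self.buffer)  # e.g. one pipelined Redis write
            self.buffer, self.first_at = [], None
```

A production version would also flush on a timer so a lone buffered move is not stranded until the next add() call.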
Need Help Designing Your Ludo Game Database?
Get expert guidance on schema optimization, migration planning, Redis caching strategy, and scaling your Ludo game platform to thousands of concurrent players.
Chat on WhatsApp