Database Design Patterns for Modern Apps
Essential patterns for scalable database architecture, from normalization strategies to distributed system considerations.
Effective database design is the foundation of any scalable application. Over the years, I've encountered various patterns and anti-patterns that can make or break your application's performance and maintainability. Let's explore the essential patterns every developer should know.
The Fundamentals: Normalization vs Denormalization
When to Normalize
Normalization is your friend when:
- Data consistency is critical (financial systems, user accounts)
- Storage space is limited (though less relevant today)
- Write operations are frequent and you want to avoid update anomalies
```sql
-- Normalized approach: User and Profile tables
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  email VARCHAR(255) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE user_profiles (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id),
  first_name VARCHAR(100),
  last_name VARCHAR(100),
  bio TEXT,
  avatar_url VARCHAR(500)
);
```
When to Denormalize
Denormalization makes sense when:
- Read performance is critical (analytics, reporting)
- Complex joins are becoming bottlenecks
- You're dealing with immutable data (logs, events)
```sql
-- Denormalized approach: Everything in one table
CREATE TABLE user_activity_log (
  id SERIAL PRIMARY KEY,
  user_id INTEGER,
  user_email VARCHAR(255),
  user_name VARCHAR(200),
  action VARCHAR(100),
  timestamp TIMESTAMP,
  metadata JSONB
);
```
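To see why this pays off for reporting, here's a minimal sketch of a query against that table (assuming node-postgres and the `user_activity_log` table above; the `dailyActivityReport` helper is illustrative, not part of any library):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* environment variables

// With the denormalized log, a typical report needs no joins:
// the user's email and name are already on every row.
async function dailyActivityReport(since: Date) {
  const result = await pool.query(
    `SELECT user_email, user_name, action, COUNT(*) AS occurrences
     FROM user_activity_log
     WHERE timestamp >= $1
     GROUP BY user_email, user_name, action
     ORDER BY occurrences DESC`,
    [since]
  );
  return result.rows;
}
```

The trade-off is duplication: if a user changes their email, the old rows keep the old value, which is usually acceptable for immutable log data.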
Essential Design Patterns
1. Repository Pattern
The Repository pattern abstracts your data access logic, making your code more testable and maintainable.
```typescript
interface UserRepository {
  findById(id: string): Promise<User | null>;
  findByEmail(email: string): Promise<User | null>;
  create(userData: CreateUserData): Promise<User>;
  update(id: string, userData: UpdateUserData): Promise<User>;
  delete(id: string): Promise<void>;
}

class PostgresUserRepository implements UserRepository {
  constructor(private db: Database) {}

  async findById(id: string): Promise<User | null> {
    const result = await this.db.query(
      'SELECT * FROM users WHERE id = $1',
      [id]
    );
    return result.rows[0] || null;
  }

  async create(userData: CreateUserData): Promise<User> {
    const result = await this.db.query(
      'INSERT INTO users (email, name) VALUES ($1, $2) RETURNING *',
      [userData.email, userData.name]
    );
    return result.rows[0];
  }

  // ... other methods
}
```
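The testability benefit comes from swapping implementations behind the interface. Here's a minimal sketch of an in-memory test double (assuming the `User`, `CreateUserData`, and `UpdateUserData` types referenced above; the class itself is illustrative):

```typescript
import { randomUUID } from "crypto";

// Test double: same contract as PostgresUserRepository, no database required.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async findByEmail(email: string): Promise<User | null> {
    return [...this.users.values()].find(u => u.email === email) ?? null;
  }

  async create(userData: CreateUserData): Promise<User> {
    const user = { id: randomUUID(), ...userData } as User;
    this.users.set(user.id, user);
    return user;
  }

  async update(id: string, userData: UpdateUserData): Promise<User> {
    const existing = this.users.get(id);
    if (!existing) throw new Error(`User ${id} not found`);
    const updated = { ...existing, ...userData };
    this.users.set(id, updated);
    return updated;
  }

  async delete(id: string): Promise<void> {
    this.users.delete(id);
  }
}
```

Services that depend on `UserRepository` can be unit-tested against the in-memory version and run against `PostgresUserRepository` in production.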
2. Unit of Work Pattern
Manages transactions and ensures data consistency across multiple operations.
```typescript
class UnitOfWork {
  private repositories: Map<string, any> = new Map();
  private isInTransaction = false;

  // Note: in practice this should pin a single pooled connection for the
  // whole transaction rather than issuing BEGIN/COMMIT through a shared pool.
  constructor(private db: Database) {}

  async begin(): Promise<void> {
    if (this.isInTransaction) {
      throw new Error('Transaction already in progress');
    }
    await this.db.query('BEGIN');
    this.isInTransaction = true;
  }

  async commit(): Promise<void> {
    if (!this.isInTransaction) {
      throw new Error('No transaction in progress');
    }
    await this.db.query('COMMIT');
    this.isInTransaction = false;
  }

  async rollback(): Promise<void> {
    if (!this.isInTransaction) {
      throw new Error('No transaction in progress');
    }
    await this.db.query('ROLLBACK');
    this.isInTransaction = false;
  }

  getRepository<T>(type: new (db: Database) => T): T {
    const key = type.name;
    if (!this.repositories.has(key)) {
      this.repositories.set(key, new type(this.db));
    }
    return this.repositories.get(key);
  }
}

// Usage
async function transferMoney(fromUserId: string, toUserId: string, amount: number) {
  const uow = new UnitOfWork(database);

  try {
    await uow.begin();

    // Pass a concrete class (not an interface) so the unit of work can construct it.
    const accountRepo = uow.getRepository(AccountRepository);

    const fromAccount = await accountRepo.findByUserId(fromUserId);
    const toAccount = await accountRepo.findByUserId(toUserId);

    if (fromAccount.balance < amount) {
      throw new Error('Insufficient funds');
    }

    await accountRepo.updateBalance(fromUserId, fromAccount.balance - amount);
    await accountRepo.updateBalance(toUserId, toAccount.balance + amount);

    await uow.commit();
  } catch (error) {
    await uow.rollback();
    throw error;
  }
}
```
3. CQRS (Command Query Responsibility Segregation)
Separates read and write operations for better scalability and performance.
```typescript
// Command side - optimized for writes
interface CreateUserCommand {
  email: string;
  name: string;
  password: string;
}

class UserCommandHandler {
  constructor(
    private userRepo: UserRepository,
    private eventBus: EventBus
  ) {}

  async handle(command: CreateUserCommand): Promise<void> {
    const hashedPassword = await bcrypt.hash(command.password, 10);

    await this.userRepo.create({
      email: command.email,
      name: command.name,
      password: hashedPassword,
    });

    // Publish event for read model updates
    await this.eventBus.publish(new UserCreatedEvent({
      email: command.email,
      name: command.name,
    }));
  }
}

// Query side - optimized for reads
interface UserListQuery {
  page: number;
  limit: number;
  search?: string;
  sortBy?: string;
}

class UserQueryHandler {
  constructor(private readDb: ReadDatabase) {}

  async handle(query: UserListQuery): Promise<UserListResult> {
    // Use denormalized read model for fast queries.
    // (sortBy is interpolated into the SQL, so validate it against a whitelist first.)
    return await this.readDb.query(`
      SELECT
        u.id,
        u.email,
        u.name,
        u.created_at,
        COUNT(p.id) as post_count,
        AVG(r.rating) as avg_rating
      FROM user_read_model u
      LEFT JOIN posts p ON u.id = p.user_id
      LEFT JOIN ratings r ON u.id = r.user_id
      WHERE ($3::text IS NULL OR u.name ILIKE $3)
      GROUP BY u.id, u.email, u.name, u.created_at
      ORDER BY ${query.sortBy || 'u.created_at'} DESC
      LIMIT $1 OFFSET $2
    `, [query.limit, (query.page - 1) * query.limit, query.search]);
  }
}
```
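The read model stays in sync by consuming the events the command side publishes. Here's a minimal sketch of such a projection (assuming `UserCreatedEvent` exposes `email` and `name` directly; the `user_read_model` table, its unique email constraint, and the handler wiring are illustrative):

```typescript
// Projection: keeps the denormalized read model in step with the write side.
class UserReadModelProjector {
  constructor(private readDb: ReadDatabase) {}

  async onUserCreated(event: UserCreatedEvent): Promise<void> {
    // Upsert so the projection stays idempotent if the event is redelivered.
    await this.readDb.query(
      `INSERT INTO user_read_model (email, name, created_at)
       VALUES ($1, $2, NOW())
       ON CONFLICT (email) DO UPDATE SET name = EXCLUDED.name`,
      [event.email, event.name]
    );
  }
}
```

In a production setup the subscriber would also track event offsets or versions so the read model can be rebuilt by replaying the stream from scratch.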
Advanced Patterns
Event Sourcing
Store all changes as a sequence of events rather than just the current state.
```typescript
interface Event {
  id: string;
  aggregateId: string;
  type: string;
  data: any;
  timestamp: Date;
  version: number;
}

class EventStore {
  // A pg-style pool; connect() checks out a single client for the transaction.
  constructor(private db: Pool) {}

  async saveEvents(aggregateId: string, events: Event[], expectedVersion: number): Promise<void> {
    const client = await this.db.connect();

    try {
      await client.query('BEGIN');

      // Check current version (optimistic concurrency control)
      const versionResult = await client.query(
        'SELECT COALESCE(MAX(version), 0) as version FROM events WHERE aggregate_id = $1',
        [aggregateId]
      );

      const currentVersion = versionResult.rows[0].version;
      if (currentVersion !== expectedVersion) {
        throw new Error('Concurrency conflict');
      }

      // Insert events
      for (const event of events) {
        await client.query(
          'INSERT INTO events (id, aggregate_id, type, data, timestamp, version) VALUES ($1, $2, $3, $4, $5, $6)',
          [event.id, event.aggregateId, event.type, event.data, event.timestamp, event.version]
        );
      }

      await client.query('COMMIT');
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }

  async getEvents(aggregateId: string): Promise<Event[]> {
    const result = await this.db.query(
      'SELECT * FROM events WHERE aggregate_id = $1 ORDER BY version',
      [aggregateId]
    );
    return result.rows;
  }
}
```
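Reading an aggregate back means replaying its events over an empty state. Here's a minimal sketch of that fold (the `BankAccount` aggregate and its event types are illustrative, not part of the schema above):

```typescript
// Rebuild current state by folding the event stream, oldest to newest.
class BankAccount {
  balance = 0;
  version = 0;

  static fromEvents(events: Event[]): BankAccount {
    const account = new BankAccount();
    for (const event of events) {
      account.apply(event);
    }
    return account;
  }

  private apply(event: Event): void {
    switch (event.type) {
      case 'MoneyDeposited':
        this.balance += event.data.amount;
        break;
      case 'MoneyWithdrawn':
        this.balance -= event.data.amount;
        break;
    }
    this.version = event.version;
  }
}

// Usage: load the stream, fold it, then save new events using the folded
// version as expectedVersion so concurrent writers are detected.
async function loadAccount(store: EventStore, accountId: string): Promise<BankAccount> {
  const events = await store.getEvents(accountId);
  return BankAccount.fromEvents(events);
}
```

For long-lived aggregates, periodic snapshots keep replay times bounded.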
Saga Pattern
Manage distributed transactions across multiple services.
```typescript
class OrderSaga {
  async handle(orderCreatedEvent: OrderCreatedEvent): Promise<void> {
    const sagaId = generateId();

    try {
      // Step 1: Reserve inventory
      await this.inventoryService.reserveItems(orderCreatedEvent.items);

      // Step 2: Process payment
      await this.paymentService.processPayment(orderCreatedEvent.paymentInfo);

      // Step 3: Ship order
      await this.shippingService.createShipment(orderCreatedEvent.shippingInfo);

      // Success - complete the saga
      await this.orderService.completeOrder(orderCreatedEvent.orderId);

    } catch (error) {
      // Compensate in reverse order
      await this.compensate(sagaId, error);
    }
  }

  private async compensate(sagaId: string, error: Error): Promise<void> {
    // Simplified: a real saga records which steps completed and only undoes those.
    await this.shippingService.cancelShipment(sagaId);
    await this.paymentService.refundPayment(sagaId);
    await this.inventoryService.releaseReservation(sagaId);
    await this.orderService.cancelOrder(sagaId);
  }
}
```
Performance Optimization Patterns
Connection Pooling
```typescript
class DatabasePool {
  private pool: Pool;

  constructor(config: PoolConfig) {
    this.pool = new Pool({
      host: config.host,
      port: config.port,
      database: config.database,
      user: config.user,
      password: config.password,
      max: 20, // Maximum number of connections
      idleTimeoutMillis: 30000,
      connectionTimeoutMillis: 2000,
    });
  }

  async query(text: string, params?: any[]): Promise<QueryResult> {
    const client = await this.pool.connect();
    try {
      return await client.query(text, params);
    } finally {
      client.release();
    }
  }
}
```
Read Replicas
```typescript
class DatabaseManager {
  constructor(
    private writeDb: Database,
    private readReplicas: Database[]
  ) {}

  async write(query: string, params?: any[]): Promise<QueryResult> {
    return await this.writeDb.query(query, params);
  }

  async read(query: string, params?: any[]): Promise<QueryResult> {
    // Pick a random replica to spread read load across the fleet
    const replica = this.readReplicas[
      Math.floor(Math.random() * this.readReplicas.length)
    ];
    return await replica.query(query, params);
  }
}
```
Schema Migration Strategies
Backward Compatible Migrations
```sql
-- Instead of dropping columns immediately
ALTER TABLE users ADD COLUMN new_email VARCHAR(255);

-- Migrate data
UPDATE users SET new_email = old_email WHERE new_email IS NULL;

-- Add constraints after data migration
ALTER TABLE users ALTER COLUMN new_email SET NOT NULL;
ALTER TABLE users ADD CONSTRAINT users_new_email_unique UNIQUE (new_email);

-- Drop old column in a later migration
-- ALTER TABLE users DROP COLUMN old_email;
```
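Between the "add column" and "drop column" steps, the application has to keep both columns consistent so old and new code versions can run side by side. A minimal sketch of that transitional dual write (the `updateEmail` helper and its `db` handle are illustrative):

```typescript
// Transitional write path: old and new columns stay in sync until the
// old column is dropped in a later migration.
async function updateEmail(db: Database, userId: string, email: string): Promise<void> {
  await db.query(
    'UPDATE users SET old_email = $1, new_email = $1 WHERE id = $2',
    [email, userId]
  );
}
```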
Blue-Green Deployments for Schema Changes
```typescript
class MigrationManager {
  async performBlueGreenMigration(migration: Migration): Promise<void> {
    // 1. Create new schema version (Green)
    await this.createGreenSchema(migration);

    // 2. Migrate data to green schema
    await this.migrateData(migration);

    // 3. Switch application to green schema
    await this.switchToGreen();

    // 4. Cleanup blue schema (after verification)
    setTimeout(() => this.cleanupBlueSchema(), 24 * 60 * 60 * 1000); // 24 hours
  }
}
```
Monitoring and Observability
```typescript
class DatabaseMetricsCollector {
  constructor(private db: Database) {}

  async collectMetrics(): Promise<DatabaseMetrics> {
    const [connectionStats, queryStats, tableStats] = await Promise.all([
      this.getConnectionStats(),
      this.getSlowQueries(),
      this.getTableSizes(),
    ]);

    return {
      connections: connectionStats,
      slowQueries: queryStats,
      tableSizes: tableStats,
      timestamp: new Date(),
    };
  }

  private async getSlowQueries(): Promise<SlowQuery[]> {
    // Requires the pg_stat_statements extension; note that PostgreSQL 13+
    // renames these columns to total_exec_time / mean_exec_time.
    const result = await this.db.query(`
      SELECT
        query,
        calls,
        total_time,
        mean_time,
        rows
      FROM pg_stat_statements
      WHERE mean_time > 1000
      ORDER BY mean_time DESC
      LIMIT 10
    `);
    return result.rows;
  }

  // ... getConnectionStats() and getTableSizes() omitted
}
```
Conclusion
Database design patterns are tools in your toolkit—use them wisely based on your specific requirements:
- Start simple with normalized designs
- Denormalize strategically for performance bottlenecks
- Use patterns like Repository and Unit of Work for maintainable code
- Consider CQRS and Event Sourcing for complex domains
- Monitor and optimize continuously
Remember, premature optimization is the root of all evil, but so is ignoring performance until it's too late. Design for your current needs while keeping future scalability in mind.
The key is understanding your data access patterns, consistency requirements, and performance constraints. Choose patterns that align with these requirements rather than following trends blindly.
What database patterns have you found most effective in your projects? Share your experiences on Twitter!