
Query Optimization


Query optimization ensures your database queries run efficiently, minimizing resource usage and response time.

Query Optimization Layers
================================================================================
+------------------+
| Application | <-- Query structure, N+1 prevention
+------------------+
|
v
+------------------+
| TypeORM | <-- QueryBuilder, relations, caching
+------------------+
|
v
+------------------+
| Database | <-- Indexes, execution plan
+------------------+
|
v
+------------------+
| Storage | <-- Disk I/O, memory
+------------------+
Optimization Goals:
- Reduce query execution time
- Minimize memory usage
- Reduce database load
- Improve scalability
================================================================================

// Enable query logging. Note: options are passed to the DataSource
// constructor; initialize() itself takes no arguments.
const dataSource = new DataSource({
  // ...
  logging: true, // Log all queries
  logger: 'advanced-console', // Detailed logging
});
await dataSource.initialize();
// Analyze a specific query with EXPLAIN.
// getQueryAndParameters() returns both the SQL and its bound parameters;
// getSql() alone would leave placeholders ($1, $2, ...) unfilled.
const [sql, parameters] = userRepository
  .createQueryBuilder('user')
  .leftJoinAndSelect('user.posts', 'post')
  .where('user.isActive = :active', { active: true })
  .getQueryAndParameters();

// Run EXPLAIN ANALYZE
const analysis = await dataSource.query(`EXPLAIN ANALYZE ${sql}`, parameters);
console.log(analysis);
PostgreSQL EXPLAIN Output
================================================================================
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'test@example.com';
Result:
+------------------------------------------------------------------+
| Index Scan using idx_user_email on users (cost=0.28..8.29 rows=1)|
| Index Cond: (email = 'test@example.com'::text) |
| Planning Time: 0.100 ms |
| Execution Time: 0.025 ms |
+------------------------------------------------------------------+
Key Metrics:
- cost: Estimated computational cost
- rows: Estimated rows returned
- Planning Time: Time to create execution plan
- Execution Time: Actual query execution time
================================================================================

// Entity with indexes
@Entity()
@Index(['email']) // Single column index
@Index(['firstName', 'lastName']) // Composite index
@Index(['createdAt', 'status']) // Composite for common queries
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  @Index() // Column-level index
  email: string;

  @Column()
  firstName: string;

  @Column()
  lastName: string;

  @Column()
  status: string;

  @CreateDateColumn()
  createdAt: Date;
}

// Unique index
@Entity()
@Index(['email'], { unique: true })
export class User {
  @Column()
  email: string;
}

// Partial index (PostgreSQL)
@Entity()
@Index(['status', 'createdAt'], { where: "status = 'active'" })
export class User {
  @Column()
  status: string;

  @CreateDateColumn()
  createdAt: Date;
}
Index Types
================================================================================
+-------------------+------------------------------------------+
| Type | Use Case |
+-------------------+------------------------------------------+
| B-Tree | Default, good for equality and range |
| Hash | Equality comparisons only |
| GiST | Geometric data, full-text search |
| GIN | Array, JSON, full-text search |
| BRIN | Large tables, ordered data |
+-------------------+------------------------------------------+
When to Index:
- Columns in WHERE clauses
- Columns in JOIN conditions
- Columns in ORDER BY
- Columns in GROUP BY
- Foreign key columns
When NOT to Index:
- Small tables (< 1000 rows)
- Columns with low selectivity
- Columns that change frequently
- Columns rarely used in queries
================================================================================
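The "low selectivity" rule can be made concrete: selectivity is the ratio of distinct values to total rows, and values near 1.0 favor an index while values near 0 (e.g. a two-value status column) usually do not. A minimal sketch of the calculation (the `selectivity` helper is illustrative, not a TypeORM API):

```typescript
// Selectivity = distinct values / total rows.
// A unique email column has selectivity 1.0 (ideal for an index);
// a two-value status column on a million rows is near 0 (poor).
function selectivity(distinctValues: number, totalRows: number): number {
  if (totalRows === 0) return 0;
  return distinctValues / totalRows;
}

const emailSelectivity = selectivity(1_000_000, 1_000_000); // 1.0 — index it
const statusSelectivity = selectivity(2, 1_000_000); // 0.000002 — skip it
```

With very low selectivity the planner will often prefer a sequential scan anyway, so the index only adds write overhead.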
src/migrations/AddUserIndexes.ts
import { MigrationInterface, QueryRunner, TableIndex } from 'typeorm';

export class AddUserIndexes1700000001 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Single column index
    await queryRunner.createIndex(
      'users',
      new TableIndex({
        name: 'IDX_USER_EMAIL',
        columnNames: ['email'],
        isUnique: true,
      }),
    );

    // Composite index
    await queryRunner.createIndex(
      'users',
      new TableIndex({
        name: 'IDX_USER_NAME_STATUS',
        columnNames: ['firstName', 'lastName', 'status'],
      }),
    );

    // Partial index (PostgreSQL). Quote "createdAt" in raw SQL:
    // PostgreSQL folds unquoted identifiers to lowercase.
    await queryRunner.query(`
      CREATE INDEX "IDX_USER_ACTIVE"
      ON users ("createdAt")
      WHERE status = 'active'
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropIndex('users', 'IDX_USER_ACTIVE');
    await queryRunner.dropIndex('users', 'IDX_USER_NAME_STATUS');
    await queryRunner.dropIndex('users', 'IDX_USER_EMAIL');
  }
}

N+1 Problem Visualization
================================================================================
Problematic Code:
const users = await userRepository.find(); // 1 query
for (const user of users) {
  const posts = await postRepository.find({ // N queries!
    where: { author: { id: user.id } },
  });
}
Total Queries: 1 + N (where N = number of users)
Example with 100 users:
- 1 query to get users
- 100 queries to get posts for each user
- Total: 101 queries!
================================================================================
// Solution 1: Eager loading with relations
const users = await userRepository.find({
  relations: ['posts'],
});

// Solution 2: QueryBuilder with join
const users = await userRepository
  .createQueryBuilder('user')
  .leftJoinAndSelect('user.posts', 'post')
  .getMany();

// Solution 3: Use IN clause with a map
const users = await userRepository.find();
const userIds = users.map((u) => u.id);
const posts = await postRepository
  .createQueryBuilder('post')
  .where('post.authorId IN (:...userIds)', { userIds })
  .getMany();

// Map posts to users
const userPostsMap = new Map<number, Post[]>();
posts.forEach((post) => {
  if (!userPostsMap.has(post.authorId)) {
    userPostsMap.set(post.authorId, []);
  }
  userPostsMap.get(post.authorId)!.push(post);
});
users.forEach((user) => {
  user.posts = userPostsMap.get(user.id) || [];
});
// Solution 4: DataLoader pattern (for GraphQL)
import DataLoader from 'dataloader';

// DataLoader passes the batch function a readonly array of keys
const postLoader = new DataLoader<number, Post[]>(async (userIds) => {
  const posts = await postRepository
    .createQueryBuilder('post')
    .where('post.authorId IN (:...userIds)', { userIds: [...userIds] })
    .getMany();
  const postsMap = new Map<number, Post[]>();
  posts.forEach((post) => {
    if (!postsMap.has(post.authorId)) {
      postsMap.set(post.authorId, []);
    }
    postsMap.get(post.authorId)!.push(post);
  });
  // DataLoader requires results in the same order as the input keys
  return userIds.map((id) => postsMap.get(id) || []);
});

// Usage
const posts = await postLoader.load(userId);
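The grouping logic appears in both Solution 3 and the DataLoader batch function; it can be factored into a small helper. A sketch under the same assumptions (`groupBy` is a hypothetical utility, not part of TypeORM or DataLoader):

```typescript
// Generic helper: group an array of items by a key extractor.
// Hypothetical utility; not provided by TypeORM or DataLoader.
function groupBy<T, K>(items: T[], keyOf: (item: T) => K): Map<K, T[]> {
  const map = new Map<K, T[]>();
  for (const item of items) {
    const key = keyOf(item);
    const group = map.get(key);
    if (group) {
      group.push(item);
    } else {
      map.set(key, [item]);
    }
  }
  return map;
}

// Example: group posts by author id
const posts = [
  { id: 1, authorId: 10 },
  { id: 2, authorId: 10 },
  { id: 3, authorId: 20 },
];
const byAuthor = groupBy(posts, (p) => p.authorId);
```

With this helper, the batch function body reduces to a `groupBy` call followed by the order-preserving `map` over the input keys.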

// Bad: Select all columns
const users = await userRepository.find();

// Good: Select only needed columns
const users = await userRepository
  .createQueryBuilder('user')
  .select(['user.id', 'user.name', 'user.email'])
  .getMany();

// Using find options
const users = await userRepository.find({
  select: ['id', 'name', 'email'],
});

// Exclude sensitive columns
@Entity()
export class User {
  @Column({ select: false }) // Excluded by default
  password: string;
}

// Pagination
const users = await userRepository.find({
  skip: (page - 1) * limit,
  take: limit,
});

// QueryBuilder pagination
const users = await userRepository
  .createQueryBuilder('user')
  .skip((page - 1) * limit)
  .take(limit)
  .getMany();

// Cursor-based pagination (more efficient for large datasets)
const users = await userRepository
  .createQueryBuilder('user')
  .where('user.id > :cursor', { cursor: lastId })
  .orderBy('user.id', 'ASC')
  .take(limit)
  .getMany();
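A common refinement for cursor pagination is to fetch `limit + 1` rows: the extra row, if present, tells you a next page exists without a separate COUNT query. A sketch of the page-building step (the `buildPage` helper and `Page` shape are illustrative, not TypeORM APIs):

```typescript
interface Page<T> {
  items: T[];
  nextCursor: number | null;
}

// Build a page from rows fetched with take(limit + 1):
// the extra row, if present, signals that another page exists.
function buildPage<T extends { id: number }>(rows: T[], limit: number): Page<T> {
  const hasNextPage = rows.length > limit;
  const items = hasNextPage ? rows.slice(0, limit) : rows;
  return {
    items,
    // The next request would use .where('user.id > :cursor', { cursor: nextCursor })
    nextCursor: hasNextPage ? items[items.length - 1].id : null,
  };
}

// Example: three rows came back for limit = 2, so a next page exists
const page = buildPage([{ id: 1 }, { id: 2 }, { id: 3 }], 2);
```

The caller passes `page.nextCursor` as `lastId` in the query shown above; a `null` cursor means the final page was reached.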

Join Types Comparison
================================================================================
+-------------------+------------------------------------------+
| Join Type | Use Case |
+-------------------+------------------------------------------+
| INNER JOIN | Only matching rows from both tables |
| LEFT JOIN | All from left, matching from right |
| RIGHT JOIN | All from right, matching from left |
| FULL JOIN | All rows from both tables |
| CROSS JOIN | Cartesian product |
+-------------------+------------------------------------------+
Performance Tips:
- INNER JOIN is usually fastest
- Use LEFT JOIN only when needed
- Index join columns
- Filter early in WHERE clause
================================================================================
// Inner join (faster when you only need matching records)
const usersWithPosts = await userRepository
  .createQueryBuilder('user')
  .innerJoin('user.posts', 'post')
  .getMany();

// Left join with condition
const usersWithPublishedPosts = await userRepository
  .createQueryBuilder('user')
  .leftJoin('user.posts', 'post', 'post.isPublished = :published', {
    published: true,
  })
  .getMany();

// Join without selecting (for WHERE conditions only)
const usersWithActivePosts = await userRepository
  .createQueryBuilder('user')
  .leftJoin('user.posts', 'post')
  .where('post.status = :status', { status: 'active' })
  .select(['user.id', 'user.name'])
  .distinct(true)
  .getMany();

// Multiple joins optimization
const results = await userRepository
  .createQueryBuilder('user')
  .leftJoinAndSelect('user.profile', 'profile')
  .leftJoinAndSelect('user.posts', 'post')
  .leftJoin('post.comments', 'comment') // Don't select, just filter
  .where('comment.isApproved = :approved', { approved: true })
  .getMany();

// Bad: Insert one by one
for (const userData of usersData) {
  await userRepository.save(userData);
}

// Good: Batch insert
await userRepository
  .createQueryBuilder()
  .insert()
  .values(usersData)
  .execute();

// Or using the repository
await userRepository.save(usersData);

// Large batch: Insert in chunks
const chunkSize = 1000;
for (let i = 0; i < usersData.length; i += chunkSize) {
  const chunk = usersData.slice(i, i + chunkSize);
  await userRepository.insert(chunk);
}

// Bad: Update one by one
for (const user of users) {
  user.status = 'active';
  await userRepository.save(user);
}

// Good: Batch update
await userRepository
  .createQueryBuilder()
  .update(User)
  .set({ status: 'active' })
  .where('id IN (:...ids)', { ids: userIds })
  .execute();

// Using CASE for different values
await userRepository
  .createQueryBuilder()
  .update(User)
  .set({
    status: () => `
      CASE
        WHEN id = 1 THEN 'active'
        WHEN id = 2 THEN 'inactive'
        ELSE 'pending'
      END
    `,
  })
  .where('id IN (:...ids)', { ids: [1, 2, 3] })
  .execute();

// Bad: Delete one by one
for (const id of userIds) {
  await userRepository.delete(id);
}

// Good: Batch delete
await userRepository
  .createQueryBuilder()
  .delete()
  .where('id IN (:...ids)', { ids: userIds })
  .execute();

// Or using the repository
await userRepository.delete(userIds);

// Enable the query result cache in the DataSource options
// (options go to the constructor, not to initialize())
const dataSource = new DataSource({
  // ...
  cache: {
    type: 'redis', // or 'ioredis', 'database'
    options: {
      host: 'localhost',
      port: 6379,
    },
    duration: 30000, // 30 seconds default
  },
});
// Cache a specific query
const users = await userRepository
  .createQueryBuilder('user')
  .cache(true) // Use default cache duration
  .getMany();

// Cache with custom duration
const users = await userRepository
  .createQueryBuilder('user')
  .cache(60000) // 60 seconds
  .getMany();

// Cache with custom key
const users = await userRepository
  .createQueryBuilder('user')
  .cache('users_active_list', 60000)
  .getMany();

// Clear cache (queryResultCache is undefined when caching is disabled)
await dataSource.queryResultCache?.clear();
await dataSource.queryResultCache?.remove(['users_active_list']);
src/cache/cache.service.ts
import { Injectable, Inject } from '@nestjs/common';
import { CACHE_MANAGER } from '@nestjs/cache-manager';
import { Cache } from 'cache-manager';

@Injectable()
export class CacheService {
  constructor(
    @Inject(CACHE_MANAGER)
    private cacheManager: Cache,
  ) {}

  async getOrSet<T>(
    key: string,
    factory: () => Promise<T>,
    ttl: number = 30000,
  ): Promise<T> {
    const cached = await this.cacheManager.get<T>(key);
    // Check against null/undefined so cached falsy values
    // (0, '', false) still count as hits
    if (cached != null) {
      return cached;
    }
    const value = await factory();
    await this.cacheManager.set(key, value, ttl);
    return value;
  }

  async invalidate(key: string): Promise<void> {
    // del removes a single exact key; pattern-based invalidation
    // needs store-specific support (e.g. Redis SCAN)
    await this.cacheManager.del(key);
  }
}

// Usage in service
@Injectable()
export class UsersService {
  constructor(
    private cacheService: CacheService,
    private userRepository: Repository<User>,
  ) {}

  async findOne(id: number): Promise<User | null> {
    return this.cacheService.getOrSet(
      `user:${id}`,
      () => this.userRepository.findOne({ where: { id } }),
      60000,
    );
  }

  async update(id: number, data: UpdateUserDto): Promise<User> {
    const user = await this.userRepository.save({ id, ...data });
    await this.cacheService.invalidate(`user:${id}`);
    return user;
  }
}

src/config/database.config.ts
export default new DataSource({
  type: 'postgres',
  // ... connection settings

  // Connection pool settings
  poolSize: 20, // Number of connections in pool

  // Advanced pool settings (for PostgreSQL)
  extra: {
    max: 20, // Maximum connections
    min: 5, // Minimum connections
    idleTimeoutMillis: 30000, // Close idle connections after 30s
    connectionTimeoutMillis: 2000, // Connection timeout
    acquireTimeoutMillis: 60000, // Acquire connection timeout
  },
});
Connection Pool Sizing
================================================================================
Formula:
Pool Size = (Core Count * 2) + Effective Spindle Count
For typical web application:
- Development: 5-10 connections
- Production: 20-50 connections
- High-traffic: 50-100 connections
Considerations:
- Database server limits
- Memory per connection
- Expected concurrent requests
- Query execution time
Warning Signs:
- Connection timeouts
- "Too many connections" errors
- Slow response times under load
================================================================================
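The sizing formula above can be computed at startup. A sketch, assuming Node.js and treating SSD-backed storage as a single effective spindle (the helper name is illustrative):

```typescript
import * as os from 'os';

// Pool Size = (core count * 2) + effective spindle count.
// For SSD-backed databases the effective spindle count is
// commonly taken as 1.
function recommendedPoolSize(coreCount: number, effectiveSpindles: number): number {
  return coreCount * 2 + effectiveSpindles;
}

const poolSize = recommendedPoolSize(os.cpus().length, 1);
console.log(`Suggested pool size: ${poolSize}`);
```

Treat the result as a starting point, then adjust against the warning signs listed above and your database server's own connection limit.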

Query Optimization Checklist
================================================================================
[ ] 1. Analyze slow queries with EXPLAIN ANALYZE
[ ] 2. Add indexes on frequently queried columns
[ ] 3. Use composite indexes for multi-column queries
[ ] 4. Prevent N+1 queries with proper joins
[ ] 5. Select only required columns
[ ] 6. Implement pagination for large result sets
[ ] 7. Use batch operations for bulk changes
[ ] 8. Cache frequently accessed data
[ ] 9. Configure connection pool appropriately
[ ] 10. Monitor query performance regularly
================================================================================
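Item 10 on the checklist has direct TypeORM support: the `maxQueryExecutionTime` DataSource option logs any query that runs longer than the given number of milliseconds. A minimal configuration sketch:

```typescript
import { DataSource } from 'typeorm';

// Any query slower than 1 second is reported through the logger,
// making slow queries visible without logging every statement.
const dataSource = new DataSource({
  type: 'postgres',
  // ... connection settings
  maxQueryExecutionTime: 1000, // milliseconds
  logger: 'advanced-console',
});
```

This is cheaper than `logging: true` in production, since only offending queries are written to the log.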

Query Optimization Quick Reference
+--------------------+------------------------------------------+
| Technique          | Benefit                                  |
+--------------------+------------------------------------------+
| Indexing           | Faster data lookup                       |
| N+1 Prevention     | Reduce query count                       |
| Column Selection   | Reduce data transfer                     |
| Pagination         | Limit result size                        |
| Batch Operations   | Reduce round trips                       |
| Query Caching      | Skip database entirely                   |
| Connection Pool    | Reuse connections                        |
+--------------------+------------------------------------------+
| Tool               | Purpose                                  |
+--------------------+------------------------------------------+
| EXPLAIN ANALYZE    | Understand query plan                    |
| Query Logging      | Identify slow queries                    |
| Performance Monitor| Track metrics over time                  |
+--------------------+------------------------------------------+

Chapter 32: Indexing Strategies


Last Updated: February 2026