
Slow database queries are among the most common performance bottlenecks in modern applications. A single unoptimized query can cripple response times, increase server load, and degrade user experience. This guide explores proven techniques to analyze, optimize, and maintain high-performance database queries across SQL and NoSQL systems.
1. Query Performance Analysis
Identifying Problem Queries
Method | Tools | What to Look For |
---|---|---|
Slow Query Logs | MySQL Slow Log, PostgreSQL log_min_duration_statement | Queries >100ms |
Execution Plans | EXPLAIN ANALYZE (SQL), .explain() (MongoDB) | Full table scans, missing indexes |
Monitoring Dashboards | Datadog, New Relic, pgBadger | CPU-heavy queries, lock contention |
Example: PostgreSQL EXPLAIN
EXPLAIN ANALYZE SELECT * FROM users WHERE last_login < '2023-01-01';
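The same plan-reading habit can be sketched portably with SQLite's EXPLAIN QUERY PLAN (a hypothetical users table; the output format differs from PostgreSQL's EXPLAIN ANALYZE, but the full-scan red flag is the same):

```python
import sqlite3

# In-memory database with a hypothetical users table (no index on last_login yet).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, last_login TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute a query;
    # the fourth column of each row is the human-readable detail.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

query = "SELECT * FROM users WHERE last_login < '2023-01-01'"

detail_before = plan(query)
print(detail_before)  # e.g. "SCAN users" -- a full table scan, the red flag

conn.execute("CREATE INDEX idx_users_last_login ON users(last_login)")
detail_after = plan(query)
print(detail_after)  # now uses idx_users_last_login instead of scanning
```

The exact wording of the detail line varies between SQLite versions, but "SCAN" with no index name is the pattern to hunt for in any planner's output.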
2. Indexing Strategies
Effective Index Design
- B-Tree Indexes (Default for most cases)
CREATE INDEX idx_users_email ON users(email);
- Partial Indexes (For filtered queries)
CREATE INDEX idx_active_users ON users(id) WHERE is_active = true;
- Composite Indexes (Multiple columns)
CREATE INDEX idx_user_geo ON users(country, city);
Indexing Pitfalls
- ❌ Over-indexing (slows down writes)
- ❌ Unused indexes (check with pg_stat_all_indexes)
- ❌ Wrong column order in composite indexes
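The column-order pitfall is easy to demonstrate with SQLite's query plans (hypothetical users table; the same rule applies to PostgreSQL and MySQL B-Tree indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT, city TEXT)")
conn.execute("CREATE INDEX idx_user_geo ON users(country, city)")

def plan(sql):
    # Top-level detail line of SQLite's query plan.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][3]

# Filtering on the leading column: the composite index supports a seek.
good = plan("SELECT country, city FROM users WHERE country = 'DE'")
# Filtering on the second column alone: no seek is possible, so the
# planner falls back to a scan -- the wrong-column-order pitfall.
bad = plan("SELECT country, city FROM users WHERE city = 'Berlin'")

print(good)  # SEARCH ... idx_user_geo (country=?)
print(bad)   # SCAN ...
```

A composite index on (country, city) behaves like a phone book sorted by last name, then first name: useful for lookups by last name, useless for lookups by first name alone.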
3. Query Restructuring
Optimization Patterns
Issue | Solution | Performance Gain |
---|---|---|
SELECT * | Specify only needed columns | 2-5x faster |
OR conditions | Use UNION ALL instead | Up to 10x faster |
Nested subqueries | Convert to JOINs | 3-8x faster |
LIKE '%term%' | Full-text search (TSVECTOR) | 100x faster |
Example: Subquery → JOIN
-- Before (slow)
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE premium = true);

-- After (optimized)
SELECT o.*
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.premium = true;
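The OR → UNION ALL rewrite from the table deserves one caution: a row matching both branches would be returned twice, so the second branch must exclude rows the first already matched. A small SQLite check (hypothetical orders table) confirms the rewrite returns the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, priority INTEGER)")
conn.executemany("INSERT INTO orders (status, priority) VALUES (?, ?)",
                 [("open", 1), ("closed", 3), ("open", 3), ("pending", 2)])

# Original OR query -- often prevents the planner from using an index on either column.
or_rows = conn.execute(
    "SELECT id FROM orders WHERE status = 'open' OR priority = 3 ORDER BY id"
).fetchall()

# UNION ALL rewrite: each branch can use its own index; the second branch
# excludes rows already matched by the first, so no duplicates appear.
union_rows = conn.execute(
    """
    SELECT id FROM orders WHERE status = 'open'
    UNION ALL
    SELECT id FROM orders WHERE priority = 3 AND status <> 'open'
    ORDER BY id
    """
).fetchall()

print(or_rows == union_rows)  # True -- identical result sets
```

Always verify result equivalence like this before shipping a rewritten query; a faster wrong answer is not an optimization.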
4. Database-Specific Techniques
SQL Databases
- PostgreSQL
  - Run VACUUM ANALYZE regularly
  - Tune work_mem for sorts/joins
- MySQL
  - Enable the query cache for read-heavy apps (note: removed in MySQL 8.0; use application-level caching instead)
  - Use FORCE INDEX when the optimizer picks a bad plan
NoSQL Databases
- MongoDB
  - Create covered queries (projection + index)
  - Use $lookup carefully (avoid Cartesian products)
- Redis
  - Prefer SCAN over KEYS
  - Use hash tags for cluster optimization
5. Advanced Optimization
Partitioning
-- Range partitioning by date
CREATE TABLE logs (
    id SERIAL,
    log_date DATE
) PARTITION BY RANGE (log_date);
Materialized Views
CREATE MATERIALIZED VIEW top_products AS
SELECT product_id, COUNT(*) AS order_count
FROM orders
GROUP BY product_id;

-- PostgreSQL has no built-in scheduled refresh; rerun this periodically
-- (e.g. via cron or the pg_cron extension):
REFRESH MATERIALIZED VIEW top_products;
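Since a materialized view is just a precomputed result set, the pattern can be sketched in SQLite (which lacks them) as a summary table plus a refresh function that a scheduled job would call:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, product_id INTEGER)")
conn.executemany("INSERT INTO orders (product_id) VALUES (?)", [(1,), (1,), (2,)])

# Emulated materialized view: rebuild the summary table from the base
# table. A cron job (or pg_cron in PostgreSQL) would run this periodically.
def refresh_top_products(conn):
    conn.execute("DROP TABLE IF EXISTS top_products")
    conn.execute("""
        CREATE TABLE top_products AS
        SELECT product_id, COUNT(*) AS order_count
        FROM orders GROUP BY product_id
    """)

refresh_top_products(conn)
rows = conn.execute("SELECT * FROM top_products ORDER BY product_id").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

Reads then hit the small precomputed table instead of re-aggregating orders on every request, at the cost of staleness between refreshes.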
Connection Pooling
- Configure pool size (usually CPU cores * 2 + 1)
- Use PgBouncer for PostgreSQL
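A minimal sketch of the pooling idea, assuming SQLite connections for portability (a real deployment would use PgBouncer or a driver's built-in pool rather than hand-rolling this):

```python
import os
import queue
import sqlite3

# Minimal connection pool: a fixed set of reusable connections handed out
# through a thread-safe queue. Size follows the CPU cores * 2 + 1 heuristic.
class ConnectionPool:
    def __init__(self, database, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self):
        return self._pool.get()   # blocks when all connections are in use

    def release(self, conn):
        self._pool.put(conn)

pool_size = (os.cpu_count() or 1) * 2 + 1
pool = ConnectionPool(":memory:", pool_size)

conn = pool.acquire()
result = conn.execute("SELECT 1").fetchone()[0]
pool.release(conn)
print(result)  # 1
```

The point of the bounded queue is backpressure: when every connection is busy, new requests wait instead of opening fresh connections and overwhelming the database.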
6. Monitoring & Maintenance
Preventive Measures
- Weekly Checks
  - Long-running transactions
  - Index fragmentation
- Monthly Tasks
  - Update statistics (ANALYZE)
  - Review query plans for regressions
Alerting Rules
- Queries exceeding 500ms
- More than 1000 sequential scans/hour
- Lock waits > 200ms
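The alerting rules above amount to simple threshold checks; a toy sketch (metric names are hypothetical, and a real setup would live in Datadog, New Relic, or Prometheus rules):

```python
# Thresholds mirroring the alerting rules in the text.
THRESHOLDS = {
    "query_ms": 500,             # queries exceeding 500ms
    "seq_scans_per_hour": 1000,  # more than 1000 sequential scans/hour
    "lock_wait_ms": 200,         # lock waits > 200ms
}

def violations(metrics):
    # Return the names of all metrics that exceed their threshold.
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0) > limit]

sample = {"query_ms": 750, "seq_scans_per_hour": 400, "lock_wait_ms": 250}
print(violations(sample))  # ['query_ms', 'lock_wait_ms']
```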
Conclusion
Database optimization is an iterative process requiring:
- Baseline Measurement – Identify slow queries
- Targeted Indexing – Add strategic indexes
- Query Refactoring – Rewrite inefficient logic
- Continuous Monitoring – Prevent regressions
Key Statistics:
- Proper indexing can improve queries by 100-10,000x
- 83% of database performance issues stem from poorly written queries (SolarWinds survey)
graph TD
    A[Identify Slow Queries] --> B[Analyze Execution Plan]
    B --> C{Missing Index?}
    C -->|Yes| D[Add Appropriate Index]
    C -->|No| E[Restructure Query]
    D --> F[Verify Improvement]
    E --> F
Next Steps:
- Enable slow query logging
- Pick 3 worst-performing queries to optimize
- Schedule monthly query plan reviews
By systematically applying these techniques, you can often achieve sub-50ms query performance even at scale.