analyzing-query-performance

Use this skill when you need to work on query optimization. It provides query performance analysis with comprehensive guidance and automation. Trigger it with phrases like "optimize queries", "analyze performance", or "improve query speed".

Tools: 8
Plugin: query-performance-analyzer
Category: database

Allowed Tools

Read, Write, Edit, Grep, Glob, Bash(psql:*), Bash(mysql:*), Bash(mongosh:*)

Provided by Plugin

query-performance-analyzer

Analyze query performance with EXPLAIN plan interpretation, bottleneck identification, and optimization recommendations

database v1.0.0

Installation

This skill is included in the query-performance-analyzer plugin:

/plugin install query-performance-analyzer@claude-code-plugins-plus


Instructions

Query Performance Analyzer

Overview

Analyze slow database queries using execution plans, wait statistics, and I/O metrics across PostgreSQL, MySQL, and MongoDB. This skill captures EXPLAIN output, identifies sequential scans on large tables, detects missing indexes, measures buffer cache hit ratios, and produces actionable optimization recommendations ranked by expected performance impact.

Prerequisites

  • Database credentials with permissions to run EXPLAIN ANALYZE (PostgreSQL), EXPLAIN FORMAT=JSON (MySQL), or explain() (MongoDB)
  • pg_stat_statements extension enabled for PostgreSQL (provides aggregated query statistics)
  • Access to slow query logs or performance_schema (MySQL)
  • Baseline query execution times for comparison
  • psql, mysql, or mongosh CLI tools installed

Instructions

  1. Identify the slowest queries by examining pg_stat_statements (PostgreSQL): SELECT query, calls, mean_exec_time, total_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 20. For MySQL, enable and query the slow query log or performance_schema.events_statements_summary_by_digest.
  2. Run EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) on each slow query in PostgreSQL, or EXPLAIN ANALYZE in MySQL (8.0.18+). Capture the full execution plan including actual row counts, loop iterations, and buffer usage.
  3. Analyze the execution plan for these red flags:
  • Sequential scans on tables with >10,000 rows (indicates a missing index)
  • Nested loop joins with high outer row counts (consider a hash join or merge join)
  • Sort operations without index support (adding a covering index eliminates the sort)
  • High rows_removed_by_filter relative to rows returned (predicate not selective enough)
  • Bitmap heap scans with a high recheck rate (index selectivity too low)
  4. Check buffer cache performance: SELECT heap_blks_read, heap_blks_hit, heap_blks_hit::float / (heap_blks_hit + heap_blks_read) AS cache_hit_ratio FROM pg_statio_user_tables WHERE relname = 'tablename'. A ratio below 0.95 suggests the working set exceeds available shared_buffers.
  5. Evaluate index usage with SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch FROM pg_stat_user_indexes WHERE schemaname = 'public' ORDER BY idx_scan ASC. Indexes with zero scans are unused and waste write performance.
  6. Check for table bloat using SELECT relname, n_live_tup, n_dead_tup, n_dead_tup::float / GREATEST(n_live_tup, 1) AS dead_ratio FROM pg_stat_user_tables WHERE n_dead_tup > 1000 ORDER BY dead_ratio DESC. A dead tuple ratio above 0.2 indicates the table needs VACUUM.
  7. For each identified issue, generate a specific recommendation: a CREATE INDEX statement with the exact columns, a query rewrite suggestion, or a configuration parameter adjustment.
  8. Estimate the performance impact of each recommendation by comparing the EXPLAIN plan before and after applying the change on a staging database, or by analyzing the expected row reduction from new indexes.
  9. Prioritize recommendations by impact-to-effort ratio: index additions (high impact, low effort) before query rewrites (medium impact, medium effort) before schema changes (high impact, high effort).
  10. Generate a performance analysis report with before/after execution plans, estimated improvements, and an implementation priority ranking.
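The red-flag checks in step 3 can be applied mechanically once a plan has been captured. The sketch below, a minimal illustration rather than the skill's actual implementation, walks a PostgreSQL EXPLAIN (ANALYZE, FORMAT JSON) plan tree and flags large sequential scans and poorly selective filters; the thresholds and the sample plan are illustrative assumptions.

```python
import json

# Illustrative thresholds; tune per workload.
SEQ_SCAN_ROWS = 10_000       # flag seq scans touching more rows than this
FILTER_WASTE_RATIO = 10      # rows discarded by filter vs. rows kept

def walk(node, findings):
    """Recursively visit a PostgreSQL JSON plan node and its children."""
    ntype = node.get("Node Type", "")
    rows = node.get("Actual Rows", 0)
    removed = node.get("Rows Removed by Filter", 0)
    if ntype == "Seq Scan" and rows + removed > SEQ_SCAN_ROWS:
        findings.append(f"Seq Scan on {node.get('Relation Name')}: "
                        f"{rows + removed} rows scanned (consider an index)")
    if rows and removed / max(rows, 1) > FILTER_WASTE_RATIO:
        findings.append(f"{ntype}: filter discarded {removed} rows to keep {rows}")
    for child in node.get("Plans", []):
        walk(child, findings)

def analyze_plan(explain_json: str) -> list[str]:
    # EXPLAIN (FORMAT JSON) returns a one-element array wrapping the plan.
    plan = json.loads(explain_json)[0]["Plan"]
    findings = []
    walk(plan, findings)
    return findings

# Example: a trimmed plan for a query that seq-scans a large table.
sample = json.dumps([{"Plan": {
    "Node Type": "Seq Scan", "Relation Name": "line_items",
    "Actual Rows": 120, "Rows Removed by Filter": 4_999_880}}])
print(analyze_plan(sample))
```

Real plans nest joins, sorts, and scans under "Plans"; the recursive walk covers those the same way.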

Output

  • Slow query inventory with execution frequency, mean/P95 duration, and total time consumed
  • Annotated execution plans highlighting sequential scans, sort bottlenecks, and join inefficiencies
  • Index recommendations as ready-to-execute CREATE INDEX statements with expected impact
  • Query rewrite suggestions with original and optimized SQL side by side
  • Buffer cache analysis with shared_buffers sizing recommendations
  • Performance report ranking all findings by severity and implementation priority
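The priority ranking in the report follows the impact-to-effort ordering from step 9. A small sketch of that scoring, with hypothetical finding categories and weights chosen only to reproduce the ordering stated in the instructions:

```python
# Illustrative impact/effort weights (3 = high, 1 = low), following the
# stated ordering: index additions before rewrites before schema changes.
WEIGHTS = {
    "index_addition": {"impact": 3, "effort": 1},
    "query_rewrite":  {"impact": 2, "effort": 2},
    "schema_change":  {"impact": 3, "effort": 3},
}

def prioritize(findings):
    """Sort findings by impact-to-effort ratio; lower effort breaks ties."""
    def score(f):
        w = WEIGHTS[f["kind"]]
        return (w["impact"] / w["effort"], -w["effort"])
    return sorted(findings, key=score, reverse=True)

report = prioritize([
    {"kind": "schema_change",  "desc": "partition events by month"},
    {"kind": "index_addition", "desc": "CREATE INDEX ON orders (customer_id)"},
    {"kind": "query_rewrite",  "desc": "replace correlated subquery with JOIN"},
])
for item in report:
    print(item["kind"], "-", item["desc"])
```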

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| EXPLAIN ANALYZE takes too long on production | Query modifies data or runs for minutes | Use EXPLAIN without ANALYZE for estimated plans; run EXPLAIN ANALYZE on staging with representative data |
| pg_stat_statements not available | Extension not installed or not in shared_preload_libraries | Run CREATE EXTENSION pg_stat_statements; add it to shared_preload_libraries in postgresql.conf and restart |
| Execution plan differs between staging and production | Different data distribution, statistics, or configuration | Run ANALYZE on staging tables to update statistics; match work_mem, random_page_cost, and effective_cache_size settings |
| Index recommendation causes slow writes | Too many indexes on a write-heavy table | Limit indexes to 5-7 per table; use partial indexes to reduce scope; consider covering indexes to replace multiple single-column indexes |
| Query plan uses wrong index | Stale statistics or cost model miscalculation | Run ANALYZE tablename to refresh statistics; adjust random_page_cost for SSD storage; use SET enable_seqscan = off to test index plans |

Examples

Optimizing a dashboard aggregate query: A query computing daily revenue with GROUP BY date and a JOIN across orders and line_items takes 12 seconds. EXPLAIN reveals a sequential scan on line_items (5M rows). Adding a composite index on (order_id, created_at) with INCLUDE (amount) reduces execution to 200ms by enabling an index-only scan.

Diagnosing N+1 query pattern: Application loads a list page showing 50 products, each with a separate query for category name. pg_stat_statements reveals SELECT name FROM categories WHERE id = $1 called 50 times per page load. Resolution: rewrite as a single JOIN query or implement eager loading in the ORM.
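The N+1 signature above, individually cheap statements executed very often, can be spotted directly in pg_stat_statements output. A minimal sketch, assuming rows have already been fetched as (query, calls, mean_exec_time in ms) tuples; the thresholds are illustrative, not part of the skill:

```python
# Flag statements that are individually fast but called very often:
# a classic N+1 signature. Thresholds are illustrative assumptions.
MIN_CALLS = 1000     # total calls in the sampling window
MAX_MEAN_MS = 5.0    # each individual call is cheap

def find_n_plus_one(stat_rows):
    suspects = []
    for query, calls, mean_ms in stat_rows:
        if calls >= MIN_CALLS and mean_ms <= MAX_MEAN_MS:
            suspects.append((query, calls, calls * mean_ms))
    # Rank by total time consumed, which is what the end user feels.
    return sorted(suspects, key=lambda s: s[2], reverse=True)

rows = [
    ("SELECT name FROM categories WHERE id = $1", 50_000, 0.4),
    ("SELECT * FROM orders WHERE id = $1", 800, 1.2),
    ("INSERT INTO audit_log VALUES ($1, $2)", 12_000, 0.2),
]
for query, calls, total_ms in find_n_plus_one(rows):
    print(f"{calls} calls, {total_ms:.0f} ms total: {query}")
```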

Identifying bloated table causing cache misses: Buffer cache hit ratio drops to 0.78 on the sessions table. Investigation reveals 80% dead tuples due to aggressive INSERT/DELETE cycling without autovacuum tuning. Setting autovacuum_vacuum_scale_factor = 0.01 and running VACUUM FULL restores cache hit ratio to 0.99.
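The two ratios used in that example follow directly from the formulas in steps 4 and 6. A small sketch of the arithmetic, with the thresholds taken from the instructions (0.95 cache hit, 0.2 dead tuples) and the input numbers invented to match the example's shape:

```python
def cache_hit_ratio(heap_blks_hit: int, heap_blks_read: int) -> float:
    """Fraction of heap block requests served from shared_buffers."""
    total = heap_blks_hit + heap_blks_read
    return heap_blks_hit / total if total else 1.0

def dead_ratio(n_dead_tup: int, n_live_tup: int) -> float:
    """Dead tuples relative to live tuples (bloat indicator)."""
    return n_dead_tup / max(n_live_tup, 1)

# Numbers shaped like the sessions-table example above.
hit = cache_hit_ratio(heap_blks_hit=780_000, heap_blks_read=220_000)
dead = dead_ratio(n_dead_tup=800_000, n_live_tup=200_000)
print(f"cache hit ratio {hit:.2f} -> {'OK' if hit >= 0.95 else 'needs tuning'}")
print(f"dead/live ratio {dead:.2f} -> {'OK' if dead <= 0.2 else 'needs VACUUM'}")
```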

Resources

  • PostgreSQL EXPLAIN documentation: https://www.postgresql.org/docs/current/using-explain.html
  • pg_stat_statements reference: https://www.postgresql.org/docs/current/pgstatstatements.html
  • MySQL EXPLAIN output format: https://dev.mysql.com/doc/refman/8.0/en/explain-output.html
  • Use The Index, Luke (SQL indexing guide): https://use-the-index-luke.com/
  • pgMustard EXPLAIN visualizer: https://www.pgmustard.com/
