# Load Testing APIs
## Overview
Execute comprehensive load, stress, and soak tests to validate API performance, identify bottlenecks, and establish throughput baselines. Generate test scripts for k6, Artillery, or wrk that simulate realistic traffic patterns with configurable virtual user ramp-up, request distribution, and failure threshold assertions.
## Prerequisites
- Load testing tool installed: k6 (recommended), Artillery, wrk, or Apache JMeter
- Target API deployed in a staging/performance environment (never load test production without safeguards)
- Monitoring stack accessible: Grafana/Prometheus, Datadog, or CloudWatch for correlating test results with server metrics
- API authentication credentials for testing (API keys, test user JWT tokens)
- Baseline performance SLOs defined (target p95 latency, max error rate, minimum throughput)
## Instructions
- Read the API specification and route definitions using Glob and Read to build a complete list of endpoints, identifying high-traffic paths and resource-intensive operations.
- Define test scenarios modeling realistic user behavior: browsing (80% reads), checkout (mixed reads + writes), and spike traffic patterns with appropriate think times between requests.
- Generate k6 or Artillery test scripts with configurable stages: ramp-up (2 min), sustained load (10 min), spike (2 min at 3x), and cool-down (2 min).
- Configure request distribution to match production traffic patterns: use weighted random selection across endpoints rather than a uniform distribution.
- Add threshold assertions for pass/fail criteria: p95 response time < 500ms, error rate < 1%, throughput > 100 requests/second.
- Implement data-driven requests using CSV or JSON fixtures for realistic payloads, unique user IDs, and varied query parameters to avoid cache-only testing.
- Execute baseline test at expected production load, then gradually increase to 2x, 5x, and 10x to identify the breaking point and saturation behavior.
- Analyze results: correlate latency spikes with server metrics (CPU, memory, DB connections, event loop lag), identify the bottleneck (database, network, compute), and document findings.
- Generate a performance report comparing results against SLO thresholds with recommendations for optimization.
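The weighted request distribution described in the steps above can be sketched as a plain JavaScript helper that a k6 or Artillery script could call on each iteration. The endpoint paths and weights below are illustrative assumptions, not a real traffic profile:

```javascript
// Illustrative endpoint mix; weights are assumed percentages of traffic.
const endpoints = [
  { path: '/api/products', weight: 50 },
  { path: '/api/products/:id', weight: 30 },
  { path: '/api/cart', weight: 15 },
  { path: '/api/checkout', weight: 5 },
];

// Pick an endpoint with probability proportional to its weight.
// `rand` is injectable so the selection is testable deterministically.
function pickEndpoint(list, rand = Math.random) {
  const total = list.reduce((sum, e) => sum + e.weight, 0);
  let roll = rand() * total;
  for (const e of list) {
    roll -= e.weight;
    if (roll <= 0) return e.path;
  }
  return list[list.length - 1].path; // floating-point edge-case guard
}
```

Inside a k6 `export default function`, each virtual user would call `pickEndpoint(endpoints)` and issue a request to the chosen path, reproducing the production read/write mix instead of hammering every route equally.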
See ${CLAUDESKILLDIR}/references/implementation.md for the full implementation guide.
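As a concrete starting point, the staged ramp and the pass/fail assertions above map onto a k6 `options` object like the one below. The virtual-user targets are assumed example values (100 VUs sustained, 3x for the spike); the threshold expressions use the SLOs stated in this section. In a real k6 script this object would be declared as `export const options`:

```javascript
// k6 options sketch: VU targets are assumptions; tune to your environment.
const options = {
  stages: [
    { duration: '2m', target: 100 },  // ramp-up
    { duration: '10m', target: 100 }, // sustained load
    { duration: '2m', target: 300 },  // spike at 3x
    { duration: '2m', target: 0 },    // cool-down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // p95 latency under 500 ms
    http_req_failed: ['rate<0.01'],   // error rate under 1%
    http_reqs: ['rate>100'],          // throughput above 100 req/s
  },
};
```

If any threshold is violated, k6 marks the run as failed and exits non-zero, which makes these assertions usable as a CI gate.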
## Output
- ${CLAUDESKILLDIR}/load-tests/scenarios/ - k6/Artillery test scripts per traffic scenario
- ${CLAUDESKILLDIR}/load-tests/data/ - Test data fixtures (users, payloads, tokens)
- ${CLAUDESKILLDIR}/load-tests/thresholds.json - Pass/fail threshold configuration
- ${CLAUDESKILLDIR}/reports/load-test-results.json - Raw test results
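The data-driven requests called for in the instructions can draw on the fixtures directory above. A minimal sketch of turning a JSON fixture of user records into varied request payloads, where the fixture layout, field names, and URL are assumptions for illustration (in k6 the fixture would be loaded once via `open()` wrapped in a `SharedArray`):

```javascript
// Build one request descriptor per fixture user, with a unique
// cache-busting value so responses are not served purely from cache.
// Fixture shape ([{ id: ... }, ...]) and URL pattern are assumptions.
function buildRequests(fixtureJson) {
  const users = JSON.parse(fixtureJson);
  return users.map((u, i) => ({
    url: `/api/cart?userId=${u.id}`,
    body: JSON.stringify({ userId: u.id, cacheBuster: `${Date.now()}-${i}` }),
  }));
}

const sampleFixture = JSON.stringify([{ id: 'u1' }, { id: 'u2' }]);
const requests = buildRequests(sampleFixture);
```

Each virtual user would then index into `requests` (for example by VU number) so concurrent users exercise distinct IDs and payloads rather than a single cached response.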