Monitoring APIs
Overview
Build real-time API monitoring with metrics collection (request rate, latency percentiles, error rates), health check endpoints, and alerting rules. Instrument API middleware to emit Prometheus metrics or StatsD counters, configure Grafana dashboards with SLO tracking, and implement synthetic monitoring probes for uptime verification.
Prerequisites
- Prometheus + Grafana stack, or Datadog/New Relic/CloudWatch for metrics and dashboards
- Metrics client library: prom-client (Node.js), prometheus_client (Python), or Micrometer (Java)
- Alerting channel configured: PagerDuty, Slack webhook, or email for alert routing
- Structured logging library: Winston, Pino (Node.js), structlog (Python), or Logback (Java)
- Synthetic monitoring tool: Checkly, Uptime Robot, or custom cron-based health probes
Instructions
- Examine existing middleware and logging setup using Grep and Read to identify current observability coverage and gaps.
- Implement metrics middleware that records per-request data: an http_request_duration_seconds histogram (with method, path, and status labels), an http_requests_total counter, and an http_requests_in_flight gauge.
- Create a /health endpoint returning structured health status, including dependency checks (database connectivity, cache availability, external service reachability) with the response time for each.
- Add a /ready endpoint, separate from /health, that returns 503 during startup initialization and graceful shutdown, for load balancer integration.
- Configure histogram buckets aligned with SLO targets: [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] seconds for comprehensive latency distribution.
- Build Grafana dashboard panels: request rate (QPS), p50/p95/p99 latency, error rate percentage, active connections, and per-endpoint breakdown.
- Define alerting rules: error rate > 5% for 5 minutes (critical), p99 latency > 2s for 10 minutes (warning), health check failure for 3 consecutive probes (critical).
- Implement synthetic monitoring that sends periodic requests to critical endpoints from external locations, measuring availability and latency from the consumer perspective.
- Add SLO tracking with error budget calculation: define SLO (99.9% availability, p95 < 500ms), compute burn rate, and alert when error budget consumption exceeds projected pace.
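The Grafana panels listed above typically boil down to a handful of PromQL queries. These assume the http_requests_total and http_request_duration_seconds metric names from the middleware step and a 5-minute rate window:

```promql
# Request rate (QPS)
sum(rate(http_requests_total[5m]))

# p99 latency (swap 0.99 for 0.50 / 0.95 for the other panels)
histogram_quantile(0.99,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le))

# Error rate percentage
100 * sum(rate(http_requests_total{status=~"5.."}[5m]))
    / sum(rate(http_requests_total[5m]))

# Per-endpoint breakdown
sum(rate(http_requests_total[5m])) by (path)
```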
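The alerting thresholds above can be expressed as Prometheus alerting rules along these lines (the api-health job name and 30-second scrape interval behind the 90s window are assumptions):

```yaml
groups:
  - name: api-alerts
    rules:
      - alert: HighErrorRate
        expr: |
          100 * sum(rate(http_requests_total{status=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m])) > 5
        for: 5m
        labels: { severity: critical }
      - alert: HighP99Latency
        expr: |
          histogram_quantile(0.99,
            sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 2
        for: 10m
        labels: { severity: warning }
      - alert: HealthCheckFailing
        # 3 consecutive failed probes at a 30s scrape interval
        expr: max_over_time(up{job="api-health"}[90s]) == 0
        labels: { severity: critical }
```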
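The metrics-middleware step above can be sketched without any dependencies. In a real service the three instruments would come from a client library such as prom-client; this dependency-free version just shows what each instrument records and how cumulative histogram buckets work (the Express-style `(req, res, next)` signature is an assumption):

```javascript
// Illustrative bucket boundaries, aligned with the SLO-driven list below.
const buckets = [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10];

const metrics = {
  requestsTotal: new Map(),              // counter, keyed by method|path|status
  durationBuckets: buckets.map(() => 0), // cumulative histogram counts
  durationSum: 0,
  inFlight: 0,                           // gauge
};

function observeDuration(seconds) {
  metrics.durationSum += seconds;
  // Prometheus histograms are cumulative: every bucket whose upper
  // bound ("le") is >= the observation is incremented.
  buckets.forEach((le, i) => {
    if (seconds <= le) metrics.durationBuckets[i] += 1;
  });
}

// Express-style middleware: increment the in-flight gauge on entry,
// record the counter and histogram when the response finishes.
function metricsMiddleware(req, res, next) {
  metrics.inFlight += 1;
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    metrics.inFlight -= 1;
    const seconds = Number(process.hrtime.bigint() - start) / 1e9;
    const key = `${req.method}|${req.path}|${res.statusCode}`;
    metrics.requestsTotal.set(key, (metrics.requestsTotal.get(key) || 0) + 1);
    observeDuration(seconds);
  });
  next();
}
```

With prom-client the equivalent instruments are `Histogram`, `Counter`, and `Gauge` registered against its default registry and exposed on a /metrics route.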
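The /health endpoint step might look like the following sketch using Node's built-in http module. The `checks` map holds hypothetical dependency probes — in practice each would ping the real database, cache, or upstream service:

```javascript
const http = require('http');

// Hypothetical dependency probes (stubs here); each returns true if the
// dependency is reachable.
const checks = {
  database: async () => true,
  cache: async () => true,
};

// Run every probe, timing each one, and aggregate an overall status.
async function runChecks() {
  const results = {};
  for (const [name, probeFn] of Object.entries(checks)) {
    const start = Date.now();
    let ok = false;
    try { ok = await probeFn(); } catch { ok = false; }
    results[name] = {
      status: ok ? 'up' : 'down',
      responseTimeMs: Date.now() - start,
    };
  }
  const healthy = Object.values(results).every((r) => r.status === 'up');
  return { status: healthy ? 'healthy' : 'degraded', checks: results };
}

// Sketch of the route; call server.listen(port) in real use.
const server = http.createServer(async (req, res) => {
  if (req.url === '/health') {
    const body = await runChecks();
    res.writeHead(body.status === 'healthy' ? 200 : 503,
                  { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(body));
  } else {
    res.writeHead(404).end();
  }
});
```

Returning 503 when degraded lets the same endpoint double as a liveness signal for probes that only inspect the status code.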
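The /ready endpoint step reduces to a flag flipped by lifecycle hooks; the handler names and grace-period wiring below are illustrative:

```javascript
// Readiness flag: the load balancer should stop routing traffic
// whenever /ready returns 503.
let ready = false;

function onStartupComplete() { ready = true; }  // after pools/caches are warm
function onShutdownSignal() { ready = false; }  // before draining connections

function readyHandler(req, res) {
  if (ready) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ready' }));
  } else {
    res.writeHead(503, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'not ready' }));
  }
}

// Typical wiring (illustrative): flip the flag on SIGTERM, then close the
// server after the load balancer has had time to observe the 503s.
// process.on('SIGTERM', () => {
//   onShutdownSignal();
//   setTimeout(() => server.close(), gracePeriodMs);
// });
```

Keeping readiness separate from health matters because a slow dependency should mark the service degraded, not pull it out of rotation mid-request.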
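To see why a bucket boundary should sit exactly at each SLO threshold, here is a sketch of how Prometheus's histogram_quantile() linearly interpolates a percentile from cumulative bucket counts — with a boundary at 0.5s, a p95 estimate of exactly 0.5 is distinguishable from one just over it:

```javascript
// Same boundaries as the SLO-aligned list above.
const bounds = [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10];

// Estimate the q-quantile (0..1) from cumulative bucket counts, using
// linear interpolation within the bucket, as Prometheus does.
function quantile(q, cumulative) {
  const total = cumulative[cumulative.length - 1];
  const rank = q * total;
  for (let i = 0; i < bounds.length; i++) {
    if (cumulative[i] >= rank) {
      const lower = i === 0 ? 0 : bounds[i - 1];
      const prev = i === 0 ? 0 : cumulative[i - 1];
      return lower + (bounds[i] - lower) * (rank - prev) / (cumulative[i] - prev);
    }
  }
  return bounds[bounds.length - 1]; // observation beyond the largest bucket
}
```

The estimate is only as precise as the bucket grid, which is why the boundaries are chosen around the SLO targets rather than spaced uniformly.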
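A minimal synthetic probe can be built on Node's stdlib http client; the target URL, 5-second timeout, and 30-second interval in the comment are illustrative, and real probes should run from several external regions:

```javascript
const http = require('http');

// Fetch a critical endpoint once, measuring availability and latency
// from the consumer's perspective. Errors and timeouts count as down.
function probe(url) {
  return new Promise((resolve) => {
    const start = Date.now();
    const req = http.get(url, (res) => {
      res.resume(); // drain the body
      res.on('end', () => resolve({
        up: res.statusCode >= 200 && res.statusCode < 400,
        latencyMs: Date.now() - start,
      }));
    });
    req.on('error', () => resolve({ up: false, latencyMs: Date.now() - start }));
    req.setTimeout(5000, () => req.destroy()); // slow responses count as down
  });
}

// In production this would run on a schedule and ship results to the
// metrics backend, e.g.:
// setInterval(() => probe('http://api.example.com/health').then(report), 30000);
```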
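The error-budget arithmetic for the SLO-tracking step is simple enough to sketch directly. For a 99.9% availability SLO, 0.1% of requests may fail over the window; burn rate is the observed error rate divided by that allowance, so a burn rate of 1 consumes the budget exactly over the window and anything above 1 consumes it faster (the 30-day window is an assumed default):

```javascript
const SLO = 0.999;
const budget = 1 - SLO; // fraction of requests allowed to fail per window

// Burn rate: 1 = on pace to spend the whole budget over the window.
function burnRate(errorCount, totalCount) {
  if (totalCount === 0) return 0;
  return (errorCount / totalCount) / budget;
}

// Fraction of the window's budget consumed so far, given elapsed days.
function budgetConsumed(errorCount, totalCount, elapsedDays, windowDays = 30) {
  return burnRate(errorCount, totalCount) * (elapsedDays / windowDays);
}
```

A common multiwindow scheme pages when a short-window burn rate is very high (e.g. above ~14 over an hour) and tickets on slower sustained burns; the exact thresholds are a policy choice.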
See ${CLAUDE_SKILL_DIR}/references/implementation.md for the full implementation guide.
Output
${CLAUDE_SKILL_DIR}/src/middleware/metrics.js - Prometheus metrics collection middleware
${CLAUDE_SKILL_DIR}/src/routes/health.js - Health check and readiness endpoints
${CLAUDESKILLDIR