Database Cache Layer
Overview
Implement multi-tier caching strategies using Redis, application-level in-memory caches, and query result caching to reduce database load and improve read latency. This skill covers cache-aside, write-through, and write-behind patterns with proper invalidation strategies, TTL configuration, and cache stampede prevention.
Prerequisites
- Redis server (6.x+) available, or Docker for running one locally: `docker run -p 6379:6379 redis:7-alpine`
- redis-cli installed for cache inspection and debugging
- Application framework with Redis client library (ioredis, redis-py, Jedis, go-redis)
- Database query profiling data identifying read-heavy and slow queries
- Understanding of data freshness requirements (how stale can cached data be)
- Monitoring tools for cache hit rate and Redis memory usage
Instructions
- Profile database queries to identify caching candidates. Focus on queries that: execute more than 100 times per minute, take longer than 50ms, return data that changes less frequently than every 5 minutes, and produce results smaller than 1MB. Use `pg_stat_statements` (PostgreSQL) or the MySQL slow query log.
- Design the cache key schema with a consistent naming convention: `service:entity:identifier:variant`. Examples: `app:user:12345:profile`, `app:products:category:electronics:page:1`. Include a version prefix to enable bulk invalidation: `v2:app:user:12345`.
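A small key-builder helper keeps the convention in one place. This is a minimal Python sketch; the `cache_key` name and the `version` default are illustrative:

```python
def cache_key(*parts, version="v2"):
    """Join key parts with colons under a version prefix, e.g.
    cache_key("app", "user", 12345, "profile") -> "v2:app:user:12345:profile".
    Bumping the version prefix invalidates every key built with it."""
    return ":".join([version, *(str(p) for p in parts)])
```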
- Implement the cache-aside pattern for read-heavy data:
  - Check Redis first: `GET app:user:12345:profile`
  - On cache miss: query the database, then `SET app:user:12345:profile <serialized-profile> EX 3600`
  - On data update: `DEL app:user:12345:profile` to invalidate
  - Wrap in a helper function that abstracts the cache-then-database logic
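The helper can be sketched in Python against any redis-py-compatible client. The names `get_or_load` and `invalidate` are illustrative, and JSON serialization is one possible encoding, not a requirement:

```python
import json

def get_or_load(client, key, loader, ttl=3600):
    """Cache-aside read: return the cached value if present; otherwise
    call `loader` (the database query), cache its result with a TTL,
    and return it. `client` is any object with redis-py's get/set API."""
    cached = client.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    value = loader()                       # cache miss: hit the database
    client.set(key, json.dumps(value), ex=ttl)
    return value

def invalidate(client, key):
    """Delete the key on data update so the next read repopulates it."""
    client.delete(key)
```

With redis-py this might be called as `get_or_load(redis.Redis(), "app:user:12345:profile", lambda: fetch_profile(12345))`, where `fetch_profile` stands in for your database query.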
- Configure TTL values based on data change frequency:
- Static reference data (countries, categories): TTL 24 hours or longer
- User profile data: TTL 15-60 minutes
- Product listings: TTL 5-15 minutes
- Session data: TTL matching session timeout
- Real-time data (inventory counts, prices): TTL 30-60 seconds or skip caching
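The schedule above can be captured in a single configuration table. This is a sketch: the class names and the exact value chosen within each range are assumptions to tune:

```python
TTL_SECONDS = {
    "reference": 24 * 60 * 60,   # static reference data: 24 hours or longer
    "user_profile": 30 * 60,     # within the 15-60 minute range
    "product_listing": 10 * 60,  # within the 5-15 minute range
    "session": 30 * 60,          # match your session timeout (30 min assumed)
    "realtime": 45,              # inventory counts, prices: 30-60 seconds
}

def ttl_for(data_class):
    """Look up the TTL for a data class, failing loudly on unknown classes."""
    return TTL_SECONDS[data_class]
```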
- Implement cache stampede prevention for high-traffic cache keys:
  - Probabilistic early expiration: refresh the cache at `TTL * 0.8` with a probability scaled to load (e.g. `1 / concurrent_requests`), so one request repopulates before mass expiry
  - Distributed lock: `SET app:user:12345:profile:lock 1 NX EX 5` lets one request refresh while others serve stale data
  - Stale-while-revalidate: serve the expired cache entry while refreshing in the background
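The distributed-lock variant can be sketched as follows. The client is any redis-py-compatible object (`set(..., nx=True, ex=...)` maps to `SET ... NX EX ...`), and the key names are illustrative:

```python
import json

def refresh_with_lock(client, key, loader, ttl=3600, lock_ttl=5):
    """Let at most one request repopulate `key`; every other request
    serves whatever (possibly stale) value is still cached, or None."""
    lock_key = key + ":lock"
    if client.set(lock_key, "1", nx=True, ex=lock_ttl):  # SET key:lock 1 NX EX 5
        try:
            value = loader()                             # database query
            client.set(key, json.dumps(value), ex=ttl)
            return value
        finally:
            client.delete(lock_key)
    cached = client.get(key)                             # lock held elsewhere
    return json.loads(cached) if cached is not None else None
```

The lock's own TTL (5 seconds here) bounds how long a crashed refresher can block others from refreshing.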
- Add an application-level L1 cache using an in-memory LRU cache (Node.js: `lru-cache`, Python: cache
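A minimal in-process LRU, checked before Redis, can be sketched like this (a toy version of what libraries such as `lru-cache` provide; not thread-safe):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny in-process L1 cache that evicts the least-recently-used
    entry once `maxsize` is exceeded."""
    def __init__(self, maxsize=1024):
        self.maxsize = maxsize
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)      # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        self.entries[key] = value
        self.entries.move_to_end(key)
        if len(self.entries) > self.maxsize:
            self.entries.popitem(last=False)  # evict least recently used
```

Reads would consult this L1 first, then Redis, then the database; keep L1 TTLs short, since in-process entries cannot be invalidated by a `DEL` issued from another instance.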