How I Cut API Response Time from 40s to ≤100ms with Redis
At TravellersLounge, we had a problem: loading the tours page took 40 seconds. Users were bouncing. The fix ended up being a complete rethink of how we were using Redis — and the result was a 99% reduction in response time.
The Original Setup
The previous architecture called 7 different travel APIs sequentially on every page load. No caching, no batching, no fallbacks. Each API added its own latency, and the delays compounded.
Request → API 1 (4s) → API 2 (6s) → API 3 (5s) → ... → Response (40s)
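The compounding effect is easy to demonstrate with simulated calls. A minimal sketch (the delays below are hypothetical, not the real APIs' latencies) showing why sequential awaits sum while parallel awaits are bounded by the slowest call:

```typescript
// Simulated API call: resolves after a fixed delay (hypothetical latencies).
const delay = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

async function sequentialMs(): Promise<number> {
  const start = Date.now();
  await delay(40); // "API 1"
  await delay(60); // "API 2"
  await delay(50); // "API 3"
  return Date.now() - start; // roughly 40 + 60 + 50: delays add up
}

async function parallelMs(): Promise<number> {
  const start = Date.now();
  await Promise.all([delay(40), delay(60), delay(50)]);
  return Date.now() - start; // roughly max(40, 60, 50): bounded by the slowest
}
```

The same structure scales to the real case: seven sequential calls cost the sum of seven latencies, while `Promise.all` costs only the worst one.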
The Fix: Cache-Aside with Smart TTLs
The key insight was that tour data doesn't change every second. We could cache it aggressively and invalidate on write.
```typescript
async function getTours(destination: string) {
  const cacheKey = `tours:${destination}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // Fetch from APIs in parallel
  const [api1, api2, api3] = await Promise.all([
    fetchFromAPI1(destination),
    fetchFromAPI2(destination),
    fetchFromAPI3(destination),
  ]);
  const merged = mergeTourData(api1, api2, api3);

  // Cache for 1 hour
  await redis.setex(cacheKey, 3600, JSON.stringify(merged));
  return merged;
}
```
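The other half of "invalidate on write" is the write path. A self-contained sketch of the full cycle, using an in-memory stand-in for the three Redis commands involved (`get`/`setex`/`del`) and a hypothetical `db` map in place of the real persistence layer:

```typescript
// Minimal in-memory stand-in for the Redis commands used here, so the
// read/write cycle can be shown without a live server.
class FakeRedis {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async setex(key: string, _ttlSeconds: number, value: string) { this.store.set(key, value); }
  async del(key: string) { this.store.delete(key); }
}

type Tour = { name: string; price: number };
const redis = new FakeRedis();
const db = new Map<string, Tour[]>(); // hypothetical persistence layer

// Read path: cache-aside, same shape as getTours above.
async function getToursCached(destination: string): Promise<Tour[]> {
  const key = `tours:${destination}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  const tours = db.get(destination) ?? [];
  await redis.setex(key, 3600, JSON.stringify(tours));
  return tours;
}

// Write path: persist first, then invalidate, so the next read misses
// the cache and repopulates it with fresh data.
async function updateTours(destination: string, tours: Tour[]) {
  db.set(destination, tours);
  await redis.del(`tours:${destination}`);
}
```

Deleting on write rather than overwriting keeps the write path simple: the read path already knows how to rebuild the entry.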
Two changes made the biggest difference:
- Parallel fetching — `Promise.all` instead of sequential `await`
- Cache-aside pattern — check Redis before hitting any external API
Results
| Metric | Before | After |
|--------|--------|-------|
| Cold load | 40s | ~2s |
| Cached load | 40s | ≤100ms |
| Cache hit rate | 0% | ~94% |
Lessons
- Redis is only as good as your cache key design. Be specific enough to avoid stale data, broad enough to get hits.
- Always instrument your cache hit/miss ratio. If it's below 80%, your TTL or key strategy is wrong.
- `Promise.all` is free performance. Use it whenever requests are independent.
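Instrumenting the hit/miss ratio doesn't require much. A sketch of the counters wrapped around the cache lookup (in production these would feed a metrics backend such as StatsD or Prometheus rather than module-level variables, which are shown here only for illustration):

```typescript
// Hypothetical local counters; swap for your metrics client in production.
let hits = 0;
let misses = 0;

// Call this with the result of every redis.get() lookup.
function recordLookup(found: boolean): void {
  if (found) hits++;
  else misses++;
}

// Fraction of lookups served from cache; 0 when nothing has been recorded.
function hitRate(): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}
```

With this in place, a dashboard alert on `hitRate()` dropping below 0.8 catches TTL or key-design regressions before users feel them.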
The full migration also included moving from WordPress to Next.js with SSR — that's a separate post.