Next.js 16 cacheComponents: Migrating 91,000 Pages from App Router Caching
We'd been running a large e-commerce catalog on Next.js 14's App Router for about eighteen months when Next.js 16 dropped. 91,247 pages. Product listings, category trees, editorial content, localized variants across 14 markets. The old caching model -- where Server Components were cached by default -- had become a minefield of stale data bugs and revalidateTag spaghetti. When the Next.js team announced cacheComponents and the shift to no-caching-by-default in Next.js 15 (carried forward and refined in v16), we knew it was time. This is the story of that migration: what worked, what didn't, and the performance numbers on the other side.
Table of Contents
- The Caching Problem We Actually Had
- What Changed in Next.js 15 and 16
- Understanding cacheComponents
- Our Migration Strategy for 91,000 Pages
- Implementation: Step by Step
- Performance Results and Benchmarks
- Pitfalls and Gotchas
- When You Should and Shouldn't Use cacheComponents
- FAQ

The Caching Problem We Actually Had
Let me paint the picture. In Next.js 14's App Router, fetch requests in Server Components were cached by default. The Data Cache persisted across deployments. The Full Route Cache stored rendered HTML and RSC payloads at build time. And the Router Cache on the client side kept prefetched segments around for... well, longer than you'd expect.
For a site with 91,000 pages, this default-cache-everything approach created two categories of problems:
Stale data everywhere. Product prices updated in our headless CMS (Sanity, in our case) but the cached fetch results stuck around. We had revalidateTag calls scattered across 47 different server actions. Miss one tag? A customer sees yesterday's price. We literally had a Slack channel called #cache-crimes where the content team reported stale pages.
Build times from hell. Full static generation of 91,000 pages took over 3 hours. We'd moved to ISR with revalidate: 3600 for most pages, but the interaction between ISR, the Data Cache, and on-demand revalidation was genuinely hard to reason about. New developers on the team would spend their first two weeks just understanding the caching layers.
The Mental Model Tax
Here's what I think people underestimate: the cognitive cost of implicit caching. When caching is the default and you opt out, every new component requires you to ask "should this be cached?" and then remember to add the right directive if the answer is no. When no-caching is the default and you opt in, you only think about caching when you actively want it. That's a fundamentally different -- and better -- mental model.
What Changed in Next.js 15 and 16
Next.js 15 was the big philosophical shift. The team flipped the defaults:
| Behavior | Next.js 14 | Next.js 15 | Next.js 16 |
|---|---|---|---|
| `fetch()` in Server Components | Cached by default | Not cached by default | Not cached by default |
| Route Handlers (GET) | Cached by default | Not cached by default | Not cached by default |
| Client Router Cache | 30s (dynamic) / 5min (static) | 0s for page segments | 0s default, configurable |
| Full Route Cache | Enabled for static routes | Same | Same, with cacheLife refinements |
| Component-level caching | unstable_cache | "use cache" directive (experimental) | cacheComponents (beta) |
Next.js 15 introduced the "use cache" directive as an experimental feature behind a flag. Next.js 16, released in October 2025, promoted this to the top-level cacheComponents configuration option (currently in beta) together with the "use cache" directive, cacheLife for defining custom cache profiles, and cacheTag for targeted invalidation.
The key insight: caching moved from being an implicit framework behavior to an explicit developer choice at the component level. This is a huge deal for large sites.
Understanding cacheComponents
The cacheComponents feature in next.config.js enables component-level caching through the "use cache" directive. Here's the basic setup:
```js
// next.config.js (Next.js 16)
/** @type {import('next').NextConfig} */
const nextConfig = {
  // Top-level in Next.js 16; this lived under `experimental` in 15
  cacheComponents: true,
};

module.exports = nextConfig;
```
Once enabled, you can add "use cache" at the top of any async Server Component, server action, or even a layout file:
```tsx
// app/products/[slug]/page.tsx
"use cache";

import { cacheLife, cacheTag } from 'next/cache';

export default async function ProductPage({
  params,
}: {
  params: Promise<{ slug: string }>; // params is a Promise in Next.js 15+
}) {
  const { slug } = await params;
  cacheLife('products'); // custom cache profile
  cacheTag(`product-${slug}`);

  const product = await fetchProduct(slug);

  return (
    <div>
      <h1>{product.name}</h1>
      <ProductDetails product={product} />
      <DynamicPricing productId={product.id} />
      {/* Caution: rendered inside a cached page, DynamicPricing is cached
          with it -- split truly dynamic parts into an uncached parent
          instead (see Phase 3 below) */}
    </div>
  );
}
```
cacheLife Profiles
This is where it gets interesting for large sites. You define named cache profiles in next.config.js:
```js
// next.config.js
const nextConfig = {
  cacheComponents: true,
  cacheLife: {
    products: {
      stale: 300,       // serve stale for 5 minutes
      revalidate: 3600, // revalidate after 1 hour
      expire: 86400,    // hard expire after 24 hours
    },
    editorial: {
      stale: 3600,
      revalidate: 86400,
      expire: 604800,   // 7 days
    },
    navigation: {
      stale: 86400,
      revalidate: 604800,
      expire: 2592000,  // 30 days
    },
  },
};

module.exports = nextConfig;
```
The three-tier model (stale, revalidate, expire) maps onto stale-while-revalidate semantics: stale controls how long the client router serves its cached copy without checking the server, revalidate controls how often the server cache refreshes the entry in the background (serving the stale copy while it does), and expire is the hard ceiling -- past it, the entry is discarded and the next request renders dynamically.
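A toy state function makes the server-side lifecycle concrete. This models the semantics described above, not Next.js internals:

```typescript
type Profile = { stale: number; revalidate: number; expire: number };
type CacheState = 'fresh' | 'stale-while-revalidate' | 'expired';

// Toy model only: classify a server cache entry by its age in seconds.
function cacheState(ageSeconds: number, profile: Profile): CacheState {
  if (ageSeconds >= profile.expire) return 'expired'; // entry gone, next request renders dynamically
  if (ageSeconds >= profile.revalidate) return 'stale-while-revalidate'; // served stale, refreshed in background
  return 'fresh'; // served straight from cache
}

const products: Profile = { stale: 300, revalidate: 3600, expire: 86400 };
```

With the products profile, a two-hour-old entry is served immediately while a background refresh runs; only after a full day does a request pay the dynamic-render cost.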
cacheTag for Invalidation
The cacheTag function replaces the old fetch-level tagging (next: { tags: [...] }) with something more composable; invalidation still goes through revalidateTag:
```ts
import { revalidateTag } from 'next/cache';

// In a webhook handler or server action:
export async function handleProductUpdate(productSlug: string) {
  revalidateTag(`product-${productSlug}`);
  revalidateTag('product-listing'); // invalidate listing pages too
}
```
This part didn't change much from Next.js 15, but it works much better with cacheComponents because you're tagging specific cached components rather than trying to invalidate opaque framework-level caches.

Our Migration Strategy for 91,000 Pages
We didn't do this in one shot. With 91,000 pages across 14 locales, a big-bang migration would've been reckless. Here's how we broke it down:
Phase 1: Upgrade to Next.js 16, No Cache Changes (Week 1-2)
We upgraded from Next.js 14.2 to 16.0 without enabling cacheComponents. This alone changed behavior because fetch requests were no longer cached by default. We expected TTFB regressions and we got them:
- Average TTFB went from 180ms to 340ms on product pages
- Origin server load increased by ~60% (our Sanity CDN held up fine, but our custom API endpoints did not)
- ISR revalidation actually got faster because there was less cache state to manage
This confirmed what we suspected: we'd been leaning heavily on implicit caching, and many of our pages genuinely needed caching -- just explicit, intentional caching.
Phase 2: Audit and Classify Pages (Week 3)
We categorized every route in our app:
| Page Type | Count | Cache Strategy | cacheLife Profile |
|---|---|---|---|
| Product detail pages | 42,000 | Cache with product tag | products (5min stale / 1hr revalidate) |
| Category listing pages | 3,200 | Cache with category tag | products (5min stale / 1hr revalidate) |
| Editorial/blog pages | 8,400 | Cache aggressively | editorial (1hr stale / 24hr revalidate) |
| Localized variants | 31,647 | Same as base page | Inherited from base |
| Account/dynamic pages | 6,000 | No cache | N/A |
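To keep this audit honest over time, we found it useful to encode the classification as a function so CI could flag new routes without an assigned profile. A simplified sketch -- the route patterns and the profileFor helper are ours, not a Next.js API:

```typescript
// Hypothetical audit helper: strip an optional locale prefix (e.g. /fr-fr/)
// and map the first path segment to a cacheLife profile name.
const LOCALE_PREFIX = /^\/[a-z]{2}(?:-[a-z]{2})?(?=\/|$)/;

function profileFor(pathname: string): string | null {
  const path = pathname.replace(LOCALE_PREFIX, '') || '/';
  const segment = path.split('/')[1] ?? '';
  switch (segment) {
    case 'products':
    case 'categories':
      return 'products';
    case 'blog':
      return 'editorial';
    case 'account':
    case 'cart':
      return null; // fully dynamic: never cached
    default:
      return 'default';
  }
}
```

Localized variants fall out of this for free: stripping the locale prefix means /fr-fr/products/x inherits the same profile as /products/x, matching the "inherited from base" row above.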
Phase 3: Enable cacheComponents, Add Directives (Week 4-6)
We enabled the flag and started adding "use cache" directives. The key decision: we cached at the page level for most routes, but at the component level for pages with mixed static/dynamic content.
For product pages, the product info and images were cached, but the pricing component and inventory status were left uncached:
```tsx
// components/ProductInfo.tsx
"use cache";

import { cacheLife, cacheTag } from 'next/cache';

export async function ProductInfo({ slug }: { slug: string }) {
  cacheLife('products');
  cacheTag(`product-${slug}`, 'product-info');

  const product = await getProduct(slug);

  return (
    <section>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      <ProductImages images={product.images} />
    </section>
  );
}
```

```tsx
// components/DynamicPricing.tsx
// NO "use cache" directive -- always fresh
export async function DynamicPricing({ productId }: { productId: string }) {
  const pricing = await getPricing(productId); // hits the pricing API on every request
  return (
    <div className="pricing">
      <span className="price">${pricing.current}</span>
      {pricing.onSale && <span className="was-price">${pricing.original}</span>}
    </div>
  );
}
```
Phase 4: Webhook Integration (Week 7)
We rewired our Sanity webhooks to call revalidateTag with the right tags. This was actually simpler than our old setup because tags were now explicit in the code, not scattered across fetch options.
```ts
// app/api/revalidate/route.ts
import { revalidateTag } from 'next/cache';
import { NextRequest } from 'next/server';

export async function POST(request: NextRequest) {
  const body = await request.json();
  const secret = request.headers.get('x-webhook-secret');

  if (secret !== process.env.REVALIDATION_SECRET) {
    return new Response('Unauthorized', { status: 401 });
  }

  switch (body._type) {
    case 'product':
      revalidateTag(`product-${body.slug.current}`);
      revalidateTag('product-listing');
      break;
    case 'category':
      revalidateTag(`category-${body.slug.current}`);
      revalidateTag('navigation');
      break;
    case 'article':
      revalidateTag(`article-${body.slug.current}`);
      break;
  }

  return new Response('OK', { status: 200 });
}
```
Implementation: Step by Step
If you're doing a similar migration, here's the practical playbook we'd recommend (and what we now use for Next.js development projects at Social Animal):
Step 1: Enable the Flag
```js
// next.config.js
module.exports = {
  cacheComponents: true,
  cacheLife: {
    // Start with sensible defaults
    default: {
      stale: 60,
      revalidate: 900,
      expire: 86400,
    },
  },
};
```
Step 2: Find Your Hot Paths
Use your analytics to identify the pages that get the most traffic and where TTFB matters most. For us, it was category pages (high traffic, relatively stable content) and product pages (high traffic, moderately dynamic content).
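We exported per-route request counts and p75 TTFB from analytics and sorted by a simple impact score. The scoring below is our own heuristic, not anything Vercel provides:

```typescript
type RouteStats = { route: string; requests: number; p75TtfbMs: number };

// Heuristic: routes with lots of traffic AND slow TTFB benefit most
// from "use cache", so rank by the product of the two.
function rankByCacheImpact(stats: RouteStats[], top: number): string[] {
  return [...stats]
    .sort((a, b) => b.requests * b.p75TtfbMs - a.requests * a.p75TtfbMs)
    .slice(0, top)
    .map((s) => s.route);
}
```

For us this put category and product pages at the top of the list, which matched intuition -- but running the numbers caught a few surprisingly hot editorial pages we'd have otherwise left for last.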
Step 3: Add `"use cache"` Top-Down
Start with layouts. If your root layout fetches navigation data, cache that first -- it's the highest-impact, lowest-risk change:
```tsx
// app/layout.tsx
// The cached piece here is <Navigation />, which carries its own
// "use cache" directive; the layout itself stays uncached, so child
// pages still render independently.
import { Navigation } from '@/components/Navigation';

export default async function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        <Navigation /> {/* This component has its own "use cache" */}
        {children}
      </body>
    </html>
  );
}
```
Step 4: Set Up Monitoring
We used Vercel's built-in analytics plus custom logging to track cache hit rates. In the first week after enabling cacheComponents, our cache hit rate was only 34%. After tuning stale durations, it climbed to 78%.
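The custom logging was nothing fancy: we sampled the x-vercel-cache response header and aggregated it. The header values are Vercel's; counting STALE as a hit (it is still served from cache while revalidating) was our own choice:

```typescript
type CacheHeader = 'HIT' | 'STALE' | 'MISS' | 'BYPASS' | 'PRERENDER' | 'REVALIDATED';

// STALE still serves from cache (while revalidating in the background),
// so we count it as a hit; everything else is a miss for our purposes.
function cacheHitRate(samples: CacheHeader[]): number {
  if (samples.length === 0) return 0;
  const hits = samples.filter((s) => s === 'HIT' || s === 'STALE').length;
  return Math.round((hits / samples.length) * 100);
}
```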
Performance Results and Benchmarks
Here are the real numbers after the full migration, measured over a 30-day period on Vercel's Pro plan:
| Metric | Before (Next.js 14) | After Phase 1 (v16, no cache) | After Full Migration |
|---|---|---|---|
| Avg TTFB (product pages) | 180ms | 340ms | 95ms |
| Avg TTFB (category pages) | 220ms | 410ms | 72ms |
| Avg TTFB (editorial pages) | 150ms | 280ms | 45ms |
| P99 TTFB (all pages) | 1,200ms | 2,100ms | 380ms |
| Build time (full) | 3h 12min | 2h 48min | 48min |
| Vercel function invocations/day | 2.4M | 3.8M | 1.1M |
| Monthly Vercel bill | ~$840 | ~$1,200 | ~$520 |
| Cache hit rate | Unknown (implicit) | N/A | 78% |
| Stale content incidents (#cache-crimes) | 8-12/week | 0 | 1-2/month |
The build time improvement deserves explanation. With cacheComponents, we moved away from generating all 91,000 pages at build time. Instead, we statically generated only the top 5,000 pages (by traffic) and let the rest generate on-demand with caching. The "use cache" directive meant those on-demand pages got cached after first visit, with cacheLife controlling staleness.
The Vercel bill drop was significant. Fewer function invocations (because of explicit component caching) plus shorter build times meant real cost savings. That ~$320/month reduction pays for itself.
Pitfalls and Gotchas
Serialization Boundaries
The "use cache" directive creates a serialization boundary. Everything passed into a cached component as props must be serializable. We had several components that received callback functions or React elements as props -- those broke immediately. The fix was restructuring to use composition patterns instead:
```tsx
// ❌ This breaks with "use cache"
"use cache";
export async function ProductCard({ product, onAddToCart }) {
  // onAddToCart is a function -- not serializable!
}
```

```tsx
// ✅ This works
"use cache";
export async function ProductCard({ product }) {
  return (
    <div>
      <h2>{product.name}</h2>
      {/* AddToCartButton is a Client Component, not cached */}
      <AddToCartButton productId={product.id} />
    </div>
  );
}
```
Dynamic Params and Cache Key Explosion
With 91,000 pages, each with unique params, the cache key space is enormous. We hit Vercel's edge cache limits in the first week and had to be more strategic about which pages got long expire values. Low-traffic long-tail pages got shorter cache durations.
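We encoded that tiering as a small helper; the thresholds are purely our own heuristic and worth tuning against your traffic curve:

```typescript
const DAY = 86400; // seconds

// Our tiering heuristic: hot pages are worth keeping cached for a month;
// long-tail pages get a single day so they don't crowd the cache key space.
function expireSeconds(dailyViews: number): number {
  if (dailyViews >= 1000) return 30 * DAY;
  if (dailyViews >= 50) return 7 * DAY;
  return DAY;
}
```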
The `Date.now()` Trap
Any component using "use cache" that calls Date.now() or new Date() inside the cached function will cache that timestamp. We found this in a "last updated" display that showed the same time for hours. The fix: move time-sensitive logic to a Client Component or an uncached Server Component.
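The trap is easy to reproduce outside Next.js with any memoizer -- whatever Date.now() returned on the first call is frozen forever. A toy memoizer standing in for the real cache:

```typescript
// Minimal once-only memoizer, a stand-in for "use cache": the first
// computed value is frozen and every later call returns it unchanged.
function memoizeOnce<T>(fn: () => T): () => T {
  let cached: { value: T } | null = null;
  return () => {
    if (cached === null) cached = { value: fn() };
    return cached.value;
  };
}

// The "last updated" bug in miniature: the timestamp never advances.
const lastUpdated = memoizeOnce(() => Date.now());
```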
Nested Cache Boundaries
When you nest cached components inside other cached components, the inner cache has its own lifecycle. This is powerful but confusing. We established a team convention: cache at the page level OR the component level, not both, unless there's a clear reason.
When You Should and Shouldn't Use cacheComponents
Use it when:
- You have more than a few hundred pages and ISR build times are painful
- Your content has clear freshness requirements that vary by section
- You need granular control over what's cached vs. always-fresh
- You're running on Vercel or a platform that supports the Next.js cache layers
- You want to reduce infrastructure costs on high-traffic sites
Don't use it when:
- Your site is small enough that full SSG works fine
- Every page is fully dynamic (user-specific content everywhere)
- You're not on a hosting platform that supports Next.js caching infrastructure
- Your team is new to Next.js -- get comfortable with the basics first
If you're evaluating whether your project needs this level of caching control, or if a different framework like Astro might be a better fit for your content-heavy site, that's worth thinking through before committing to a migration.
For projects where content comes from multiple headless CMS sources, the cacheTag system in Next.js 16 works beautifully with headless CMS architectures -- each content type gets its own invalidation channel.
FAQ
What is cacheComponents in Next.js 16?
cacheComponents is a configuration option in Next.js 16 (currently in beta) that enables the "use cache" directive for Server Components. It lets you explicitly mark which components should be cached and define custom cache profiles using cacheLife. It's the evolution of the "use cache" directive that was experimental in Next.js 15.
How is cacheComponents different from ISR (Incremental Static Regeneration)?
ISR caches entire pages and revalidates them on a time-based schedule. cacheComponents lets you cache individual components within a page, each with different cache lifetimes. A single page can have a header cached for 24 hours, product info cached for 1 hour, and pricing that's never cached. ISR can't do that -- it's all or nothing at the page level.
Do I need to be on Vercel to use cacheComponents?
No, but the experience is best on Vercel because the caching infrastructure is tightly integrated. Self-hosted Next.js deployments can use cacheComponents with the default file-system cache or a custom cache handler, but you won't get the edge distribution benefits. Platforms like Netlify and Cloudflare are adding support, but as of late 2025, Vercel remains the most complete implementation.
How do I invalidate cached components in Next.js 16?
You use cacheTag() inside your cached component to assign tags, then call revalidateTag('tag-name') from a server action, route handler, or webhook endpoint. This invalidates all cached components with that tag. It's the same API from Next.js 15, but it's more useful now because you're tagging explicit cached components rather than implicit framework caches.
Will cacheComponents reduce my Vercel bill?
It can significantly reduce costs. In our case, function invocations dropped by 54% because cached component responses were served from the cache layer instead of invoking serverless functions. The build time reduction also saves on build minutes. Your mileage will vary based on traffic patterns and cache hit rates -- check Vercel's pricing calculator with your current usage.
What happens if I add "use cache" to a component that receives non-serializable props?
You'll get an error -- at build time for prerendered routes, otherwise at request time. The "use cache" directive creates a serialization boundary, so all props must be serializable (strings, numbers, plain objects, arrays). Functions, class instances, and other non-serializable values will fail. Restructure your component to accept only data props and handle interactivity in child Client Components.
Can I use cacheComponents with React Server Components from other frameworks?
No. cacheComponents is a Next.js-specific feature that builds on top of React's Server Components. While the "use cache" directive syntax may eventually become a React standard, the cacheLife profiles and cacheTag system are Next.js APIs. If you're using a framework like Remix or a custom RSC setup, you'll need different caching strategies.
How long does it take to migrate a large Next.js site to cacheComponents?
For our 91,000-page site with a team of 4 developers, the full migration took 7 weeks including testing and monitoring. A smaller site (under 10,000 pages) with a simpler data model could probably do it in 1-2 weeks. The actual code changes are straightforward -- the time goes into auditing your caching needs, testing invalidation flows, and monitoring cache hit rates after deployment. If you'd rather not go it alone, reach out to us -- we've done this a few times now.