I've been running Next.js in production since version 9. For most of that time, Vercel was the obvious choice — deploy, forget, move on. But somewhere around 2024, the invoices started looking like car payments. When your hosting bill for a marketing site eclipses your actual cloud infrastructure costs, something's broken. That's when I started digging into OpenNext, and after migrating three production apps off Vercel in the past year, I have opinions.

This isn't a "Vercel bad" article. Vercel is genuinely excellent. But it's not the only option anymore, and for many teams, it's not the right one. Let me walk you through everything I've learned about self-hosting Next.js with OpenNext — the good, the ugly, and the surprisingly affordable.

OpenNext: Self-Host Next.js on AWS, Cloudflare, or VPS Without Vercel

What Is OpenNext and Why Does It Exist

Next.js was designed to run on Vercel. That's not conspiracy — it's architecture. Features like ISR (Incremental Static Regeneration), middleware, image optimization, and server actions all have Vercel-specific implementations baked in. When you run next start on a random server, you get a subset of what Next.js can do.

OpenNext is an open-source adapter that takes your Next.js build output and transforms it into deployment packages that work on other platforms. It started as an SST community project focused on AWS Lambda, but as of v3 (the current major version in 2025), it supports multiple deployment targets including Cloudflare Workers, traditional Node.js servers, and more.

Here's what OpenNext actually handles:

  • ISR and revalidation — The tag-based revalidation system that Vercel implements with their internal infrastructure? OpenNext recreates it using DynamoDB + SQS on AWS, or KV stores on Cloudflare.
  • Image optimization — Next.js's <Image> component relies on an optimization API. OpenNext packages a Sharp-based optimizer or routes to platform-specific solutions.
  • Middleware — Runs at the edge on Vercel. OpenNext maps this to CloudFront Functions, Cloudflare Workers, or runs it in-process on VPS.
  • Server actions — Full support, routed through the appropriate server function.
  • Streaming and partial prerendering — Support has matured significantly in OpenNext v3.x.
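The tag-based revalidation in that first bullet is easier to grasp with a toy model. Here's a minimal in-memory sketch of the tag-to-path index OpenNext maintains (DynamoDB on AWS, KV on Cloudflare) — the names and structure are illustrative, not the adapter's actual internals:

```typescript
// Toy model of tag-based revalidation: a cache of rendered pages plus a
// tag -> paths index. revalidateTag marks matching entries stale; the next
// request re-renders and repopulates them. Names are illustrative only.
type CacheEntry = { body: string; stale: boolean };

class TagCache {
  private entries = new Map<string, CacheEntry>();
  private tagIndex = new Map<string, Set<string>>(); // tag -> paths

  set(path: string, body: string, tags: string[]): void {
    this.entries.set(path, { body, stale: false });
    for (const tag of tags) {
      if (!this.tagIndex.has(tag)) this.tagIndex.set(tag, new Set());
      this.tagIndex.get(tag)!.add(path);
    }
  }

  revalidateTag(tag: string): void {
    for (const path of this.tagIndex.get(tag) ?? []) {
      const entry = this.entries.get(path);
      if (entry) entry.stale = true;
    }
  }

  get(path: string): CacheEntry | undefined {
    return this.entries.get(path);
  }
}

const cache = new TagCache();
cache.set("/blog/hello", "<html>…</html>", ["posts"]);
cache.revalidateTag("posts");
console.log(cache.get("/blog/hello")?.stale); // true
```

The real implementations add TTLs, background revalidation, and durability, but the tag index is the core idea.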

What OpenNext Is Not

It's not a hosting platform. It's not a CDN. It's a build adapter — a translation layer between Next.js's output and your infrastructure. You still need to actually run the thing somewhere.

The Self-Hosting Landscape in 2025-2026

The ecosystem has exploded since I first started looking at this. Here's where things stand:

| Platform | OpenNext Support | Maturity | Best For |
|---|---|---|---|
| AWS (via SST) | First-class | Production-ready | Teams already on AWS |
| Cloudflare Workers | Official adapter | Stable (some edge cases) | Edge-first apps, cost optimization |
| Docker/VPS | Community + official | Stable | Simple deployments, existing infra |
| Kubernetes | Community Helm charts | Maturing | Enterprise, existing K8s clusters |
| Netlify | Built-in (own adapter) | Production-ready | Netlify-committed teams |
| Google Cloud Run | Community | Experimental | GCP shops |

The two paths I've personally battle-tested and can vouch for are AWS via SST and Docker on a VPS. Cloudflare is the exciting newcomer that's getting better monthly.

Deployment Target: AWS with SST

This is the golden path. SST (formerly Serverless Stack) has built-in Next.js support powered by OpenNext, and it's where the most engineering effort has gone.

Architecture Overview

When you deploy Next.js via SST on AWS, here's what gets created:

  • CloudFront distribution — Your CDN, handles static assets and routing
  • Lambda function(s) — Server-side rendering, API routes, server actions
  • S3 bucket — Static assets, pre-rendered pages, ISR cache
  • DynamoDB table — ISR tag mapping for revalidation
  • SQS queue — Async revalidation processing
  • CloudFront Function or Lambda@Edge — Middleware execution

Sounds like a lot. It is. But SST abstracts all of it into about 20 lines of config.

SST Configuration

Here's a real sst.config.ts from one of my production projects:

/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return {
      name: "my-nextjs-app",
      removal: input.stage === "production" ? "retain" : "remove",
      home: "aws",
      providers: {
        aws: {
          region: "us-east-1",
        },
      },
    };
  },
  async run() {
    const site = new sst.aws.Nextjs("Site", {
      domain: {
        name: "myapp.com",
        dns: sst.aws.dns(),
      },
      warm: 5, // keep 5 Lambda instances warm
      server: {
        memory: "1024 MB",
      },
      environment: {
        DATABASE_URL: process.env.DATABASE_URL!,
        NEXT_PUBLIC_API_URL: "https://api.myapp.com",
      },
    });

    return {
      url: site.url,
    };
  },
});

Then deploy:

npx sst deploy --stage production

First deployment takes 8-12 minutes (CloudFront distribution propagation). Subsequent deploys are 2-4 minutes.

Lambda Considerations

The biggest gotcha with Lambda-based hosting is cold starts. Next.js server functions aren't tiny — you're looking at 20-80MB bundles depending on your dependencies. Cold starts range from 800ms to 3 seconds.

Mitigations I've used:

  1. Keeping instances warm — SST's warm parameter invokes the function on a schedule to keep N instances hot, which costs pennies in invocations. If you need guaranteed capacity, actual Lambda provisioned concurrency bills at $0.0000041667 per GB-second — roughly $11/month per 1GB instance, so ~$54/month for 5.
  2. Smaller bundles — Audit your server-side dependencies. I found a project importing lodash server-side when we only needed lodash/get. Bundle dropped from 68MB to 31MB.
  3. Regional deployment — Don't use Lambda@Edge for SSR unless you absolutely need it. Single-region Lambda with CloudFront caching is fine for 95% of apps.
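The provisioned-concurrency pricing is worth sanity-checking before committing — a quick sketch, using the per-GB-second rate AWS publishes for us-east-1 (the helper name is mine):

```typescript
// Back-of-envelope cost for Lambda provisioned concurrency.
// Rate assumed from AWS's published us-east-1 pricing.
const RATE_PER_GB_SECOND = 0.0000041667;
const SECONDS_PER_MONTH = 60 * 60 * 24 * 30; // 30-day month

function provisionedCostPerMonth(instances: number, memoryGb: number): number {
  return instances * memoryGb * SECONDS_PER_MONTH * RATE_PER_GB_SECOND;
}

console.log(provisionedCostPerMonth(1, 1).toFixed(2)); // "10.80"
console.log(provisionedCostPerMonth(5, 1).toFixed(2)); // "54.00"
```

Halve the memory and the bill halves with it — another reason the bundle-size audit pays off twice.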


Deployment Target: Cloudflare Workers

Cloudflare's been making serious moves. Their Workers runtime now supports enough Node.js APIs that Next.js can actually run there, and the OpenNext Cloudflare adapter has gotten remarkably stable.

Setup with OpenNext Cloudflare

npm install @opennextjs/cloudflare

Add to your wrangler.toml:

name = "my-nextjs-app"
main = ".open-next/worker.js"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_compat"]

[assets]
directory = ".open-next/assets"
binding = "ASSETS"

[[kv_namespaces]]
binding = "NEXT_CACHE_KV"
id = "your-kv-namespace-id"

Build and deploy:

npx opennextjs-cloudflare build
npx wrangler deploy

The Cloudflare Tradeoffs

Pros:

  • No cold starts — Workers spin up in under 5ms globally
  • Global edge by default — Your SSR runs in 300+ locations
  • Absurd pricing — $5/month for 10 million requests on the paid plan

Cons:

  • Memory limits — 128MB per Worker isolate, on free and paid plans alike. Large Next.js apps can hit this.
  • CPU time limits — 30 seconds on paid plan. Heavy SSR pages can be an issue.
  • Node.js compatibility gaps — Most things work, but if you're using native Node modules like sharp directly, you'll need workarounds. Cloudflare Images can handle optimization instead.
  • Some Next.js features are unsupported — As of early 2025, partial prerendering support is still experimental on Cloudflare.

For content-heavy sites and marketing pages, Cloudflare Workers is incredibly compelling. For complex web apps with heavy server-side logic, I'd still lean toward AWS or Docker.
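To make the caching story concrete, here's a tiny read-through cache over a KV-like store — KvLike is a stand-in for the real KVNamespace binding from wrangler.toml, and the function names are mine, not the adapter's:

```typescript
// Read-through page cache sketch: return the cached render if present,
// otherwise render, store, and return. MemoryKv stands in for Workers KV.
interface KvLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

class MemoryKv implements KvLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string) { this.store.set(key, value); }
}

async function cachedRender(
  kv: KvLike,
  path: string,
  render: () => Promise<string>,
): Promise<string> {
  const hit = await kv.get(`page:${path}`);
  if (hit !== null) return hit; // cache hit: no CPU spent rendering
  const html = await render();
  await kv.put(`page:${path}`, html);
  return html;
}

const kv = new MemoryKv();
cachedRender(kv, "/pricing", async () => "<html>rendered</html>")
  .then((html) => console.log(html));
```

Every hit served this way is CPU time you don't burn against the Workers limit, which is why ISR-heavy sites fit the platform so well.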

Deployment Target: VPS with Docker

Sometimes you just want a server. No Lambda functions, no edge runtime, no 47-service architecture diagram. A box that runs your code. I respect that.

The Dockerfile

Here's the production Dockerfile I use. It's multi-stage, optimized, and actually works:

# Stage 1: Dependencies
FROM node:20-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN corepack enable pnpm && pnpm install --frozen-lockfile

# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

ENV NEXT_TELEMETRY_DISABLED=1
ENV NODE_ENV=production

RUN corepack enable pnpm && pnpm build

# Stage 3: Production
FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"

CMD ["node", "server.js"]

Critical: you need output: 'standalone' in your next.config.js:

/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',
};

module.exports = nextConfig;

VPS Recommendations

I've run this setup on several providers:

| Provider | Spec | Monthly Cost | Notes |
|---|---|---|---|
| Hetzner CAX21 | 4 vCPU ARM, 8GB RAM | €7.49 (~$8) | Best value, EU datacenters |
| DigitalOcean Droplet | 2 vCPU, 4GB RAM | $24 | Good US coverage |
| Fly.io (machine) | 2 vCPU, 4GB RAM | ~$30 | Auto-scaling, global regions |
| Railway | Usage-based | $5-50 | Easiest setup, Vercel-like DX |
| AWS EC2 t4g.medium | 2 vCPU, 4GB RAM | ~$25 | Already on AWS |

For a straightforward Docker deployment, Hetzner is absurdly good value. I run a Next.js app serving 2M+ pageviews/month on a €7.49 Hetzner ARM instance behind Cloudflare's free CDN tier. The server barely breaks a sweat.

What You Lose with Docker/VPS

Let's be honest about what next start on a VPS doesn't give you compared to Vercel or the SST setup:

  • ISR revalidation is basic — File-system cache only. No distributed cache across multiple instances. If you're running a single server, this is fine. Multi-server? You need Redis or a shared cache layer.
  • No edge middleware — Middleware runs in-process, which is totally fine for most use cases.
  • Image optimization — Works via Sharp, but you're serving optimized images from your single origin. Put Cloudflare or a CDN in front.
  • No atomic deployments — You need to handle zero-downtime deploys yourself (Docker Compose with health checks, or a reverse proxy like Caddy/Traefik).
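On that first ISR point: if you do scale past one server, Next.js lets you replace the file-system cache with a shared one via the cacheHandler option. A minimal sketch of the handler shape — the Map here stands in for a Redis client, and the naming is mine:

```typescript
// Sketch of a shared ISR cache handler for multi-server deployments.
// In production the Map would be a Redis client shared by all instances;
// swap the Map operations for redis get/set/del calls.
type Ctx = { tags?: string[] };

class SharedCacheHandler {
  private store = new Map<string, { value: unknown; tags: string[] }>();

  async get(key: string) {
    return this.store.get(key)?.value ?? null;
  }

  async set(key: string, value: unknown, ctx: Ctx = {}) {
    this.store.set(key, { value, tags: ctx.tags ?? [] });
  }

  // Evict every entry carrying the tag, so all instances re-render it.
  async revalidateTag(tag: string) {
    for (const [key, entry] of this.store) {
      if (entry.tags.includes(tag)) this.store.delete(key);
    }
  }
}
```

Wire it up with cacheHandler: require.resolve('./cache-handler.js') and cacheMaxMemorySize: 0 in next.config.js so the default in-memory cache doesn't shadow the shared one.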

For most apps at Social Animal, especially the headless CMS builds we do through our headless CMS development work, a single VPS with a CDN in front is perfectly adequate.

Cost Comparison: Vercel vs Self-Hosted

Let's talk money. This is based on real billing data from a Next.js app doing ~5M requests/month with ISR, image optimization, and moderate server-side rendering.

| Cost Factor | Vercel Pro | Vercel Enterprise | AWS/SST | Cloudflare | Hetzner VPS |
|---|---|---|---|---|---|
| Base platform | $20/user/mo | Custom (~$3k+/mo) | $0 | $5/mo | €7.49/mo |
| Compute/requests | $150-400/mo | Included | $40-80/mo | $0-15/mo | Included |
| Bandwidth (100GB) | Included | Included | $8.50 (CloudFront) | Included | Included |
| Image optimization | $50-200/mo | Included | $5-15/mo (Lambda) | $5/mo (CF Images) | Included (Sharp) |
| ISR/Cache | Included | Included | $2-5/mo (DynamoDB) | $0-5/mo (KV) | $0 |
| Estimated Total | $300-700/mo | $3,000+/mo | $55-110/mo | $10-25/mo | $8-15/mo |

Those Vercel numbers aren't hypothetical. I've seen the invoices. The per-seat pricing, function execution overages, and bandwidth charges on Pro tier add up fast for teams of 5+.

The AWS/SST numbers assume moderate traffic with provisioned concurrency. Cloudflare's pricing is genuinely wild — it's hard to spend real money there unless you're doing something exotic.

When to Leave Vercel

Don't leave just because you can. Leave because you should. Here's my framework:

Stay on Vercel if:

  • Your team is small (1-3 devs) and developer time is your most expensive resource
  • You're spending under $100/month on Vercel
  • You don't have anyone who enjoys infrastructure work
  • You're iterating fast and need instant previews for every PR
  • You're using Vercel-specific features like Analytics, Speed Insights, or Vercel AI SDK integrations

Leave Vercel if:

  • Monthly bill exceeds $500 and growing
  • You need infrastructure in specific regions for compliance (GDPR, data residency)
  • You're already running significant AWS/GCP/Cloudflare infrastructure
  • Cold starts on serverless are unacceptable for your use case
  • You need custom caching strategies that don't fit Vercel's model
  • You've hit Vercel's function size limits or execution time limits

Seriously consider leaving if:

  • You're on Vercel Enterprise pricing and the contract renewal just came in
  • Your app is mostly static/ISR and you're paying dynamic SSR prices
  • You want to run your frontend alongside your backend in the same infrastructure

The Migration Playbook

I've done this three times now. Here's the process I follow, refined through painful experience.

Step 1: Audit Your Next.js Features

Before you touch anything, catalog what Next.js features you actually use:

# Find middleware
find . -name "middleware.ts" -o -name "middleware.js"

# Find API routes
find ./app -name "route.ts" -o -name "route.js" | head -20

# Check for ISR
grep -r "revalidate" ./app --include="*.ts" --include="*.tsx" | head -20

# Check for server actions
grep -r "use server" ./app --include="*.ts" --include="*.tsx" | head -20

# Check next.config for special features
cat next.config.js

Step 2: Choose Your Target

Based on the audit:

  • Heavy ISR + middleware + image optimization → AWS/SST
  • Simple SSR + content site → Cloudflare or VPS
  • Already have Docker/K8s infrastructure → VPS/Docker
  • Need it done by Friday → Docker on Railway or Fly.io

If you're building with Next.js or Astro, the target platform choice significantly impacts your architecture decisions.

Step 3: Set Up CI/CD

Vercel's CI/CD is genuinely great. You'll miss it. Replicate it with GitHub Actions:

# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 9

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'

      - run: pnpm install --frozen-lockfile
      - run: pnpm build
      - run: pnpm test

      # For SST:
      - run: npx sst deploy --stage production
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      # For Docker/VPS (alternative):
      # - run: docker build -t myapp .
      # - run: docker push registry.example.com/myapp:latest
      # - run: ssh deploy@server 'cd /app && docker compose pull && docker compose up -d'

Step 4: Preview Deployments

This is the one thing people miss most from Vercel. For SST, use stages:

# In your PR CI workflow
npx sst deploy --stage pr-${{ github.event.pull_request.number }}

For Docker, tools like Coolify (self-hosted) or Railway handle preview deployments well.

Step 5: DNS Cutover

The actual migration moment. I always recommend:

  1. Deploy to new infrastructure alongside Vercel
  2. Test thoroughly with a staging domain
  3. Lower DNS TTL to 60 seconds a day before
  4. Cut DNS during low-traffic hours
  5. Keep Vercel deployment running for 48 hours as fallback
  6. Monitor error rates, TTFB, and Core Web Vitals closely

Step 6: Tear Down Vercel

Once you're confident (give it at least a week), cancel the Vercel subscription and remove the project. Don't leave zombie projects racking up charges.

Common Pitfalls and How to Avoid Them

Environment variables disappearing. Next.js has NEXT_PUBLIC_ prefixed vars (bundled at build time) and server-only vars (available at runtime). On Vercel, this distinction is somewhat blurred. On self-hosted, it's strict. Make sure all NEXT_PUBLIC_ vars are available at build time in your CI.
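A cheap guard I've started adding: fail the build early if a required inlined variable is missing, rather than shipping a bundle full of undefined. A sketch — the variable names are examples, list your own:

```typescript
// Build-time guard: NEXT_PUBLIC_ vars are inlined into the client bundle
// at build time, so a missing one silently ships as undefined. Run this
// before `next build` (e.g. a "prebuild" script in package.json).
function assertBuildEnv(
  required: string[],
  env: Record<string, string | undefined> = process.env,
): void {
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing build-time env vars: ${missing.join(", ")}`);
  }
}

// Example invocation with an explicit env so the sketch runs standalone.
assertBuildEnv(["NEXT_PUBLIC_API_URL"], {
  NEXT_PUBLIC_API_URL: "https://api.example.com",
});
```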

ISR cache not persisting. On Docker, the .next/cache directory needs to be on a persistent volume. Otherwise, every container restart loses your cached pages:

# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - next-cache:/app/.next/cache

volumes:
  next-cache:

Sharp installation failures. The sharp image optimization library needs platform-specific binaries. In Docker, make sure you're installing dependencies inside the same architecture as your runtime. The Dockerfile above handles this by using multi-stage builds with the same base image.

Middleware behavior differences. Vercel runs middleware on their edge network. On AWS/SST, middleware that runs as a CloudFront Function is severely constrained (10KB code size, roughly 1ms of compute), so anything non-trivial ends up routed through the server function or Lambda@Edge instead. Complex middleware might still need refactoring — I've had to rework auth middleware because of these limits.

Missing headers and rewrites. If you were relying on vercel.json for headers, redirects, or rewrites, you need to move these to next.config.js or your CDN/reverse proxy configuration.
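For reference, here's roughly what a migrated redirect and a couple of security headers look like in next.config.js — the paths and header values are examples, not a recommendation:

```javascript
// next.config.js — redirects and headers that previously lived in
// vercel.json move here (or into your CDN/reverse proxy config).
/** @type {import('next').NextConfig} */
const nextConfig = {
  async redirects() {
    return [
      // vercel.json "redirects" entries map almost 1:1 to this shape
      { source: "/old-blog/:slug", destination: "/blog/:slug", permanent: true },
    ];
  },
  async headers() {
    return [
      {
        source: "/(.*)", // apply to every route
        headers: [
          { key: "X-Frame-Options", value: "DENY" },
          { key: "X-Content-Type-Options", value: "nosniff" },
        ],
      },
    ];
  },
};

module.exports = nextConfig;
```

If you're putting Cloudflare or CloudFront in front of a VPS, it's often cleaner to set cache and security headers at the CDN layer instead and keep next.config.js minimal.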

If any of this feels overwhelming, that's exactly the kind of infrastructure work we handle at Social Animal. Check our pricing or reach out — we've done these migrations enough times to have a refined process.

FAQ

Is OpenNext production-ready in 2025? Yes. OpenNext v3.x is running production workloads for thousands of companies. The SST/AWS path is the most battle-tested, with Cloudflare support close behind. I wouldn't call Google Cloud or bare Kubernetes support mature yet, but AWS and Cloudflare are solid.

Does OpenNext support Next.js App Router and Server Components? Full support. App Router, Server Components, Server Actions, streaming, and Suspense all work. The OpenNext team tracks Next.js releases closely, though there's typically a 1-3 week lag after major Next.js versions before OpenNext catches up.

How much can I actually save by leaving Vercel? It depends heavily on your usage patterns. For a team of 5 developers running a moderate-traffic app, I've seen teams go from $600-800/month on Vercel Pro to $30-80/month on AWS/SST or under $20/month on a VPS. The savings are real but so is the additional maintenance burden.

Can I use ISR (Incremental Static Regeneration) without Vercel? Absolutely. On AWS/SST, ISR uses DynamoDB for the tag cache and SQS for async revalidation — it's fully functional including on-demand revalidation via revalidateTag() and revalidatePath(). On a VPS, ISR works with the filesystem cache, which is fine for single-server deployments.

What about Vercel's preview deployments? Can I replicate those? You can get 80% of the experience. SST supports stage-based deployments, so each PR can get its own stack. Coolify and similar tools offer preview deployments for Docker-based setups. What you won't easily replicate is Vercel's visual commenting system and the tight GitHub integration for deployment status. Most teams find the tradeoff acceptable.

Should I use OpenNext with Cloudflare or AWS for a headless CMS site? For content-heavy headless CMS sites (Sanity, Contentful, Storyblok), Cloudflare Workers is an excellent choice. These sites tend to be ISR-heavy with relatively light server-side logic — perfect for Cloudflare's pricing model. I'd only go AWS if you need features that Cloudflare doesn't yet support or if you're already deep in the AWS ecosystem.

Is self-hosting Next.js harder than self-hosting Astro or Remix? Honestly? Yes. Next.js has the most complex build output of any framework because of features like ISR, middleware, image optimization, and partial prerendering. Astro and Remix have much simpler deployment stories. If you're starting a new project and self-hosting is a priority, consider Astro — it's dramatically simpler to host. But if you're already on Next.js, OpenNext makes migration practical.

What happens if OpenNext stops being maintained? OpenNext is backed by SST and has an active community with major sponsors. That said, this is a legitimate concern for any open-source dependency. The mitigation is that the Docker/standalone approach (next start) works without OpenNext at all — you just lose some of the more advanced features like ISR tag revalidation and edge middleware. It's a graceful degradation, not a cliff.