Multi-Tenant Next.js with Supabase RLS: Production Guide
Why Supabase RLS for Multi-Tenancy
Let's face it: when it comes to handling multi-tenancy in SaaS apps, you've got options. You can spin up a separate database per tenant, which sounds like organizational nirvana but is expensive and downright painful to manage. Or, you can try using separate schemas, which is less of a hassle operationally but still not a walk in the park when it comes to migrations. But then there's the darling of the SaaS world—shared tables with row-level filtering. Supabase makes this approach a breeze thanks to its PostgreSQL-native Row Level Security (RLS).
Why does that even matter? Simple. Your data filtering happens at the database level. If you screw up a WHERE clause in your Next.js API route, you're not up at night worrying about data breaches, because the database itself is your safety net. And really, in this day and age, that's not a luxury; it's a necessity.
But let's not kid ourselves. RLS adds overhead to your queries, complicates debugging, and can trip you up during migrations. So, how do different multi-tenancy approaches stack up?
| Approach | Isolation Level | Cost | Operational Complexity | Query Performance |
|---|---|---|---|---|
| Database per tenant | Complete | High ($50-200/tenant/mo) | Very High | Best |
| Schema per tenant | Strong | Medium | High (migrations) | Good |
| Shared tables + RLS | Row-level | Low | Medium | Good (with caveats) |
| Application-level filtering | None | Lowest | Low | Best |
For most SaaS products with fewer than 10,000 tenants, shared tables with RLS gives you the best bang for your buck. It's what we're diving into here.

Architecture Patterns: Shared vs Isolated
Before even thinking about writing code, you've got to pick your tenant resolution strategy. In the wild, you'll mostly run into two beasts:
Subdomain-Based Tenancy
Ever seen tenant-slug.yourapp.com? Welcome to the most common pattern for B2B SaaS. It's slick, professional, and makes tenant resolution in middleware a walk in the park.
Path-Based Tenancy
This one's your basic /org/tenant-slug/dashboard. Easier to set up since there's no wildcard DNS, and it works on platforms like Vercel without custom domains. But let's be honest: it feels a bit like wearing socks with sandals. We usually recommend subdomain-based for production B2B apps and path-based for internal tools or MVPs. Switch later? You'll curse your past self—changing these patterns is no joke.
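To make the trade-off concrete, here's a minimal sketch of both resolution strategies as pure functions. The names (getTenantFromHost, getTenantFromPath) and the yourapp.com root domain are illustrative, not gospel; adapt them to your own setup.

```typescript
// Subdomains that should never resolve to a tenant (assumption: adjust for your app)
const RESERVED_SUBDOMAINS = new Set(["app", "www", "api"]);

// Subdomain-based: "acme.yourapp.com" -> "acme"
function getTenantFromHost(host: string): string | null {
  const [subdomain, ...rest] = host.split(".");
  // No subdomain at all (localhost:3000, yourapp.com) or a reserved one -> no tenant
  if (rest.length < 2 || RESERVED_SUBDOMAINS.has(subdomain)) return null;
  return subdomain;
}

// Path-based: "/org/acme/dashboard" -> "acme"
function getTenantFromPath(pathname: string): string | null {
  const match = pathname.match(/^\/org\/([^/]+)/);
  return match ? match[1] : null;
}
```

Either function slots into middleware; the point is that the tenant slug is derived from the request, never trusted from the client body.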
Setting Up the Tenant Schema
Here's a schema pattern that hasn't let us down in three different production rollouts:
-- Core tenant table
CREATE TABLE organizations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
slug TEXT UNIQUE NOT NULL,
plan TEXT NOT NULL DEFAULT 'free',
created_at TIMESTAMPTZ DEFAULT now(),
settings JSONB DEFAULT '{}'
);
-- Membership junction table
CREATE TABLE memberships (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES auth.users(id) ON DELETE CASCADE,
org_id UUID REFERENCES organizations(id) ON DELETE CASCADE,
role TEXT NOT NULL DEFAULT 'member' CHECK (role IN ('owner', 'admin', 'member', 'viewer')),
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(user_id, org_id)
);
-- Example tenant-scoped table
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
org_id UUID REFERENCES organizations(id) ON DELETE CASCADE NOT NULL,
name TEXT NOT NULL,
description TEXT,
created_by UUID REFERENCES auth.users(id),
created_at TIMESTAMPTZ DEFAULT now()
);
-- Index on org_id — you'll need this on EVERY tenant-scoped table
CREATE INDEX idx_projects_org_id ON projects(org_id);
CREATE INDEX idx_memberships_user_id ON memberships(user_id);
CREATE INDEX idx_memberships_org_id ON memberships(org_id);
The memberships table is the glue holding everything together. All your RLS policies will point to it like it's their favorite cousin. Users can join multiple organizations, and their roles dictate what they can or can't do. And here's a little nugget of wisdom: always (seriously, always) index org_id on every tenant-scoped table. Otherwise, watch your queries crawl like molasses once you're swimming in data. We were blindsided by this when a client's dashboard queries ballooned from 50ms to 8 seconds at 100,000 rows. Lesson learned.
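Since the role column drives permissions everywhere, it helps to mirror that hierarchy in application code for UI gating (hiding a Delete button, say). A small sketch; the function name and numeric ranks are our own invention, and the database-side RLS checks remain the real enforcement:

```typescript
// The four roles from the memberships CHECK constraint, ranked.
// This mirrors (but never replaces) the RLS checks in the database.
type Role = "owner" | "admin" | "member" | "viewer";

const ROLE_RANK: Record<Role, number> = {
  owner: 3,
  admin: 2,
  member: 1,
  viewer: 0,
};

// True if userRole grants at least the privileges of `required`
function hasAtLeastRole(userRole: Role, required: Role): boolean {
  return ROLE_RANK[userRole] >= ROLE_RANK[required];
}
```

So deleting a project would gate on hasAtLeastRole(role, "owner"), matching the DELETE policy we define below; creating one gates on "admin".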
RLS Policies That Actually Scale
Here's where tutorials typically bow out, leaving you stranded. They throw auth.uid() = user_id at you and say, "Good luck!" But multi-tenant RLS can't be boiled down like that.
The Helper Function Pattern
Why clutter every policy with membership checks? Use a helper function instead:
-- Helper: check if current user is a member of an org
CREATE OR REPLACE FUNCTION public.is_member_of(org UUID)
RETURNS BOOLEAN AS $$
SELECT EXISTS (
SELECT 1 FROM memberships
WHERE user_id = auth.uid()
AND org_id = org
);
$$ LANGUAGE sql SECURITY DEFINER STABLE;
-- Helper: get user's role in an org
CREATE OR REPLACE FUNCTION public.get_role_in(org UUID)
RETURNS TEXT AS $$
SELECT role FROM memberships
WHERE user_id = auth.uid()
AND org_id = org
LIMIT 1;
$$ LANGUAGE sql SECURITY DEFINER STABLE;
Why SECURITY DEFINER? Because the function runs with the creator's privileges, skipping RLS on the memberships table. Without this, you risk falling into a circular dependency rabbit hole where RLS on memberships crashes the membership checks other tables rely on.
And the STABLE part? It signals to the query planner that the function's output remains consistent for the same input during a single query, enabling some nice caching benefits. Tempted to use IMMUTABLE? Don't. Membership can flip between transactions.
Policies for Tenant-Scoped Tables
Let's look at some policies for our projects table:
-- Enable RLS
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
-- SELECT: members can view projects in their orgs
CREATE POLICY "Members can view org projects"
ON projects FOR SELECT
USING (public.is_member_of(org_id));
-- INSERT: admins and owners can create projects
CREATE POLICY "Admins can create projects"
ON projects FOR INSERT
WITH CHECK (
public.get_role_in(org_id) IN ('owner', 'admin')
);
-- UPDATE: admins and owners can update projects
CREATE POLICY "Admins can update projects"
ON projects FOR UPDATE
USING (public.is_member_of(org_id))
WITH CHECK (
public.get_role_in(org_id) IN ('owner', 'admin')
);
-- DELETE: only owners can delete projects
CREATE POLICY "Owners can delete projects"
ON projects FOR DELETE
USING (
public.get_role_in(org_id) = 'owner'
);
Policies for the Memberships Table Itself
This one's tricky. The memberships table needs its own RLS, but there's a trap: a policy on memberships that subqueries memberships blows up at query time with PostgreSQL's "infinite recursion detected in policy" error. RLS applies to subqueries too, including subqueries on the very table the policy protects. This is exactly why our helper functions are SECURITY DEFINER: they read memberships with RLS switched off, which breaks the cycle. Lean on them here as well:
ALTER TABLE memberships ENABLE ROW LEVEL SECURITY;
-- Users can see memberships in orgs they belong to
CREATE POLICY "Users can view org memberships"
ON memberships FOR SELECT
USING (public.is_member_of(org_id));
-- Only owners can add members
CREATE POLICY "Owners can add members"
ON memberships FOR INSERT
WITH CHECK (public.get_role_in(org_id) = 'owner');
If you've seen tutorials write these policies with a raw subquery like org_id IN (SELECT org_id FROM memberships WHERE user_id = auth.uid()), know that it works right up until RLS is enabled on memberships, and then every query hits the recursion error. Test this path specifically; you don't want to discover it in production.

Next.js Middleware for Tenant Resolution
With Next.js 15 and the spiffy App Router, middleware running at the edge is the perfect place for tenant resolution. Here's our trusty pattern for subdomain-based setups:
// middleware.ts
import { createServerClient } from '@supabase/ssr';
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
const PUBLIC_ROUTES = ['/login', '/signup', '/invite'];
export async function middleware(request: NextRequest) {
const hostname = request.headers.get('host') || '';
const currentHost = hostname.split('.')[0];
// Skip for main domain and localhost
const isMainDomain = currentHost === 'app' || currentHost === 'www' || currentHost === 'localhost:3000';
let response = NextResponse.next({
request: { headers: request.headers },
});
const supabase = createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
getAll() {
return request.cookies.getAll();
},
setAll(cookiesToSet) {
cookiesToSet.forEach(({ name, value, options }) => {
request.cookies.set(name, value);
response.cookies.set(name, value, options);
});
},
},
}
);
const { data: { user } } = await supabase.auth.getUser();
if (!isMainDomain) {
response.headers.set('x-tenant-slug', currentHost);
if (!user && !PUBLIC_ROUTES.some(r => request.nextUrl.pathname.startsWith(r))) {
return NextResponse.redirect(new URL('/login', request.url));
}
}
return response;
}
export const config = {
matcher: ['/((?!_next/static|_next/image|favicon.ico|api/webhooks).*)'],
};
The x-tenant-slug header is pure gold. Use it to let your Server Components and API routes know which tenant they're dealing with. If you're collaborating with us on a Next.js project, setting this up is our day one priority.
Authentication Flow in Multi-Tenant Apps
Supabase Auth plays it neutral in the multi-tenancy game. Users exist in a global sphere—tenant relations are your puzzle to solve. Here’s our game plan:
- User signs up: Create an auth user, build an organization, and conjure up a membership with an 'owner' role.
- User is invited: The admin draws up a pending invite, a new user joins via the invite link, and poof—a membership appears with the specified role.
- User logs in: Extract tenant from subdomain, confirm membership, escort them to their dashboard.
// app/api/auth/signup/route.ts
import { createClient } from '@/lib/supabase/server';
// Server-only module exporting a service-role client (path is illustrative)
import { createAdminClient } from '@/lib/supabase/admin';
import { NextResponse } from 'next/server';
export async function POST(request: Request) {
const { email, password, orgName, orgSlug } = await request.json();
const supabase = await createClient();
// Sign up the user
const { data: authData, error: authError } = await supabase.auth.signUp({
email,
password,
});
if (authError) return NextResponse.json({ error: authError.message }, { status: 400 });
// Use a service role client for org creation (bypasses RLS)
const adminClient = createAdminClient();
const { data: org, error: orgError } = await adminClient
.from('organizations')
.insert({ name: orgName, slug: orgSlug })
.select()
.single();
if (orgError) return NextResponse.json({ error: orgError.message }, { status: 400 });
// Create ownership membership
await adminClient
.from('memberships')
.insert({
user_id: authData.user!.id,
org_id: org.id,
role: 'owner',
});
return NextResponse.json({ org });
}
Notice we rely on a service role client during signup. The user hasn't got any memberships yet, so RLS would block them from creating an organization. It's one of those classic bootstrapping issues; your service role key is the magic wand. (In production, also handle the failure path: if org creation fails on a slug collision, clean up the orphaned auth user.)
And I can't stress this enough: Never, ever expose your service role key to the client. It's strictly for server-side code.
Server Components and RLS: The SSR Problem
Next.js 15's Server Components are server-bound, upping the security game. But there's a hiccup when using Supabase RLS: you have to provide the user's session to the Supabase client for RLS policies to know who’s at the table.
// lib/supabase/server.ts
import { createServerClient } from '@supabase/ssr';
import { cookies } from 'next/headers';
export async function createClient() {
const cookieStore = await cookies();
return createServerClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
{
cookies: {
getAll() {
return cookieStore.getAll();
},
setAll(cookiesToSet) {
try {
cookiesToSet.forEach(({ name, value, options }) =>
cookieStore.set(name, value, options)
);
} catch {
// This can fail in Server Components (read-only)
// The middleware handles cookie refreshing
}
},
},
}
);
}
// app/[orgSlug]/projects/page.tsx
import { createClient } from '@/lib/supabase/server';
import { headers } from 'next/headers';
export default async function ProjectsPage() {
const supabase = await createClient();
const headersList = await headers();
const tenantSlug = headersList.get('x-tenant-slug');
// Get the org ID from slug
const { data: org } = await supabase
.from('organizations')
.select('id')
.eq('slug', tenantSlug)
.single();
if (!org) return <div>Organization not found</div>;
// RLS automatically filters — only returns projects
// where the current user has membership
const { data: projects } = await supabase
.from('projects')
.select('*')
.eq('org_id', org.id)
.order('created_at', { ascending: false });
return (
<div>
{projects?.map(project => (
<ProjectCard key={project.id} project={project} />
))}
</div>
);
}
Here’s the kicker: even if someone fudges the org_id in the request, RLS won’t budge. It blocks access to projects unless the user is a member. Technically, .eq('org_id', org.id) is redundant for security—RLS handles that—but it's good for performance and readability.
Performance Optimization and Common Pitfalls
The N+1 RLS Query Problem
Every RLS policy check runs a subquery. Fetch 100 rows and the policy can fire once per row, which means up to 100 membership lookups. Luckily, PostgreSQL's planner is smart enough to avoid some of the repeated work, but there's overhead.
Fix: Use STABLE on helper functions (like we outlined). Also, think about denormalizing org_id into the JWT claims:
-- Custom JWT hook (Supabase Dashboard > Auth > Hooks)
CREATE OR REPLACE FUNCTION public.custom_access_token_hook(event JSONB)
RETURNS JSONB AS $$
DECLARE
org_ids UUID[];
BEGIN
SELECT array_agg(org_id) INTO org_ids
FROM memberships
WHERE user_id = (event->>'user_id')::UUID;
event := jsonb_set(
event,
'{claims,org_ids}',
-- coalesce matters: jsonb_set returns NULL when the new value is NULL,
-- which would nuke the whole token for users with zero memberships
coalesce(to_jsonb(org_ids), '[]'::jsonb)
);
RETURN event;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
Then your RLS policy becomes:
CREATE POLICY "Members can view"
ON projects FOR SELECT
USING (
org_id = ANY(
(SELECT array(SELECT jsonb_array_elements_text(
auth.jwt()->'org_ids'
))::UUID[])
)
);
This squashes the membership table lookup entirely. The org IDs hail straight from the JWT. Caveat: claims are stamped when the token is issued. Change someone's membership and the new claims only land on the next access token refresh (hourly by default) or re-login. Typically, that's totally manageable; just keep it in your docs.
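Once org_ids ride along in the token, application code can read them too, say, to decide which org-switcher entries to render. A hedged sketch that decodes a JWT payload without verifying it (verification stays Supabase's job, and RLS stays the enforcement layer); getOrgIdsFromJwt is our own illustrative name:

```typescript
// Read the custom org_ids claim from a JWT's payload segment.
// NOTE: this only *reads* claims for UI purposes. It does not verify
// the signature, so never use it for authorization decisions.
function getOrgIdsFromJwt(token: string): string[] {
  const payloadB64 = token.split(".")[1];
  if (!payloadB64) return []; // not a JWT shape
  const json = Buffer.from(payloadB64, "base64url").toString("utf8");
  const claims = JSON.parse(json);
  return Array.isArray(claims.org_ids) ? claims.org_ids : [];
}
```

Handy in a Server Component for rendering navigation without an extra round trip to memberships.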
Connection Pooling
Supabase dishes out connection pooling through PgBouncer. If you’re going live with Next.js on Vercel, remember: use the pooler URL for API routes and server components.
# For regular operations (pooled)
DATABASE_URL=postgres://user:pass@db.project.supabase.co:6543/postgres
# For migrations only (direct)
DIRECT_URL=postgres://user:pass@db.project.supabase.co:5432/postgres
Anyone on Supabase's Pro for $25 per month gets 200 concurrent connections via the pooler. For most SaaS apps shy of 1000 users concurrently, it’s more than enough.
Indexes You Absolutely Need
Here's the brute-force index set for a multi-tenant setup:
-- On every tenant-scoped table
CREATE INDEX idx_{table}_org_id ON {table}(org_id);
-- Composite indexes for common queries
CREATE INDEX idx_projects_org_created ON projects(org_id, created_at DESC);
-- Memberships — heavily queried by RLS
CREATE INDEX idx_memberships_user_org ON memberships(user_id, org_id);
CREATE INDEX idx_memberships_org_role ON memberships(org_id, role);
EXPLAIN ANALYZE—a developer's best friend. See how your queries fare with RLS aboard. You might get a rude awakening on what the planner decides to do sans the right indexes.
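One wrinkle: run EXPLAIN as the postgres superuser and the RLS quals may not even show up in the plan, because superusers and table owners bypass RLS. A sketch of running it under the authenticated role instead (swap in real UUIDs):

```sql
BEGIN;
-- Impersonate a real user so RLS policies are actually applied
SET LOCAL request.jwt.claims = '{"sub": "user-uuid-here", "role": "authenticated"}';
SET LOCAL role = 'authenticated';
EXPLAIN ANALYZE SELECT * FROM projects WHERE org_id = 'org-uuid-here';
ROLLBACK;
```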
Testing RLS Policies
Everyone skips over this, yet it’s your best safety net against data leaks. We test RLS policies straight in SQL:
-- Test as a specific user
SET request.jwt.claims = '{"sub": "user-uuid-here", "role": "authenticated"}';
SET role = 'authenticated';
-- This should return only projects the user has access to
SELECT * FROM projects;
-- This should fail (user not a member of this org)
INSERT INTO projects (org_id, name) VALUES ('other-org-uuid', 'Sneaky Project');
-- Reset both the role and the fake claims
RESET role;
RESET request.jwt.claims;
And let’s not forget pgTAP for critical policies:
BEGIN;
SELECT plan(3);
-- Set up test context as user A (member of org 1)
SET LOCAL request.jwt.claims = '{"sub": "user-a-uuid"}';
SET LOCAL role = 'authenticated';
SELECT is(
(SELECT count(*) FROM projects WHERE org_id = 'org-1-uuid')::INTEGER,
5,
'User A sees 5 projects in their org'
);
SELECT is(
(SELECT count(*) FROM projects WHERE org_id = 'org-2-uuid')::INTEGER,
0,
'User A sees 0 projects in other org'
);
SELECT throws_ok(
$$INSERT INTO projects (org_id, name) VALUES ('org-2-uuid', 'Hack')$$,
'new row violates row-level security policy for table "projects"',
'User A cannot insert into other org'
);
SELECT * FROM finish();
ROLLBACK;
Run these in CI. Every migration playing with RLS policies should send the full test suite through a vigorous workout.
Production Deployment Checklist
Ready to ship? Brace yourself with this:
- RLS enabled on every table cradling tenant data
- Service role key kept strictly server-side, nowhere near a client
- org_id properly indexed on all tenant-scoped tables
- Membership helper functions knighted as SECURITY DEFINER and STABLE
- JWT custom claims locked and loaded (if you're on the JWT route)
- Connection pooling hooked up for your cloud deployment
- RLS policies fresh out of QA testing with pgTAP or its kin
- EXPLAIN ANALYZE run on crucial queries with RLS active
- Invite/signup flow not missing any membership bootstraps
- Rate limiting on auth endpoints (Supabase offers baked-in options)
- RLS flipped on for auth schema tables in the Supabase Dashboard (often a landmine)
- Monitoring embedded for slow queries (Supabase Dashboard > Database > Query Performance)
Launching a multi-tenant product and want someone who's waded through these waters before? Our headless CMS development solutions or a quick chat through our contact page might just be what you need.
FAQ
Can I use Supabase RLS for apps with thousands of tenants?
Absolutely. We've piloted shared-table RLS with 5,000+ tenants and millions of rows without breaking a sweat. The secret sauce? Proper indexing on org_id columns and STABLE helper functions. Considering 50,000+ tenants or billion-row extravaganzas? Dive into partitioning tables by org_id or flirt with a schema-per-tenant setup.
How do I handle tenant switching when a user belongs to multiple organizations?
Keep the active organization tucked in a cookie or the URL (subdomain). Swap orgs? Tweak the subdomain or cookie and refetch. Don't tuck the active org into the JWT; it demands a re-login to change. A cookie your middleware can peep at is the way to go.
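A sketch of that cookie-based approach, with a guard so nobody switches into an org they don't belong to. resolveActiveOrg is an illustrative name; the membership slugs would come from a server-side query against memberships:

```typescript
// Pick the active org for a session: honor the requested slug only if
// the user is actually a member, otherwise fall back to their first org.
function resolveActiveOrg(
  requestedSlug: string | undefined,
  membershipSlugs: string[]
): string | null {
  if (requestedSlug && membershipSlugs.includes(requestedSlug)) {
    return requestedSlug;
  }
  // No valid request: default to the first membership, or null if none
  return membershipSlugs[0] ?? null;
}
```

Middleware would call this before setting the active-org cookie, so a tampered cookie value never sticks.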
What happens if I forget to enable RLS on a table?
Every authenticated user could tap into every row. That's PostgreSQL's default stance—no row-level restraints on tables without RLS. Supabase Dashboard flags tables missing RLS, but embedding this in CI with queries to pg_tables and pg_policies helps too.
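That CI guard can be a single catalog query, since pg_tables exposes a rowsecurity flag per table. Fail the build if this returns any rows:

```sql
-- Any public table with RLS disabled is a red flag
SELECT schemaname, tablename
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false;
```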
Should I use Supabase's service role key or cook up a custom PostgreSQL role for admin duties?
Mostly, the service role key suffices. It skirts RLS entirely, so it's your top secret for server-side usage only. Need granular governance (like an "admin" role lurking in all orgs but shy of deletions)? That's custom PostgreSQL territory: advanced, and generally off your radar until complex internal tooling demands it.
How do I run database migrations without tripping over RLS policies?
Supabase’s CLI (supabase db push or supabase migration) alongside the direct database URL (skipping the pooler) has your back. Tuck RLS policy edits into the same migration as schema tweaks. Test migrations against a staging project; Supabase lets you spin up preview branches on Pro for just this sort of thing.
Can RLS policies reach out to data from other APIs or services?
Nope. RLS policies sit snug in SQL, evaluated by PostgreSQL. Fancy checking on external data (like a feature flag service)? Sync that data into a database table, then reference it in your policy. A typical pattern is syncing subscription statuses from Stripe to an organizations.plan column.
What's the performance tax of RLS compared to filtering at the application layer?
In our Supabase Pro benchmarks (2 vCPUs, 8GB RAM), RLS slathers an extra 1-3ms per query for basic membership-check policies with the right indexes. Go wild with policy complexity or joins and you might add 5-15ms. The JWT claims tactic (storing org_ids in the token) slices it below 1ms since there's no subquery dance. For typical web apps, that trickle of latency is negligible.
How does this work with Supabase Realtime subscriptions?
Supabase Realtime plays by the RLS rulebook. Tune in to table changes and catch only events of rows you’re eligible to see according to RLS. This rolls out of the box with zero extra tinkering. Just be sure your client-side Supabase has the user session, which @supabase/ssr seamlessly handles.