I've built three auction systems over the past two years. The first one was a mess — polling the database every second, race conditions everywhere, bids disappearing into the void. The second was better but required managing a separate WebSocket server alongside the main API. The third? That's the one I'm going to walk you through. It uses Supabase Realtime, and it's the first time building a bidding engine actually felt right.

Supabase Realtime sits on top of PostgreSQL's Write-Ahead Log (WAL) and uses an Elixir-based server to push database changes over WebSockets to connected clients. For an auction system, this means every bid that hits your database instantly propagates to every bidder watching that auction. No polling. No separate pub/sub infrastructure. Your database is your event system.

Let's build one from scratch.

Architecture Overview

Before writing any code, let's understand what we're building and how the pieces fit together.

Supabase Realtime gives you three primitives that map perfectly onto auction requirements:

  • Postgres Changes: Subscribe to INSERT, UPDATE, and DELETE events on your bid and auction tables. When someone places a bid, every subscriber gets the new row data within milliseconds.
  • Broadcast: Send ephemeral messages to channel participants. Perfect for "you've been outbid" notifications that don't need to be persisted.
  • Presence: Track who's currently watching an auction. This lets you show "14 bidders watching" in your UI and detect ghost sessions.

The data flow looks like this:

  1. Bidder submits a bid through your frontend
  2. An RPC call or direct insert hits your bids table
  3. A PostgreSQL trigger validates the bid amount and updates auctions.current_high_bid
  4. Supabase Realtime picks up the WAL change and pushes it to all subscribers on that auction's channel
  5. A second trigger fires a Broadcast event to notify the previous high bidder they've been outbid
  6. Every connected client updates their UI in real time

The latency from bid placement to UI update across all clients is typically under 100ms. I've measured p99 at around 80-90ms in production on Supabase's Pro tier.

Why Not Just Use Polling?

I know some of you are thinking "can't I just poll every 500ms?" You can. But at 200 concurrent bidders on a single auction, that's 400 requests per second hitting your database for one auction. Multiply that by 50 active auctions and you're at 20,000 queries per second — most of which return nothing new. WebSockets flip this model: zero queries when nothing changes, instant updates when something does.
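To make the polling math concrete, here is the back-of-envelope calculation as a tiny helper (illustrative numbers only, not a benchmark):

```javascript
// Back-of-envelope load estimate for polling.
// All inputs are illustrative; plug in your own traffic numbers.
function pollingQps(biddersPerAuction, pollIntervalMs, activeAuctions = 1) {
  const perAuction = biddersPerAuction * (1000 / pollIntervalMs);
  return perAuction * activeAuctions;
}

// 200 bidders polling every 500ms on one auction:
console.log(pollingQps(200, 500));      // 400 queries/sec
// ...across 50 active auctions:
console.log(pollingQps(200, 500, 50));  // 20000 queries/sec
// With WebSockets, steady-state query load is zero until a bid lands.
```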

Database Schema and Setup

Here's the schema I use. It's deliberately simple — you can extend it, but the core structure handles most auction types.

-- Auctions table
CREATE TABLE auctions (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  item_name TEXT NOT NULL,
  description TEXT,
  starting_price DECIMAL(12,2) NOT NULL DEFAULT 0,
  current_high_bid DECIMAL(12,2) DEFAULT 0,
  highest_bidder_id UUID REFERENCES auth.users(id),
  min_increment DECIMAL(12,2) DEFAULT 1.00,
  status TEXT NOT NULL DEFAULT 'active'
    CHECK (status IN ('scheduled', 'active', 'ended', 'sold', 'cancelled')),
  starts_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  ends_at TIMESTAMPTZ NOT NULL DEFAULT NOW() + INTERVAL '30 minutes',
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Bids table
CREATE TABLE bids (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  auction_id UUID NOT NULL REFERENCES auctions(id) ON DELETE CASCADE,
  user_id UUID NOT NULL REFERENCES auth.users(id),
  amount DECIMAL(12,2) NOT NULL,
  placed_at TIMESTAMPTZ DEFAULT NOW(),
  CONSTRAINT positive_amount CHECK (amount > 0)
);

-- Index for fast bid lookups per auction
CREATE INDEX idx_bids_auction_amount ON bids(auction_id, amount DESC);
CREATE INDEX idx_bids_auction_time ON bids(auction_id, placed_at DESC);

-- Critical: enable replica identity for Realtime
ALTER TABLE auctions REPLICA IDENTITY FULL;
ALTER TABLE bids REPLICA IDENTITY FULL;

The REPLICA IDENTITY FULL setting is essential. Without it, Supabase Realtime only gets the primary key on UPDATE and DELETE events — not the full row data. For an auction system, you need the full payload so clients can update bid amounts without making a separate query.

Enabling Realtime Replication

In the Supabase Dashboard, go to Database → Replication and toggle replication on for both the auctions and bids tables. Alternatively, you can do this with SQL:

BEGIN;
  -- Recreate the publication scoped to these two tables.
  -- Careful: DROP removes Realtime replication for any other tables you
  -- previously enabled; use ALTER PUBLICATION ... ADD TABLE to stay additive.
  DROP PUBLICATION IF EXISTS supabase_realtime;

  CREATE PUBLICATION supabase_realtime FOR TABLE auctions, bids;
COMMIT;

Row-Level Security

Don't skip this. RLS is your server-side validation layer.

ALTER TABLE auctions ENABLE ROW LEVEL SECURITY;
ALTER TABLE bids ENABLE ROW LEVEL SECURITY;

-- Anyone can view active or completed auctions
CREATE POLICY "Public auction viewing" ON auctions
  FOR SELECT USING (status IN ('active', 'ended', 'sold'));

-- Authenticated users can view all bids on active auctions
CREATE POLICY "View bids on active auctions" ON bids
  FOR SELECT USING (
    EXISTS (
      SELECT 1 FROM auctions
      WHERE auctions.id = bids.auction_id
      AND auctions.status = 'active'
    )
  );

-- Only authenticated users can place bids
CREATE POLICY "Place bids" ON bids
  FOR INSERT WITH CHECK (
    auth.uid() = user_id
    AND EXISTS (
      SELECT 1 FROM auctions
      WHERE auctions.id = auction_id
      AND auctions.status = 'active'
      AND auctions.ends_at > NOW()
    )
  );

PostgreSQL Triggers for Bid Logic

This is where the real magic happens. The database enforces all bid logic server-side — the client can't cheat.

Bid Validation and Auction Update Trigger

CREATE OR REPLACE FUNCTION process_new_bid()
RETURNS TRIGGER AS $$
DECLARE
  v_auction auctions%ROWTYPE;
BEGIN
  -- Lock the auction row to prevent race conditions
  SELECT * INTO v_auction
  FROM auctions
  WHERE id = NEW.auction_id
  FOR UPDATE;

  -- Validate auction is active
  IF v_auction.status != 'active' THEN
    RAISE EXCEPTION 'Auction is not active';
  END IF;

  -- Validate auction hasn't ended
  IF v_auction.ends_at < NOW() THEN
    RAISE EXCEPTION 'Auction has ended';
  END IF;

  -- Validate bid clears both the starting price and current high + increment
  -- (without the starting_price guard, the first bid could come in below it)
  IF NEW.amount < GREATEST(
    v_auction.starting_price,
    v_auction.current_high_bid + v_auction.min_increment
  ) THEN
    RAISE EXCEPTION 'Bid must be at least % higher than the current high bid of %',
      v_auction.min_increment, v_auction.current_high_bid;
  END IF;

  -- Prevent self-outbidding
  IF v_auction.highest_bidder_id = NEW.user_id THEN
    RAISE EXCEPTION 'You are already the highest bidder';
  END IF;

  -- Update auction with new high bid
  UPDATE auctions
  SET
    current_high_bid = NEW.amount,
    highest_bidder_id = NEW.user_id,
    updated_at = NOW()
  WHERE id = NEW.auction_id;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER validate_and_process_bid
  BEFORE INSERT ON bids
  FOR EACH ROW
  EXECUTE FUNCTION process_new_bid();

That FOR UPDATE lock on the auction row is critical. Without it, two bids arriving simultaneously could both read the same current_high_bid, both pass validation, and both get inserted. The lock serializes access.
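If you want to reject obviously bad bids before a network round trip, a client-side pre-check can mirror the trigger's rules. This is purely advisory (the trigger remains the authority), and `validateBid` is a hypothetical helper, not part of any Supabase API:

```javascript
// Advisory client-side mirror of the server-side trigger checks.
// The PostgreSQL trigger is the source of truth; this just saves a round trip.
function validateBid(auction, userId, amount, now = new Date()) {
  if (auction.status !== 'active') {
    return { ok: false, reason: 'Auction is not active' };
  }
  if (new Date(auction.ends_at) < now) {
    return { ok: false, reason: 'Auction has ended' };
  }
  // Also guards the starting price for the very first bid
  const floor = Math.max(
    auction.starting_price,
    auction.current_high_bid + auction.min_increment
  );
  if (amount < floor) {
    return { ok: false, reason: `Bid must be at least ${floor}` };
  }
  if (auction.highest_bidder_id === userId) {
    return { ok: false, reason: 'You are already the highest bidder' };
  }
  return { ok: true };
}
```

Run it before calling `placeBid`, and still surface the server's error message on rejection, since only the trigger sees the latest state.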

Broadcast Outbid Notifications

This trigger fires after a successful bid and sends an ephemeral notification to the auction channel:

CREATE OR REPLACE FUNCTION notify_outbid()
RETURNS TRIGGER AS $$
DECLARE
  v_previous_bidder UUID;
BEGIN
  -- Find who just got outbid
  SELECT user_id INTO v_previous_bidder
  FROM bids
  WHERE auction_id = NEW.auction_id
    AND id != NEW.id
  ORDER BY amount DESC
  LIMIT 1;

  -- Broadcast outbid notification if there was a previous bidder
  IF v_previous_bidder IS NOT NULL THEN
    PERFORM realtime.send(
      jsonb_build_object(
        'auction_id', NEW.auction_id,
        'new_high', NEW.amount,
        'outbid_user', v_previous_bidder,
        'new_leader', NEW.user_id
      ),
      'outbid',                            -- event name
      'auction:' || NEW.auction_id::text,  -- channel topic
      false                                -- public channel; passing true requires RLS policies on realtime.messages and a private client channel
    );
  END IF;

  RETURN NULL;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

CREATE TRIGGER after_bid_notify
  AFTER INSERT ON bids
  FOR EACH ROW
  EXECUTE FUNCTION notify_outbid();

Client-Side Subscription with JavaScript

Now let's wire up the frontend. I'll show this with vanilla JavaScript/React patterns — the same approach works if you're building with Next.js or any other framework.

Initialize the Client

import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY,
  {
    realtime: {
      params: {
        eventsPerSecond: 20 // Cap this client's outbound message rate
      }
    }
  }
);

A note on that eventsPerSecond parameter: it rate-limits the messages this client sends (presence updates and broadcasts), not what it receives. It won't stop a hot auction from pushing dozens of Postgres Changes per second into your UI; for that, batch or throttle your state updates client-side. It's still worth setting so your own presence churn doesn't trip the server's rate limiter.
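Because a hot auction can push many change events per second, it helps to coalesce them before rendering. A minimal, framework-agnostic buffer sketch; `onFlush` is where you would apply the batch to your state:

```javascript
// Coalesce bursts of realtime events into periodic UI flushes.
// onFlush receives every event collected since the last flush.
function createEventBuffer(onFlush, intervalMs = 50) {
  let pending = [];
  let timer = null;
  const flush = () => {
    if (timer) { clearTimeout(timer); timer = null; }
    if (pending.length === 0) return;
    const batch = pending;
    pending = [];
    onFlush(batch);
  };
  return {
    push(event) {
      pending.push(event);
      if (!timer) timer = setTimeout(flush, intervalMs);
    },
    flush // force a flush, e.g. on unmount
  };
}

// Usage: buffer.push(payload.new) inside the postgres_changes handler,
// then apply the whole batch to React state in onFlush.
```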

Subscribe to an Auction Channel

function subscribeToAuction(auctionId, callbacks) {
  const channel = supabase.channel(`auction:${auctionId}`);

  channel
    // Listen for new bids via Postgres Changes
    .on('postgres_changes', {
      event: 'INSERT',
      schema: 'public',
      table: 'bids',
      filter: `auction_id=eq.${auctionId}`
    }, (payload) => {
      callbacks.onNewBid(payload.new);
    })

    // Listen for auction status changes
    .on('postgres_changes', {
      event: 'UPDATE',
      schema: 'public',
      table: 'auctions',
      filter: `id=eq.${auctionId}`
    }, (payload) => {
      callbacks.onAuctionUpdate(payload.new);
    })

    // Listen for outbid broadcast notifications
    .on('broadcast', { event: 'outbid' }, ({ payload }) => {
      callbacks.onOutbid(payload);
    })

    // Track active bidders via Presence
    .on('presence', { event: 'sync' }, () => {
      const state = channel.presenceState();
      const bidderCount = Object.keys(state).length;
      callbacks.onPresenceUpdate(bidderCount, state);
    })

    .subscribe(async (status) => {
      if (status === 'SUBSCRIBED') {
        // getUser() returns a promise; resolve it before tracking presence
        const { data: { user } } = await supabase.auth.getUser();
        await channel.track({
          user_id: user?.id,
          status: 'watching',
          joined_at: new Date().toISOString()
        });
      }
    });

  return channel;
}

React Hook for Auction Subscriptions

import { useState, useEffect, useCallback } from 'react';

function useAuction(auctionId) {
  const [auction, setAuction] = useState(null);
  const [bids, setBids] = useState([]);
  const [bidderCount, setBidderCount] = useState(0);
  const [isOutbid, setIsOutbid] = useState(false);

  useEffect(() => {
    // Fetch initial state
    async function loadAuction() {
      const { data: auctionData } = await supabase
        .from('auctions')
        .select('*')
        .eq('id', auctionId)
        .single();
      setAuction(auctionData);

      const { data: bidData } = await supabase
        .from('bids')
        .select('*')
        .eq('auction_id', auctionId)
        .order('amount', { ascending: false })
        .limit(20);
      setBids(bidData || []);
    }
    loadAuction();

    // Subscribe to real-time updates
    const channel = subscribeToAuction(auctionId, {
      onNewBid: (bid) => {
        // Drop optimistic placeholders and dedupe; the authoritative row wins
        setBids(prev => [bid, ...prev.filter(b => !b._optimistic && b.id !== bid.id)].slice(0, 20));
        setIsOutbid(false);
      },
      onAuctionUpdate: (updated) => setAuction(updated),
      onOutbid: async (payload) => {
        // getUser() is async; without the await, the comparison is always against undefined
        const { data: { user } } = await supabase.auth.getUser();
        if (payload.outbid_user === user?.id) {
          setIsOutbid(true);
        }
      },
      onPresenceUpdate: (count) => setBidderCount(count)
    });

    return () => {
      supabase.removeChannel(channel);
    };
  }, [auctionId]);

  const placeBid = useCallback(async (amount) => {
    const user = (await supabase.auth.getUser()).data.user;
    const { data, error } = await supabase
      .from('bids')
      .insert({
        auction_id: auctionId,
        amount: parseFloat(amount),
        user_id: user.id
      })
      .select()
      .single();

    if (error) throw new Error(error.message);
    return data;
  }, [auctionId]);

  return { auction, bids, bidderCount, isOutbid, placeBid };
}

Handling Race Conditions and Bid Validation

Race conditions are the single biggest source of bugs in auction systems. Here's how I handle them.

Server-Side: PostgreSQL Does the Heavy Lifting

The SELECT ... FOR UPDATE in our trigger function is the first line of defense. But there's another pattern I've started using — advisory locks for high-contention auctions:

CREATE OR REPLACE FUNCTION place_bid_safe(
  p_auction_id UUID,
  p_user_id UUID,
  p_amount DECIMAL
)
RETURNS TABLE(bid_id UUID, new_high DECIMAL) AS $$
DECLARE
  v_lock_key BIGINT;
  v_bid_id UUID;
BEGIN
  -- Derive a deterministic 64-bit lock key from the auction UUID.
  -- Strip the hyphens first (they aren't valid hex digits) and take
  -- 16 hex chars = exactly 64 bits.
  v_lock_key := ('x' || left(replace(p_auction_id::text, '-', ''), 16))::bit(64)::bigint;
  
  -- Acquire advisory lock (blocks concurrent bids on same auction)
  PERFORM pg_advisory_xact_lock(v_lock_key);

  -- Now safe to insert (trigger handles validation)
  INSERT INTO bids (auction_id, user_id, amount)
  VALUES (p_auction_id, p_user_id, p_amount)
  RETURNING id INTO v_bid_id;

  RETURN QUERY
  SELECT v_bid_id, p_amount;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

Call this from the client using Supabase's RPC:

const { data, error } = await supabase.rpc('place_bid_safe', {
  p_auction_id: auctionId,
  p_user_id: user.id,
  p_amount: bidAmount
});
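If you need the same lock key in application code (say, for logging lock contention), the derivation can be mirrored in JavaScript. A sketch: UUID hyphens aren't valid hex digits, so strip them first; 16 hex characters give exactly 64 bits, and `BigInt.asIntN` reinterprets the value as signed, matching PostgreSQL's bigint:

```javascript
// Derive the same signed 64-bit advisory lock key as the SQL function:
// strip hyphens, take 16 hex chars (= 64 bits), parse as hex, then
// reinterpret as a signed 64-bit integer to match Postgres bigint.
function advisoryLockKey(auctionId) {
  const hex = auctionId.replace(/-/g, '').slice(0, 16);
  return BigInt.asIntN(64, BigInt('0x' + hex));
}

console.log(advisoryLockKey('12345678-9abc-def0-1234-56789abcdef0'));
// → 1311768467463790320n
console.log(advisoryLockKey('ffffffff-ffff-ffff-ffff-ffffffffffff'));
// → -1n (top bit set reads as negative, same as Postgres bigint)
```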

Client-Side: Optimistic UI with Rollback

Show the bid immediately in the UI, but be ready to roll it back if the server rejects it:

async function handleBidSubmit(amount) {
  const optimisticBid = {
    id: crypto.randomUUID(),
    amount,
    user_id: user.id,
    placed_at: new Date().toISOString(),
    _optimistic: true
  };

  // Show immediately
  setBids(prev => [optimisticBid, ...prev]);

  try {
    await placeBid(amount);
    // Real bid will arrive via Realtime and replace optimistic one
  } catch (err) {
    // Remove optimistic bid on failure
    setBids(prev => prev.filter(b => b.id !== optimisticBid.id));
    showError(err.message);
  }
}
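The trickiest part of the rollback pattern is reconciling the optimistic placeholder when the authoritative row arrives over Realtime. Extracting that into a pure helper keeps it testable in isolation; `_optimistic` is the flag from the snippet above:

```javascript
// Merge an authoritative bid from Realtime into the local list:
// drop optimistic placeholders, dedupe by id, keep newest-first, cap length.
function mergeIncomingBid(bids, incoming, max = 20) {
  const rest = bids.filter(b => !b._optimistic && b.id !== incoming.id);
  return [incoming, ...rest].slice(0, max);
}

// In the hook: onNewBid: (bid) => setBids(prev => mergeIncomingBid(prev, bid))
```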

Presence Tracking for Active Bidders

Showing how many people are watching an auction creates urgency. Presence tracking is dead simple with Supabase:

// Update user status when they start bidding
async function updatePresenceStatus(channel, status) {
  await channel.track({
    user_id: user.id,
    status, // 'watching', 'bidding', 'won'
    last_active: new Date().toISOString()
  });
}

On the display side, you can break down the presence state to show how many are actively bidding vs. just watching:

function parseBidderStats(presenceState) {
  const users = Object.values(presenceState).flat();
  return {
    total: users.length,
    bidding: users.filter(u => u.status === 'bidding').length,
    watching: users.filter(u => u.status === 'watching').length
  };
}

Performance Tuning and Production Considerations

Throttling and Debouncing

A bidding war can generate dozens of events per second. Here's what I configure:

  • Client config: eventsPerSecond: 20 caps how many messages this client sends per second (presence churn, broadcasts)
  • Client-side: Debounce the bid button at 300ms to prevent double-clicks
  • UI updates: Batch incoming bids and render with requestAnimationFrame for smooth bid-list animations
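The 300ms button guard can be a leading-edge debounce: accept the first click, suppress repeats inside the window. A sketch with an injectable clock so it's unit-testable:

```javascript
// Leading-edge debounce: fire immediately, then suppress calls
// for `waitMs`. The `now` parameter is injectable for testing.
function debounceLeading(fn, waitMs = 300, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last < waitMs) return false; // suppressed
    last = t;
    fn(...args);
    return true;
  };
}

// Usage: const guardedBid = debounceLeading(() => placeBid(amount), 300);
```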

Auction End Timing

Don't trust the client clock. Use a PostgreSQL cron job via pg_cron:

-- Run every 10 seconds to close expired auctions.
-- Sub-minute schedules like '10 seconds' require pg_cron 1.5+;
-- older versions only support standard 5-field (minute-granularity) cron.
SELECT cron.schedule(
  'close-expired-auctions',
  '10 seconds',
  $$
  UPDATE auctions
  SET status = CASE
    WHEN highest_bidder_id IS NOT NULL THEN 'sold'
    ELSE 'ended'
  END
  WHERE status = 'active'
  AND ends_at <= NOW();
  $$
);
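On the client, derive countdowns from server time rather than the local clock. A sketch of a one-time clock-offset helper; `serverTimeMs` would come from any trusted response, such as a `NOW()` value selected alongside the auction:

```javascript
// Compute a clock offset once from a trusted server timestamp,
// then derive all countdowns from corrected local time.
function makeServerClock(serverTimeMs, localTimeMs = Date.now()) {
  const offset = serverTimeMs - localTimeMs;
  return {
    now: (localNow = Date.now()) => localNow + offset,
    msRemaining: (endsAtMs, localNow = Date.now()) =>
      Math.max(0, endsAtMs - (localNow + offset))
  };
}
```

Display the countdown from `msRemaining`, but treat the database's pg_cron close as the only authoritative end of the auction.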

Anti-Snipe Extension

Most auction platforms extend the deadline if a bid comes in during the last few seconds:

-- Add to the process_new_bid trigger
IF v_auction.ends_at - NOW() < INTERVAL '30 seconds' THEN
  UPDATE auctions
  SET ends_at = ends_at + INTERVAL '30 seconds'
  WHERE id = NEW.auction_id;
END IF;

Supabase Realtime vs Alternatives

I've used most of these in production. Here's an honest comparison:

| Feature | Supabase Realtime | Pusher | Ably | Firebase RTDB | Socket.io (self-hosted) |
|---|---|---|---|---|---|
| Native DB sync | ✅ PostgreSQL WAL | ❌ Separate service | ❌ Separate service | ✅ JSON tree | ❌ Manual |
| Latency (p99) | ~80-100ms | ~60ms | ~50ms | ~100ms | ~40ms (depends on infra) |
| Max events/sec | 200k+ | 10k (Pro) | 50k | 100k | Unlimited (you scale it) |
| Auth integration | Built-in (RLS + JWT) | Custom | Token-based | Firebase Auth | Custom |
| Presence | ✅ Built-in | ✅ Built-in | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Free tier | 500K MAU, 200 concurrent | 100 connections | 6M msgs/mo | 1GB stored | $0 (hosting costs) |
| Pro pricing | $25/mo | $49/mo | $29/mo | Pay-as-you-go | ~$100-500/mo (AWS) |
| Best for | DB-centric real-time apps | Simple pub/sub | High reliability | Mobile apps | Full control |

For an auction system specifically, Supabase wins because your bids are already in PostgreSQL. You don't need to sync between a database and a separate pub/sub system. The bid hits the DB, the DB triggers the WebSocket push. One source of truth.

If you're building on a headless CMS architecture, Supabase fits naturally alongside content delivery without adding another service to manage.

Deploying and Scaling Your Auction System

For most projects, Supabase's managed Pro tier at $25/month handles up to 10,000 daily auctions comfortably. Here's what to watch:

  • Connection limits: Pro tier gives you 500 concurrent Realtime connections. Each client multiplexes all of its channels over a single WebSocket, so the cap is on simultaneous users, not auctions. Beyond that, you'll need a higher tier or a quota increase.
  • WAL size: High-volume bidding generates significant WAL traffic. Monitor your replication slot to avoid disk bloat.
  • Channel count: Each auction gets its own channel. With thousands of active auctions, test that your client properly unsubscribes from ended auctions.

For a frontend built with Astro or Next.js, the Supabase JS client works identically — just make sure you're initializing it client-side for Realtime subscriptions.

If you're building something that needs to handle serious scale — hundreds of thousands of concurrent bidders — reach out to us. We've architected these systems at scale and can help you avoid the pitfalls. You can also check our pricing page for project-based engagements.

FAQ

How many concurrent bidders can Supabase Realtime handle? Supabase Realtime can handle over 200,000 events per second across distributed servers on their managed platform. The Pro tier at $25/month supports up to 500 concurrent connections per project. For larger auctions, the Enterprise tier offers custom limits, or you can self-host the Realtime server (it's open source) on your own infrastructure.

Is Supabase Realtime fast enough for a live auction? Yes. In my testing, the end-to-end latency from bid insertion to client notification averages around 50-80ms, with p99 under 100ms. For context, a human reaction time is about 200-300ms, so bids appear effectively instantaneous. The bottleneck is rarely Supabase — it's usually the client's network connection.

How do I prevent race conditions when two people bid simultaneously? Use PostgreSQL's SELECT ... FOR UPDATE row-level locking inside a trigger function, or use advisory locks via pg_advisory_xact_lock(). This serializes bid processing per auction so only one bid is validated at a time. The "losing" bid still gets validated — it just sees the updated high bid from the winner and either succeeds (if it's still higher) or fails with an appropriate error.

Can I use Supabase Realtime with Next.js or Astro? Absolutely. The @supabase/supabase-js client works in any JavaScript environment. For Next.js, initialize the Supabase client in a client component (since Realtime needs browser WebSockets) and use it inside useEffect hooks. For Astro, use it in client-side interactive islands. The subscription code is identical regardless of your framework choice.

What happens if a user's connection drops mid-auction? Supabase Realtime automatically attempts reconnection. When the client reconnects and resubscribes, it receives the current state. For critical auctions, I recommend also fetching the latest auction state via a standard query on reconnection to ensure nothing was missed during the disconnection window. The Presence system will automatically remove the disconnected user after a timeout.

How do I handle auction end times accurately? Never rely on client-side timers for auction end times — they can be manipulated. Use PostgreSQL's pg_cron extension to check for and close expired auctions every 10 seconds server-side. Send the server timestamp to clients so they can display a countdown, but the actual end determination always happens in the database.

Is Supabase Realtime free for small projects? Supabase's free tier includes Realtime with up to 200 concurrent connections and 500,000 monthly active users. That's enough for a hobby auction site or an MVP. If you're running a production auction platform with meaningful traffic, the Pro tier at $25/month with $0.09/GB egress is where you'll want to start. It's significantly cheaper than running your own WebSocket infrastructure.

How do I test a real-time bidding system locally? Use the Supabase CLI (supabase start) to run a local Supabase instance with Realtime enabled. Open multiple browser tabs to simulate multiple bidders. For load testing, I use a simple Node.js script that creates 100+ Supabase clients and has them bid against each other on a timer. This catches race conditions and helps you tune your eventsPerSecond parameter before going to production.