Background
When we started building Spindare's social feed, the architecture decision I spent the most time on was real-time data. A social feed lives or dies on latency. If a user posts a challenge and their friends see it 30 seconds later, that's not a social feed — that's email.
I evaluated Firebase Realtime Database and Supabase Realtime over about two weeks. Here's what I found.
The Test Setup
I built the same basic feed in both — a list of posts, each with like counts, comment counts, and a timestamp. New posts should appear at the top in real time. Like counts should update live.
Simulated load: 500 concurrent WebSocket connections, each subscribed to the same feed channel, with 50 writes per second (mix of new posts and like updates).
I ran this from a $5 DigitalOcean droplet using k6 for load generation.
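For reference, the load generator looked roughly like this k6 script. This is a sketch, not the exact harness: the WebSocket URL and the subscribe-message shape are placeholders (each provider has its own wire protocol), and the write traffic came from a separate script.

```javascript
import ws from 'k6/ws';
import { check } from 'k6';

export const options = {
  vus: 500,        // 500 virtual users = 500 concurrent WebSocket connections
  duration: '5m',
};

export default function () {
  // Placeholder endpoint — swap in the Firebase or Supabase Realtime URL under test
  const url = 'wss://realtime.example.com/socket';

  const res = ws.connect(url, {}, (socket) => {
    socket.on('open', () => {
      // Every connection subscribes to the same feed channel
      // (message shape is illustrative; each provider differs)
      socket.send(JSON.stringify({ action: 'subscribe', channel: 'feed' }));
    });

    // Count incoming events; the 50 writes/sec were generated separately
    let events = 0;
    socket.on('message', () => { events++; });

    socket.setTimeout(() => socket.close(), 5 * 60 * 1000);
  });

  check(res, { 'connected (101)': (r) => r && r.status === 101 });
}
```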
Firebase: What Surprised Me
Firebase's Realtime Database handled the WebSocket connections easily. Connection latency was low, and the SDK's automatic reconnect logic was solid. With 500 concurrent connections, writes consistently propagated in under 200ms.
Where it fell apart: data modeling.
Firebase's document model pushed me toward denormalizing aggressively. To show a post with author info, like count, and comment count in real time, I ended up with a structure where updates to any of those three things triggered re-renders of the entire post object. At 50 writes/second with 500 listeners, that's 25,000 event callbacks per second on the client side.
The SDK batches these, but the result was visible jank on the feed scroll whenever like storms happened (lots of users liking the same post at once). The feed would stutter as the reconciler processed the burst.
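To make the fan-out concrete, here's a plain-JS sketch of the problem. The denormalized post node is hypothetical (not the actual Spindare schema), but the shape is the point: because the post is stored as one value, a value listener re-emits the entire object whenever any field changes, so callback volume is writes × listeners.

```javascript
// Hypothetical denormalized post node, as stored in Realtime Database
const post = {
  id: 'p1',
  author: { id: 'u1', name: 'Ada', avatarUrl: '/a/u1.png' },
  likeCount: 10,
  commentCount: 2,
  createdAt: 1700000000000,
};

// A like bumps a single field, but value listeners still receive
// (and the UI re-renders) the whole post object
function applyLike(post) {
  return { ...post, likeCount: post.likeCount + 1 };
}

// Aggregate client-side callback volume under the test load
const writesPerSecond = 50;
const listeners = 500;
const callbacksPerSecond = writesPerSecond * listeners;

console.log(applyLike(post).likeCount); // 11
console.log(callbacksPerSecond);        // 25000
```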
Firestore handled this better with its granular field-level update model, but Firestore's real-time pricing at scale gets scary fast.
Supabase Realtime: What Worked
Supabase Realtime uses Postgres logical replication under the hood, which means you're subscribing to actual database changes — row-level, column-level if you want. This maps much better to a social feed's actual data model.
```javascript
supabase
  .channel('feed')
  // New posts: subscribe to INSERTs on the posts table
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'posts' },
    (payload) => insertPost(payload.new)
  )
  // Like counts: subscribe to UPDATEs for one specific post only
  .on(
    'postgres_changes',
    { event: 'UPDATE', schema: 'public', table: 'posts', filter: `id=eq.${postId}` },
    (payload) => updateLikeCount(payload.new.like_count)
  )
  .subscribe();
```

Granular subscriptions meant like-count updates only triggered re-renders for the specific post being liked, not the entire feed. Under the same 500-connection / 50-writes-per-second load, the feed stayed smooth.
Where Supabase Falls Short
Connection limits. Supabase's free tier caps you at 200 concurrent Realtime connections. The Pro plan raises this to 500. For a scaling social app, you'll hit this fast.
The workaround: use Realtime only for critical live updates (new posts appearing, notification badges) and poll for less-time-sensitive data (like counts on older posts). We implemented a hybrid — Realtime for new posts and notifications, 30-second polling for like/comment counts on posts older than 2 minutes.
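The routing rule behind that hybrid is simple enough to express as a pure function. A sketch — the 2-minute cutoff and 30-second interval come from the setup above; the function and field names are mine:

```javascript
const REALTIME_WINDOW_MS = 2 * 60 * 1000; // posts younger than 2 min stay live
const POLL_INTERVAL_MS = 30 * 1000;       // older posts refresh every 30 sec

// Decide per post: live Realtime subscription, or periodic polling for counts
function countsStrategy(post, now = Date.now()) {
  return now - post.createdAt < REALTIME_WINDOW_MS
    ? { mode: 'realtime' }
    : { mode: 'poll', intervalMs: POLL_INTERVAL_MS };
}

// A fresh post vs. a 10-minute-old post
const now = Date.now();
console.log(countsStrategy({ createdAt: now - 30000 }, now).mode);  // 'realtime'
console.log(countsStrategy({ createdAt: now - 600000 }, now).mode); // 'poll'
```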
Also: Supabase Realtime's filter syntax is limited. Complex multi-table joins aren't supported in the subscription filter — you subscribe to a table and filter client-side if needed.
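In practice that means subscribing broadly and discarding irrelevant rows in the handler. A sketch — the `visiblePostIds` set and the handler name are illustrative, not from our codebase:

```javascript
// IDs of posts currently rendered in the viewport (illustrative)
const visiblePostIds = new Set(['p1', 'p2']);

// Handler for a table-wide UPDATE subscription: the subscription filter
// can't express "posts this user can currently see", so we drop
// rows that don't matter on the client
function handlePostUpdate(payload, visible = visiblePostIds) {
  const row = payload.new;
  if (!visible.has(row.id)) return null; // not on screen — ignore
  return { id: row.id, likeCount: row.like_count };
}

console.log(handlePostUpdate({ new: { id: 'p1', like_count: 7 } }));
// { id: 'p1', likeCount: 7 }
console.log(handlePostUpdate({ new: { id: 'p9', like_count: 3 } })); // null
```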
The Decision
We went with Supabase Realtime. The Postgres-native data model and granular subscriptions were worth more to us than Firebase's connection scalability at this stage. When we hit the connection ceiling, we'll shard by feed segment or move critical real-time paths to a dedicated WebSocket server.
The real answer to "which one scales" is: neither, at true scale. Both require architectural changes when you hit tens of thousands of concurrent connections. But for a social app at launch scale (under 5,000 DAU), Supabase Realtime with a hybrid polling strategy handles it cleanly without the data modeling headaches.