Dedicated Postgres + pgvector on NVMe. Fixed monthly pricing, predictable performance, and real migration help from Supabase, Neon, Pinecone, or self-hosted. No auth service, no edge functions, no surprise bill.
2,000 QPS • <4ms p50 • from $35/node
Tell us about your setup. If Rivestack isn't a better fit than what you have today, we'll tell you.
postgresql://appuser@db-7q9m2p.eu.rivestack.io/primary

Powering ask.rivestack.io — semantic search over 30 days of Hacker News
Vector workloads don't fit neatly into general-purpose Postgres pricing. Most teams we talk to are losing money, losing sleep, or both.
Compute add-ons, storage, egress, and vector index memory all stack up. The pgvector workload you run for one feature becomes your biggest Supabase line item.
Serverless Postgres is elegant until the first query of the day takes 2 seconds. Usage-based pricing feels cheap until it isn't.
You're paying for a dedicated vector database while your app data lives in Postgres. One more service, one more bill, one more copy of the data to keep in sync.
You chose a VPS to save money. Now you maintain Patroni, backups, PITR, upgrades, and HNSW tuning at 2am instead of shipping.
You're on Supabase, Neon, Pinecone, or a self-hosted VPS, and pgvector is hurting — cost, latency, ops, or all three. Tell us what you're running and what hurts. We'll tell you if Rivestack fixes it, what the migration looks like, and what it will actually cost.
Rivestack isn't a broader platform. It's a narrower one — on purpose. We do managed Postgres + pgvector, and we do it well.
Fixed per-node pricing. No surprise invoices from query counts, egress, or vector-row overage. You pick the node size, you know the bill.
Dedicated NVMe, tuned HNSW indexes, no noisy neighbours. 2,000+ QPS at sub-4ms p50 on a $35/month node — measured, not marketed.
Patroni HA, pgBackRest backups, 14-day PITR, monitoring, and SSL handled for you. No auth service, no edge functions, no magic you didn't ask for.
We look at your current setup and tell you exactly how to move: pg_dump, logical replication, or cutover plan. Included with your workload review — not a paid add-on.
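"Tuned HNSW" is concrete, not hand-waving. The sketch below shows the kind of pgvector settings involved; the specific values are illustrative and depend on your recall and latency targets:

```sql
-- Build an HNSW index for cosine distance. m and ef_construction trade
-- build time and memory for recall (values here are illustrative defaults).
CREATE INDEX ON docs USING hnsw (emb vector_cosine_ops)
  WITH (m = 16, ef_construction = 64);

-- At query time, raise ef_search for higher recall at some latency cost.
SET hnsw.ef_search = 40;
```

On a managed node these knobs are set for your workload shape rather than left at extension defaults.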
Same dedicated compute. Fraction of the cost. None of the platform bloat.
| | Rivestack | Supabase | Neon |
|---|---|---|---|
| Monthly cost | $35 | ~$105 | ~$69+ |
| Compute | 2 vCPU · 4 GB | Shared · 1 GB* | Serverless |
| Storage | NVMe | gp3 SSD | Cloud SSD |
| pgvector | Tuned (HNSW) | Extension only | Extension only |
| Backups | Daily + 14d PITR | Daily | Yes |
| Vector perf | 2,150 QPS · 2.8ms | ~410 QPS · 18ms | Cloud SSD limited |
| Terraform | ✓ | ✓ | ✗ |
Supabase and Neon are excellent platforms with broader feature sets — auth, storage, edge functions, realtime. If you need those, use them. Rivestack is the focused choice for teams who want managed Postgres + pgvector without the platform tax.
Measured on the $35/month plan. No synthetic loads, no cherry-picked configs.
Throughput
p50 Latency
Recall
OpenAI text-embedding-3-small (1536 dimensions). Full benchmark methodology and reproduction scripts on our docs.
From "maybe this could be cheaper" to running on Rivestack — usually in a week.
Current provider, row count, vector dimensions, QPS/latency targets, and what's hurting. One email, no call required.
We reply with a plan recommendation, expected performance, and a migration path from Supabase, Neon, Pinecone, or self-hosted. Or a clear "not a fit" if that's the honest answer.
Standard Postgres means standard tools: pg_dump, pg_restore, or logical replication. We help with cutover. Most teams are on Rivestack the same week.
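For a small database, the whole move is two commands. The hostnames below are placeholders; use your real source DSN and the DSN from your Rivestack dashboard:

```shell
# Dump from the old provider (custom format keeps the restore flexible).
pg_dump "postgresql://user@old-provider.example.com/app" \
  --format=custom --no-owner --no-privileges --file=app.dump

# Restore into Rivestack. pgvector columns and indexes restore like any
# other Postgres objects, because that's all they are.
pg_restore --dbname="postgresql://appuser@db-7q9m2p.eu.rivestack.io/primary" \
  --no-owner app.dump
```

For always-on workloads where a dump window is unacceptable, logical replication with a planned cutover is the path we help with.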
ask.rivestack.io runs on a real Rivestack cluster. Every query hits pgvector on NVMe.
It's just PostgreSQL. Use your existing driver.
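The snippet that follows passes embeddings as text-format `$2::vector` parameters. If you hold embeddings as a float slice, a small helper can serialize them — `vectorLiteral` is our name for illustration, not a pgx or Rivestack API (the pgvector-go package offers a native codec if you prefer):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// vectorLiteral formats a float slice as a pgvector text literal,
// e.g. []float32{0.1, 0.2} becomes "[0.1,0.2]".
func vectorLiteral(v []float32) string {
	parts := make([]string, len(v))
	for i, x := range v {
		parts[i] = strconv.FormatFloat(float64(x), 'f', -1, 32)
	}
	return "[" + strings.Join(parts, ",") + "]"
}

func main() {
	fmt.Println(vectorLiteral([]float32{0.1, -0.2, 0.3}))
}
```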
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgresql://appuser@db-7q9m2p.eu.rivestack.io/primary")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// pgvector is already enabled — just create your table
	if _, err := conn.Exec(ctx, `CREATE TABLE IF NOT EXISTS docs (
		id   bigserial PRIMARY KEY,
		text text NOT NULL,
		emb  vector(1536)
	)`); err != nil {
		log.Fatal(err)
	}

	// embedding and query come from your embedding model (1536 dims for
	// text-embedding-3-small), serialized as pgvector literals like "[0.012,-0.034,...]"
	embedding := "[...]" // placeholder: your document's embedding
	query := "[...]"     // placeholder: your search query's embedding

	// Insert a document with its embedding
	if _, err := conn.Exec(ctx, `INSERT INTO docs (text, emb)
		VALUES ($1, $2::vector)`, "deploy LLMs in production", embedding); err != nil {
		log.Fatal(err)
	}

	// Nearest-neighbor search in < 4ms
	rows, err := conn.Query(ctx, `SELECT text, emb <=> $1::vector AS dist
		FROM docs ORDER BY dist LIMIT 5`, query)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var text string
		var dist float64
		if err := rows.Scan(&text, &dist); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%.4f %s\n", dist, text)
	}
}

Manage Rivestack like the rest of your infra. Create, scale, and destroy clusters with one file and one command.
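A sketch of what that file looks like. The provider source and resource attributes below are illustrative assumptions, not the published Rivestack provider schema; check the docs for the real names:

```hcl
terraform {
  required_providers {
    rivestack = {
      source = "rivestack/rivestack" # assumed provider source
    }
  }
}

# Hypothetical resource shape — attribute names are illustrative.
resource "rivestack_cluster" "search" {
  region    = "eu"
  plan      = "starter-35" # the $35/month node
  nodes     = 2            # add nodes for HA
  pitr_days = 14
}
```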
Built in France. Infrastructure in EU and US-East.
Write plain English, get working pgvector SQL. No syntax to memorize.
One price per node. No per-query billing, no egress surcharges, no vector-count overage. The bill on day 30 is the same as the quote on day 1.
Production-ready dedicated PostgreSQL with NVMe storage, automated backups, and monitoring. Add nodes for HA.
High-performance dedicated PostgreSQL for demanding workloads. Add nodes for automatic failover.
What buyers actually ask before moving a pgvector workload.
For pgvector-heavy workloads on dedicated compute, yes — almost always. A 2 vCPU / 4 GB Supabase compute add-on runs ~$105/month. The equivalent dedicated node on Rivestack is $35/month, on NVMe instead of gp3. Send us your current plan and row counts and we'll give you a realistic number, not a marketing one. If Rivestack is not cheaper for your workload, we'll say so.
Rivestack runs dedicated Postgres on always-on NVMe nodes. No scale-to-zero, no cold starts, no cache warm-up before your first query of the day. If p95 latency and cost unpredictability are what's hurting you on Neon, that's exactly the problem we solve.
It's a real migration. pgvector with HNSW, tuned correctly on NVMe, handles the vast majority of Pinecone workloads at a fraction of the cost — and you get SQL joins, filters, and transactions in the same database as your app data. Submit your workload shape (rows, dimensions, QPS, filters) and we'll tell you whether it fits. If you actually need a specialized vector DB at your scale, we'll tell you that too.
If you enjoy running Patroni, pgBackRest, PITR, HNSW tuning, upgrades, and failover testing — don't switch, keep your VPS. Rivestack is for teams who were doing that and decided the ops burden was no longer worth the $30/month savings. You get the same raw Postgres, just without the pager.
You send us what you have: current provider, row counts, vector dimensions, QPS/latency targets, and what's hurting (cost, latency, ops). We respond with a specific plan recommendation, an expected performance estimate, and an honest yes/no on whether Rivestack is a better fit than what you have today. No call required. No sales pressure. If you're not a fit, we'll say so in the same reply.
It's standard PostgreSQL, so yes. For most databases, a pg_dump / pg_restore takes minutes. For larger or always-on workloads, we help with logical replication cutover. Migration help is included in the workload review — we don't charge extra for it.
Rivestack is a French company and our EU region runs entirely within the European Union. Your data never leaves EU territory unless you explicitly choose our US-East region. DPAs are available on request. If Europe-friendly hosting is a hard requirement for your team, this is handled by default.
We don't have SOC2 yet. If you need enterprise compliance certifications today, Supabase or AWS RDS are the right choice. Rivestack is focused on startups, agencies, indie hackers, and small teams who need real pgvector performance at predictable cost.
Fixed monthly per node. No per-query billing, no egress surcharges, no vector-count overage. You pick the plan, you know the bill. If you need more capacity, you resize the node — you don't open an invoice and find a 4× spike.
Send us your current setup. In 48 hours you'll have a real answer on whether Rivestack is cheaper, faster, and less painful to run than what you have today.
Tell us about your setup. If Rivestack isn't a better fit than what you have today, we'll tell you in the same reply.