Gil Penner
Convex Community · 4mo ago
3 replies
Storing big payloads without killing search speed

Advice
Hey! 👋

I’m not super experienced with backend and especially DB stuff, but Convex makes it easy enough that I’m diving in anyway, so thanks for that.

I’ve got a question about handling a lot of data while still keeping search reasonably fast.

Here’s the situation:
I’m using an API that returns a structured JSON payload of around 200 lines per item. Since it’s a paid API, I want to store everything. Right now I only need about a third of the fields, but if I ever decide to support more later on, I don’t want to pay again to fetch the same data.
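To make the “third of the fields” idea concrete, here’s a minimal sketch in plain TypeScript of peeling the queryable subset off the raw payload before writing, while keeping the full response as a string so the API never has to be re-fetched. The payload shape and field names here are invented for illustration:

```typescript
// Hypothetical raw item from the paid API (~200 lines of JSON per item).
type RawItem = Record<string, unknown>;

// The handful of fields the table/search actually needs today (made-up names).
const HOT_FIELDS = ["id", "name", "status", "price", "updatedAt"] as const;

// Split one API item into the small, indexable subset and the full raw blob.
function splitItem(raw: RawItem): { hot: Record<string, unknown>; rawJson: string } {
  const hot: Record<string, unknown> = {};
  for (const key of HOT_FIELDS) {
    if (key in raw) hot[key] = raw[key];
  }
  // Keep the complete payload so nothing paid-for is ever lost.
  return { hot, rawJson: JSON.stringify(raw) };
}
```

Adding a field later is then just extending `HOT_FIELDS` and backfilling from the stored raw JSON, with no new API calls.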

I’m thinking about two approaches:

1) Store the full payload in the database.
It’s just an MVP right now, but I still want decent search speed. Each tenant could easily have an average of 15k records. The only way I see this working is creating around 10 indexes, but that feels like it might get expensive in terms of storage.

2) Only store the fields needed for the table.
That’s maybe 10 fields. I’d still need around 10 indexes for fast queries. The rest of the raw payload could be dumped in storage.
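For what approach 2 might look like as a Convex schema, here’s a sketch (not a drop-in; the table names, field names, and index choices are invented for illustration): a hot table holding only the indexed fields plus a pointer to a cold table with the untouched payload.

```typescript
// convex/schema.ts — illustrative sketch only.
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  // Hot table: only the fields the UI queries, with the indexes you actually use.
  items: defineTable({
    tenantId: v.string(),
    name: v.string(),
    status: v.string(),
    price: v.number(),
    rawId: v.id("rawPayloads"), // pointer to the full API response
  })
    // Leading every index with tenantId keeps each query scoped to one tenant.
    .index("by_tenant_name", ["tenantId", "name"])
    .index("by_tenant_status", ["tenantId", "status"])
    .index("by_tenant_price", ["tenantId", "price"]),

  // Cold table: the complete payload, fetched by id only when needed.
  rawPayloads: defineTable({
    json: v.string(), // JSON.stringify of the full API response
  }),
});
```

Since a query only reads the index it names, the cold `rawPayloads` rows cost storage but not query time; and if a single payload ever approached the per-document size limit, Convex file storage would be the escape hatch instead of a table.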

So what’s the better approach?
Is there a different angle I’m missing?
Any helpers I could use?

TIA