Paginated query bandwidth
Hey 🙂 I have been using Convex for a few days now, and I am surprised to find that I have reached a bandwidth of 1.3 GB, consuming more than 500 MB per day. The application I am testing is very simple:
I have a dictionary list of 10k+ words with detailed definitions, and I use a paginated query to list the words. Given that the words themselves don't change, I expected my query to hit the cache and thus consume no bandwidth at all. However, clearly this isn't the case.
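For reference, the query is essentially the following (a minimal sketch assuming a `words` table; my real schema has a few more fields):

```ts
// convex/words.ts
import { query } from "./_generated/server";
import { paginationOptsValidator } from "convex/server";

export const listWords = query({
  args: { paginationOpts: paginationOptsValidator },
  handler: async (ctx, args) => {
    // One page of words at a time; since the table never changes,
    // I expect repeated calls with the same cursor to be served from cache.
    return await ctx.db.query("words").order("asc").paginate(args.paginationOpts);
  },
});
```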
Are paginated queries cached? If they are, I have a hard time seeing what the problem is.
If they are not cached, what can I do to optimise this case?
Thank you for creating Convex, it's really a dream to work with!
8 Replies
Let us know if this thread helps/doesn't help answer your question: https://discord.com/channels/1019350475847499849/1019350478817079338/1193747923595444224
As I am reading in the thread, paginated queries are supposed to be cached as well. In that case, I am perplexed as to what might be causing it, as my query is very simple and should hit the cache every time.
The only thing I can imagine is that, because I use the unofficial Vue client, the client might be doing something that gives the query different arguments on each call. If that is the case, I will try replacing my client with the JavaScript one offered by Convex.
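For example (a hypothetical sketch with the plain JavaScript client, not my actual Vue code), anything a wrapper silently adds or changes between calls would give each call its own cache entry:

```ts
import { ConvexClient } from "convex/browser";
import { api } from "../convex/_generated/api";

const client = new ConvexClient("https://<deployment>.convex.cloud");

// Stable arguments: every subscription with these exact args shares one
// server-side cache entry, so repeated page loads should be served from cache.
client.onUpdate(
  api.words.listWords,
  { paginationOpts: { numItems: 50, cursor: null } },
  (result) => console.log("loaded", result.page.length, "words"),
);

// If a client wrapper injected something volatile into the args (a timestamp,
// a random id, a regenerated cursor, ...), each call would look like a new
// query to Convex and would be re-executed and billed instead of cached.
```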
Hmmm, actually my queries are cached, so now I am really perplexed as to what is causing the bandwidth charges.

Very strange, as I am not managing to reproduce the high consumption I've had for the past few days. I refresh the page, the queries are indeed cached, and my dashboard doesn't show any increase in bandwidth. So I'm not sure what caused the spike in bandwidth over the past two days (500 MB/day), but I guess I'll have to keep an eye on it.
I think I can answer my own question: my table has 8,000 words × 1 KB (the minimum amount counted per document), which comes to 8 MB. Since my query lists all the words in the table, its cost is at least 8 MB, and in practice I've noticed the recorded cost is double that, at 16 MB. Since I am actively working on developing the app, every time I make small changes to the schema the cache is invalidated. I only have to invalidate the cache about 60 times to reach 1 GB of bandwidth (1000/16 ≈ 62), so I guess that explains it 🙂
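Spelling out the arithmetic (rough numbers, assuming the 1 KB per-document minimum mentioned above):

```ts
// Back-of-envelope check of the numbers above.
const documents = 8_000;                           // words in the table
const minBytesPerDoc = 1_024;                      // ~1 KB minimum counted per document read
const fullScanBytes = documents * minBytesPerDoc;  // ≈ 8 MB per uncached run
const recordedPerMiss = 2 * fullScanBytes;         // ≈ 16 MB actually recorded per cache miss
const missesPerGB = 1e9 / recordedPerMiss;         // ≈ 61 cache misses to reach 1 GB
console.log({ fullScanBytes, recordedPerMiss, missesPerGB: Math.round(missesPerGB) });
```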
Just a note: we're reconsidering the rounding, as it ends up causing more stress than it's worth. We'll follow up.
So, following up after a day, my initial analysis doesn't seem to be correct. I went away for a day and came back to another 500 MB of bandwidth consumed. The only possible user is my computer, which was in sleep mode, so I imagine it woke up and refreshed the page every 15-30 minutes. I counted the queries in the log: 50% of the queries hit the cache, and the other 50% didn't. I was expecting a 100% cache hit rate, since nothing changed in my data or in the functions reading the data. It's meant to be data that stays the same with a repetitive query over it, so it really should just be served from cache. What am I missing? This morning it didn't hit the cache for hours on end, for example.

Suggestions:
- It would be nice to have finer-grained visibility into bandwidth charges over the day. Currently the dashboard only shows the daily total, but it would be nice to have a line chart showing bandwidth by the minute, like the one for function invocations.
- It would be nice if the logs showed which variable caused a query to be recalculated.
So the cache can get cleared for various reasons. These reasons can be internal to Convex, how your app is built, or how much data is in the cache. I don't think it's possible to rely on it perfectly to manage limits super precisely.
We do plan on providing more control in our paid plans eventually.