noob saibot
noob saibot4mo ago

Too many reads in a single function execution (Mutation)

I'm getting this error in a mutation while attempting to delete records in a loop with the code below:
await Promise.all(data.map(async (obj) => {
  await ctx.db.delete(obj._id);
}));
The logs indicate that the error is thrown at the delete line. The array "data" contains exactly 2,900 items. I thought this limitation error only occurred when retrieving data, so I'd just like to understand what this error means in the context of a (delete) mutation. How do indexes work in this case to limit the number of reads?
6 Replies
erquhart
erquhart4mo ago
Relevant: https://discord.com/channels/1019350475847499849/1371578598707953756 There's a 4096 limit on reads per function. Each of those 2,900 docs represents a read - each delete also requires a read, which puts you at 5,800 reads. As I said in the other post, touching thousands of docs in a single Convex function is an anti-pattern. Keep 'em light. I do 500 deletes max per function run personally, sometimes less if they're heavy.
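For illustration, a minimal sketch of a bounded-batch delete mutation along those lines (the "messages" table name, the internalMutation wrapper, and the 500-item batch size are assumptions, not from this thread):
import { internalMutation } from "./_generated/server";

// Deletes at most 500 documents per invocation; per the reply above, both the
// read of each doc and its deletion count against the per-function limit, so
// the batch size keeps a single run well under it.
export const deleteBatch = internalMutation({
  args: {},
  handler: async (ctx) => {
    const batch = await ctx.db.query("messages").take(500);
    await Promise.all(batch.map((doc) => ctx.db.delete(doc._id)));
    // Tell the caller whether another batch is likely needed.
    return { hasMore: batch.length === 500 };
  },
});
A caller (an action, cron, or scheduled job) can invoke this repeatedly until hasMore is false.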
noob saibot
noob saibotOP4mo ago
Hi @erquhart, thanks for the reply. I am aware of the Convex limit; I'm looking for ways to write my code so that I won't hit it. In this particular scenario, the deletion is not triggered by the user from the UI but by a backend job, so this is not something I can do "personally". My first attempt was to write a while loop that reads a paginated batch of records (e.g. 1,000) on each iteration until the pagination stops returning a next cursor. I thought that by calling another function I would escape the limit. Another user suggested recursively calling scheduled functions, so I'll try that solution. I'm also using indexes on all my tables, and so far it's OK. But what will happen when there is more data, even with an index?
erquhart
erquhart4mo ago
You're correct that calling another function escapes the limit, e.g., an action can call many mutations, each with its own limit. Recursively scheduled functions are the common pattern here. I'm not fully understanding the "more data even from an index" part, as pagination allows you to traverse arbitrarily large numbers of records. If you pass the pagination options in the recursive scheduling, you can theoretically handle any number of records. So you would control the ceiling on per-function limits via the number of items per page.
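A rough sketch of that recursive pattern, assuming a hypothetical "sales" table and an internal mutation defined in convex/sales.ts (the names and the 500-item page size are made up for illustration):
import { v } from "convex/values";
import { internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";

export const purgeSales = internalMutation({
  args: { cursor: v.union(v.string(), v.null()) },
  handler: async (ctx, args) => {
    // Read one page at a time; numItems caps the documents touched per run.
    const { page, isDone, continueCursor } = await ctx.db
      .query("sales")
      .paginate({ numItems: 500, cursor: args.cursor });
    await Promise.all(page.map((doc) => ctx.db.delete(doc._id)));
    if (!isDone) {
      // Schedule the next batch, passing the cursor forward so each
      // invocation stays within its own read limits.
      await ctx.scheduler.runAfter(0, internal.sales.purgeSales, {
        cursor: continueCursor,
      });
    }
  },
});
Kick it off once with cursor: null (e.g. from an action or a cron) and it reschedules itself until the whole range has been deleted.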
noob saibot
noob saibotOP4mo ago
OK, I'll try to play more with pagination and scheduled functions for these extreme scenarios. For the index, I meant that when I make a query with "withIndex()", the number of records the server scans is limited or reduced. But what would happen if that number grows over time? For example, if I have the table "Sales" and an index "by_user", with the user John Doe:
await ctx.db.query("sales").withIndex("by_user", (q) => q.eq("user", "John Doe"))...
What would happen when John Doe has more than 4096 sales?
erquhart
erquhart4mo ago
If you're using .paginate() for a recursive mutation, and numItems is limited to, say, 500, a higher total number of records just means more recursive mutations to paginate through them all. Each paginate call does its own separate scanning; the whole result set won't be scanned up front.
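A hedged sketch of what a single paginated read against that index could look like (the userId field, the "users" table reference, and the page size are assumptions, not from this thread):
import { v } from "convex/values";
import { internalQuery } from "./_generated/server";

// Reads one page of a user's sales. Only roughly numItems index entries are
// scanned per call, no matter how many sales the user has in total.
export const salesPage = internalQuery({
  args: { userId: v.id("users"), cursor: v.union(v.string(), v.null()) },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("sales")
      .withIndex("by_user", (q) => q.eq("userId", args.userId))
      .paginate({ numItems: 500, cursor: args.cursor });
  },
});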
