Feedback on limits documentation, invisible "limits"

My app recently ran into the
Too many reads in a single function execution (limit: 4096).
error on a mass-deletion function. I fixed it by using .take(2048) and re-scheduling the function. I had to use 2048 instead of 4096 because I found out that the .delete(record._id) calls also count toward the "read" limit. Looking at the limits page, the only references to "4096" I could find are in the "vector search" and "log streaming" sections, both of which seem unrelated to the "read" limit my function was hitting. Also, I was experiencing this error on another function:
Function myfunction.js:list return value invalid: Object has too many fields (1261 > maximum number 1024)
On the limits page, the only references to 1024 I could find were in the "full text search" section and in the "documents" section, which states that the key limit applies to documents and nested objects. That one seems fair enough, but I got confused because I don't have a document, or any nested objects within one, that exceeds the limit. Rather, I built an object out of a query's array result and returned it, and since the array had more than 1024 elements, it failed. So it would be nice if the documentation explicitly mentioned that this restriction also applies to return values.
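For the second error, here's a small self-contained TypeScript sketch of the failure mode and a possible workaround (the data shapes and names here are illustrative, not the actual schema or a Convex API): returning the raw array instead of collapsing it into one keyed object avoids producing a single value with more than 1024 fields, and the caller can rebuild the lookup itself.

```typescript
// Illustrative only: simulate a query result with more rows than the
// 1024-field-per-object limit allows when keyed into a single object.
const MAX_FIELDS = 1024;
const rows = Array.from({ length: 1261 }, (_, i) => ({
  key: `k${i}`,
  value: i,
}));

// Collapsing the rows into one object yields 1261 top-level fields,
// which is the shape that trips
// "Object has too many fields (1261 > maximum number 1024)".
const asObject = Object.fromEntries(rows.map((r) => [r.key, r.value]));
console.log(Object.keys(asObject).length > MAX_FIELDS); // true

// Workaround: return the array as-is and build the lookup on the
// caller's side, e.g. with a Map, which has no such field limit.
const lookup = new Map(rows.map((r) => [r.key, r.value]));
console.log(lookup.size, lookup.get("k1260")); // 1261 1260
```

The same idea applies to any large keyed result: keep the wire format as an array of entries and only materialize a dictionary after the function returns.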
erquhart — 4w ago
The limit you're hitting, I believe, is due to ctx.db.get/query being called 4096 times. It's not on the limits page, but it is documented here: https://docs.convex.dev/functions/error-handling/#readwrite-limit-errors
Giovani Granzotto
I'm not sure about that. The code only calls .query once and only loops for the delete calls. If the problem were the number of .query calls, I imagine I could raise BATCH_SIZE up to the 32K scanned-document limit with no issues, right?
erquhart — 3w ago
Yeah, the language there feels a bit abstract. Based on what you're seeing, the difference between scanned and read is the tripping point. The two only diverge when an index isn't used, in which case you could scan an arbitrary number of documents just to read a single one, e.g. when using a filter. You're using an index, so the query reads every document it scans. That means 2048 reads plus 2048 writes for the deletes lands you right at the 4096 limit. Keep in mind individual functions should be relatively light, so reading and deleting 2k documents in one go is on the heavy side. If you do less per function (I typically limit batch deletes to 500 per run, or even less for large documents), you shouldn't notice a performance hit. E.g., 4 functions deleting 500 per run and 1 function deleting 2000 should take about the same total time, but the former is much less likely to hit a size limit.
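The arithmetic and the chained-batch pattern described above can be sketched in a few lines of TypeScript. This is illustrative only: opsForBatch, runBatches, and BATCH_SIZE are made-up names, and in a real Convex mutation the "reschedule" step would use the scheduler rather than a plain recursive call.

```typescript
// Illustrative only: both the documents a query reads and each
// .delete() call count toward the same 4096-unit per-execution budget.
const BATCH_SIZE = 500;

function opsForBatch(batchSize: number): number {
  // batchSize reads (via the index) + batchSize deletes.
  return batchSize + batchSize;
}

// Deleting 2048 documents in one run costs 2048 + 2048 = 4096 units,
// right at the limit; a 500-document batch uses only 1000.
console.log(opsForBatch(2048), opsForBatch(500)); // 4096 1000

// Chained batches: each run handles at most BATCH_SIZE documents and
// "reschedules" itself until nothing remains (modeled here as a
// recursive call so the sketch stays self-contained).
function runBatches(total: number, perRun: number[] = []): number[] {
  const batch = Math.min(total, BATCH_SIZE);
  perRun.push(batch);
  return total - batch > 0 ? runBatches(total - batch, perRun) : perRun;
}

// 2000 documents split into 4 runs of 500, each comfortably under budget.
console.log(runBatches(2000)); // [ 500, 500, 500, 500 ]
```

Each run stays small and predictable, so the total wall-clock time is about the same as one big run, without flirting with the limit.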
