Handling "Too many reads" Error During Mass Deletion in Convex
Hello Convex team! 👋
I'm encountering a read limit error while implementing parent-child record deletion in our application. Would appreciate guidance on the best approach.
The Error
The mutation fails with Convex's "Too many reads" error, meaning a single function execution read more than the 4096-document limit — each parent has hundreds of children, and all of them are read within one transaction.
Current Implementation
Here's a simplified version of our deletion code:
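A minimal sketch of the pattern in question (table and index names are hypothetical): everything runs inside one mutation, so every read counts against a single transaction's 4096-document budget.

```ts
// Hypothetical reconstruction: a single mutation that reads every child
// up front and deletes in batches of 50. All reads happen in one
// transaction, which is what trips the read limit.
import { mutation } from "./_generated/server";
import { v } from "convex/values";

export const deleteCompany = mutation({
  args: { companyId: v.id("companies") },
  handler: async (ctx, { companyId }) => {
    // collect() reads the entire result set into this transaction.
    const childrenA = await ctx.db
      .query("childrenA")
      .withIndex("by_company", (q) => q.eq("companyId", companyId))
      .collect();
    for (let i = 0; i < childrenA.length; i += 50) {
      await Promise.all(
        childrenA.slice(i, i + 50).map((doc) => ctx.db.delete(doc._id))
      );
    }
    // ...the same loop runs for type B children and associated records...
    await ctx.db.delete(companyId);
  },
});
```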
Specific Questions
1. Read Limit Management
- What's the recommended approach to stay within the 4096 read limit for large deletions?
- Should the deletion be split across multiple mutation calls?
- Is there a way to monitor read count during execution?
2. Batch Processing
- What's the optimal batch size for avoiding read limits?
- Is sequential processing better than parallel (Promise.all) for related tables?
- How should we handle pagination for large-scale deletions?
3. Best Practices
- What's the recommended pattern for deleting deeply nested data structures?
- Should we implement a job system for large deletions?
- Are there specific indexing strategies for optimization?
Current Scale
- Average parent record has:
  - 100-200 type A children
  - 50-100 type B children
  - 500-1000 associated records
- Each child might have 5-10 nested records
What We've Tried
1. Reduced batch size to 50
2. Processed tables sequentially instead of in parallel
3. Used indexed queries
Would greatly appreciate guidance on:
1. Most efficient way to structure these deletions
2. Best practices for handling read limits
3. Recommended Convex features for this use case
Thank you! 🙏
6 Replies
Thanks for posting in <#1088161997662724167>.
Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
You can avoid the limit by having a mutation recursively schedule itself to work through large deletions. Your current approach breaks the work into batches, but it all still happens in a single transaction, so every read counts against the same limit.
So something like:
await ctx.scheduler.runAfter(0, "deleteCompanyBatch", {});
?
That's how scheduling would generally look, yeah. And you'll want to use paginated queries and pass the cursor for the next page of deletions into the recursive delete mutation. So you may need multiple recursive delete mutations, all kicked off by the initial delete function.
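A sketch of what one such recursive mutation could look like, assuming a hypothetical employees table indexed by companyId. Each invocation is its own transaction with a fresh read budget, and the cursor carries progress between invocations:

```ts
// convex/companies.ts — sketch of a recursive batch-delete mutation
// (hypothetical table, index, and function names).
import { internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

export const deleteCompanyBatch = internalMutation({
  args: {
    companyId: v.id("companies"),
    cursor: v.union(v.string(), v.null()),
  },
  handler: async (ctx, { companyId, cursor }) => {
    // Read one page of children — small enough to stay well under the limit.
    const page = await ctx.db
      .query("employees")
      .withIndex("by_company", (q) => q.eq("companyId", companyId))
      .paginate({ numItems: 100, cursor });

    await Promise.all(page.page.map((doc) => ctx.db.delete(doc._id)));

    if (page.isDone) {
      // All children gone; the parent can be deleted in this final batch.
      await ctx.db.delete(companyId);
    } else {
      // Schedule the next batch as a separate transaction.
      await ctx.scheduler.runAfter(0, internal.companies.deleteCompanyBatch, {
        companyId,
        cursor: page.continueCursor,
      });
    }
  },
});
```

With multiple child tables, you could chain several such mutations, each scheduling the next table's deletion once its own pages are done.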
If anyone ever has the same problem, here's my simplified version:
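For completeness, a sketch of the public entry point that kicks off the recursive pattern above (hypothetical names):

```ts
// Sketch of the kick-off mutation (hypothetical names). It only schedules
// the first batch, so the client call returns immediately and the rest of
// the deletion runs in the background across many small transactions.
import { mutation } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

export const deleteCompany = mutation({
  args: { companyId: v.id("companies") },
  handler: async (ctx, { companyId }) => {
    await ctx.scheduler.runAfter(0, internal.companies.deleteCompanyBatch, {
      companyId,
      cursor: null,
    });
  },
});
```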
https://labs.convex.dev/convex-ents has this built in
Alternatively, https://github.com/get-convex/convex-helpers/tree/main/packages/convex-helpers#triggers can be used to simplify cascading deletes.
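A sketch of cascading deletes with convex-helpers triggers (hypothetical table names). Note that triggers run inside the same transaction as the original write, so very large cascades still need the batching approach above:

```ts
// Sketch using convex-helpers triggers (hypothetical table names).
import { mutation as rawMutation } from "./_generated/server";
import { DataModel } from "./_generated/dataModel";
import { Triggers } from "convex-helpers/server/triggers";
import { customCtx, customMutation } from "convex-helpers/server/customFunctions";

const triggers = new Triggers<DataModel>();

// Whenever a company is deleted, delete its employees in the same transaction.
triggers.register("companies", async (ctx, change) => {
  if (change.operation === "delete") {
    const employees = await ctx.db
      .query("employees")
      .withIndex("by_company", (q) => q.eq("companyId", change.id))
      .collect();
    await Promise.all(employees.map((e) => ctx.db.delete(e._id)));
  }
});

// Use this wrapped mutation everywhere instead of the raw one so the
// trigger fires on every delete.
export const mutation = customMutation(rawMutation, customCtx(triggers.wrapDB));
```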