Kenni
Kenni2mo ago

Handling "Too many reads" Error During Mass Deletion in Convex

Hello Convex team! 👋 I'm encountering a read limit error while implementing parent-child record deletion in our application. Would appreciate guidance on the best approach.

The Error
Failed to delete parent: ConvexError: [CONVEX M(parent:deleteParent)]
[Request ID: 4d1ba8b070820b62] Server Error
Uncaught ConvexError: Too many reads in a single function execution (limit: 4096).
Consider using smaller limits in your queries, paginating your queries, or using indexed queries with a selective index range expressions.
Current Implementation

Here's a simplified version of our deletion code:

export const deleteParent = mutationWithRLS({
  args: { parentId: v.id("parents") },
  handler: async (ctx, { parentId }) => {
    try {
      await Promise.all([
        // Delete first level children
        deleteInBatches({
          ctx,
          tableName: "childrenA",
          indexName: "by_parent",
          fieldName: "parentId",
          fieldValue: parentId,
        }),
        // Delete second level children
        deleteInBatches({
          ctx,
          tableName: "childrenB",
          indexName: "by_parent",
          fieldName: "parentId",
          fieldValue: parentId,
        }),
        // Delete associated records
        deleteInBatches({
          ctx,
          tableName: "associatedRecords",
          indexName: "by_parent",
          fieldName: "parentId",
          fieldValue: parentId,
        }),
      ]);

      await ctx.db.delete(parentId);
      return { success: true };
    } catch (error) {
      throw new ConvexError(`Failed to delete parent: ${error.message}`);
    }
  },
});

// Our batch deletion implementation
async function deleteInBatches({
  ctx,
  tableName,
  indexName,
  fieldName,
  fieldValue,
}) {
  const BATCH_SIZE = 100;
  let hasMore = true;

  while (hasMore) {
    const batch = await ctx.db
      .query(tableName)
      .withIndex(indexName, (q) => q.eq(fieldName, fieldValue))
      .take(BATCH_SIZE);

    if (batch.length === 0) break;

    for (const item of batch) {
      await ctx.db.delete(item._id);
    }

    hasMore = batch.length === BATCH_SIZE;
  }
}
Specific Questions

1. Read Limit Management
   - What's the recommended approach to stay within the 4096 read limit for large deletions?
   - Should the deletion be split across multiple mutation calls?
   - Is there a way to monitor read count during execution?
2. Batch Processing
   - What's the optimal batch size for avoiding read limits?
   - Is sequential processing better than parallel (Promise.all) for related tables?
   - How should we handle pagination for large-scale deletions?
3. Best Practices
   - What's the recommended pattern for deleting deeply nested data structures?
   - Should we implement a job system for large deletions?
   - Are there specific indexing strategies for optimization?

Current Scale
- Average parent record has:
  - 100-200 type A children
  - 50-100 type B children
  - 500-1000 associated records
- Each child might have 5-10 nested records

What We've Tried
1. Reduced batch size to 50
2. Processed tables sequentially instead of in parallel
3. Used indexed queries

Would greatly appreciate guidance on:
1. Most efficient way to structure these deletions
2. Best practices for handling read limits
3. Recommended Convex features for this use case

Thank you! 🙏
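(Rough math on that scale, taking the upper bounds: 200 + 100 children plus 1,000 associated records is about 1,300 documents, and 5-10 nested records for each of the ~300 children adds roughly another 3,000, so a single-transaction cascade can easily read past the 4,096-document limit.)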
6 Replies
Convex Bot
Convex Bot2mo ago
Thanks for posting in <#1088161997662724167>. Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets. - Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.) - Use search.convex.dev to search Docs, Stack, and Discord all at once. - Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI. - Avoid tagging staff unless specifically instructed. Thank you!
erquhart
erquhart2mo ago
You can avoid the limit by having a mutation recursively schedule itself to work through large batch deletions. Your current approach breaks things down by batch but is still all happening in a single transaction.
Kenni
KenniOP2mo ago
So something like: await ctx.scheduler.runAfter(0, "deleteCompanyBatch", {}); ?
erquhart
erquhart2mo ago
That's how scheduling would generally look, yeah. And you'll want to use paginated queries and pass the cursor for the next page of deletions into the recursive delete mutation. So you may need multiple recursive delete mutations, all kicked off by the initial delete function.
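A minimal sketch of that pattern, reusing the table and index names from the post above (the file name convex/cleanup.ts, the function name deleteChildBatch, and the 100-item page size are placeholders):

// convex/cleanup.ts -- one self-scheduling mutation per child table
import { v } from "convex/values";
import { internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";

export const deleteChildBatch = internalMutation({
  args: {
    parentId: v.id("parents"),
    cursor: v.union(v.string(), v.null()),
  },
  handler: async (ctx, { parentId, cursor }) => {
    // Each invocation reads only one page, so it stays well under the read limit.
    const { page, isDone, continueCursor } = await ctx.db
      .query("childrenA")
      .withIndex("by_parent", (q) => q.eq("parentId", parentId))
      .paginate({ numItems: 100, cursor });

    for (const doc of page) {
      await ctx.db.delete(doc._id);
    }

    // Recurse in a fresh transaction, passing the cursor for the next page.
    if (!isDone) {
      await ctx.scheduler.runAfter(0, internal.cleanup.deleteChildBatch, {
        parentId,
        cursor: continueCursor,
      });
    }
  },
});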
Kenni
KenniOP2mo ago
If anyone ever has the same problem, here's my simplified version:

// Main deletion function that initiates the process
export const deleteResource = mutation({
  args: { resourceId: v.id("resources") },
  handler: async (ctx, { resourceId }) => {
    const userId = await ctx.auth.getUserId(); // however you resolve the current user
    if (!userId) throw new Error("Authentication required");

    // Verify permissions
    const resource = await ctx.db.get(resourceId);
    if (!resource) throw new Error("Resource not found");
    if (!await hasPermission(ctx, userId, resourceId, "delete")) {
      throw new Error("Not authorized");
    }

    // Start batch deletion process
    await ctx.scheduler.runAfter(0, internal.batchDelete, {
      resourceId,
      userId,
      cursor: null,
      phase: "items"
    });

    return { success: true };
  }
});

// Internal batch deletion handler (exported so the scheduler can reference it via internal.*)
export const batchDelete = internalMutation({
  args: {
    resourceId: v.id("resources"),
    userId: v.string(),
    cursor: v.optional(v.string()),
    phase: v.union(v.literal("items"), v.literal("metadata"), v.literal("cleanup"))
  },
  handler: async (ctx, { resourceId, userId, cursor, phase }) => {
    const BATCH_SIZE = 100;

    try {
      switch (phase) {
        case "items": {
          const items = await getBatch(ctx, "items", resourceId, BATCH_SIZE, cursor);
          await deleteItems(ctx, items.page);

          if (items.isDone) {
            await nextPhase(ctx, resourceId, userId, "metadata");
          } else {
            // paginate() returns the next page's cursor as continueCursor
            await continueBatch(ctx, resourceId, userId, items.continueCursor, "items");
          }
          break;
        }

        case "metadata": {
          const metadata = await getBatch(ctx, "metadata", resourceId, BATCH_SIZE, cursor);
          await deleteMetadata(ctx, metadata.page);

          if (metadata.isDone) {
            await nextPhase(ctx, resourceId, userId, "cleanup");
          } else {
            await continueBatch(ctx, resourceId, userId, metadata.continueCursor, "metadata");
          }
          break;
        }

        case "cleanup": {
          await ctx.db.delete(resourceId);
          break;
        }
      }
    } catch (error) {
      await logError(ctx, resourceId, `Failed during ${phase}: ${error.message}`);
      throw error;
    }
  }
});

// Helper functions
async function getBatch(ctx, table, resourceId, size, cursor) {
  return await ctx.db
    .query(table)
    .withIndex("by_resource", q => q.eq("resourceId", resourceId))
    .paginate({ numItems: size, cursor: cursor ?? null });
}

async function nextPhase(ctx, resourceId, userId, phase) {
  await ctx.scheduler.runAfter(0, internal.batchDelete, {
    resourceId,
    userId,
    cursor: null,
    phase
  });
}

async function continueBatch(ctx, resourceId, userId, cursor, phase) {
  await ctx.scheduler.runAfter(0, internal.batchDelete, {
    resourceId,
    userId,
    cursor,
    phase
  });
}
Michal Srb
Michal Srb5w ago
Convex Ents: Relations, default values, unique fields and more for Convex
GitHub: convex-helpers/packages/convex-helpers at main · get-convex/convex-helpers. A collection of useful code to complement the official packages.
