makrdev · 3w ago

makrdev's Thread

I created triggers to cascade delete operations across relational tables (when a user is deleted, the delete cascades to the related tables). It fails when there are too many documents. How can I handle this situation?
7 Replies
makrdev (OP) · 3w ago
Solution:
// Imports assumed from the surrounding project layout
import { v } from "convex/values";
import { asyncMap } from "convex-helpers";
import { internal } from "./_generated/api";
import { internalMutation } from "./_generated/server";

// Cascade delete all files when a project is deleted
triggers.register("projects", async (ctx, change) => {
  if (change.operation === "delete") {
    // Get the first page of files for this project
    const { page, isDone, continueCursor } = await ctx.db
      .query("files")
      .withIndex("by_project_id", (q) => q.eq("projectId", change.id))
      .paginate({ numItems: 1000, cursor: null });

    // Delete this page of files
    await asyncMap(page, (file) => ctx.db.delete(file._id));

    // Schedule another batch if there are more pages.
    // Note: continueCursor is always a non-empty string, so check
    // isDone instead of the cursor's truthiness.
    if (!isDone) {
      await ctx.scheduler.runAfter(0, internal.helpers.batch.batchDeleteFiles, {
        projectId: change.id,
        cursor: continueCursor,
        numItems: 1000,
      });
    }
  }
});

export const batchDeleteFiles = internalMutation({
  args: {
    cursor: v.string(),
    projectId: v.id("projects"),
    numItems: v.number(),
  },
  handler: async (ctx, { projectId, cursor, numItems }) => {
    // Get the next page of files
    const { page, isDone, continueCursor } = await ctx.db
      .query("files")
      .withIndex("by_project_id", (q) => q.eq("projectId", projectId))
      .paginate({ numItems, cursor });

    // Delete this page of files
    await asyncMap(page, (file) => ctx.db.delete(file._id));

    // Reschedule until the query is exhausted
    if (!isDone) {
      await ctx.scheduler.runAfter(0, internal.helpers.batch.batchDeleteFiles, {
        projectId,
        cursor: continueCursor,
        numItems,
      });
    }
  },
});
lee · 3w ago
Check out the Convex Ents documentation and source code, which describe how you can split cascading deletes across multiple mutations: https://labs.convex.dev/convex-ents/schema/deletes
makrdev (OP) · 3w ago
@lee Hey Lee, what do you think about the solution I shared above?
lee · 3w ago
That should work, yep! The only issue would be if you want the deletions to appear transactional, which you can do with the pattern in Convex Ents. But if you just want everything deleted eventually, your solution looks great.
makrdev (OP) · 3w ago
There was an error in one of the cascade triggers, and it prevented all of them from deleting the other resources. Is that what you mean by 'transactional'?
lee · 3w ago
Oh, you can probably fix that by doing the pagination entirely within batchDeleteFiles. The trigger should just do runAfter(0, ...batchDeleteFiles, { projectId: change.id, cursor: null, numItems: 1000 }).
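The restructuring lee describes can be sketched with an in-memory stand-in for the database. Everything below (the table shape, numeric ids as cursors, the paginate helper) is illustrative rather than the real Convex API: in the actual code the recursive call would be ctx.scheduler.runAfter(0, ...), and the cursor arg validator would need to accept null (e.g. v.union(v.string(), v.null())) so the trigger can pass cursor: null.

```typescript
// Sketch: the trigger body shrinks to one scheduling call, and
// batchDeleteFiles paginates itself to completion.
type FileDoc = { id: number; projectId: string };

// Simulated "files" table: 25 files belonging to project "p1".
let files: FileDoc[] = Array.from({ length: 25 }, (_, i) => ({
  id: i,
  projectId: "p1",
}));

// Simulated cursor pagination, mirroring paginate()'s
// { page, isDone, continueCursor } result shape. The cursor is the
// last-seen id; null means "start from the beginning".
function paginate(projectId: string, cursor: number | null, numItems: number) {
  const page = files
    .filter((f) => f.projectId === projectId && (cursor === null || f.id > cursor))
    .slice(0, numItems);
  const isDone = page.length < numItems;
  const continueCursor = page.length > 0 ? page[page.length - 1].id : cursor;
  return { page, isDone, continueCursor };
}

// Batched delete: remove one page, then "reschedule" itself (here a
// plain recursive call standing in for ctx.scheduler.runAfter(0, ...))
// until the query is exhausted.
function batchDeleteFiles(
  projectId: string,
  cursor: number | null,
  numItems: number,
): void {
  const { page, isDone, continueCursor } = paginate(projectId, cursor, numItems);
  for (const f of page) {
    files = files.filter((g) => g.id !== f.id); // simulate ctx.db.delete
  }
  if (!isDone) {
    batchDeleteFiles(projectId, continueCursor, numItems);
  }
}

// The trigger now only kicks off the first batch:
batchDeleteFiles("p1", null, 10);
console.log(files.length); // prints 0, all 25 files deleted across 3 batches
```

Because the trigger no longer deletes anything itself, a failure while paging through one table can't abort the deletes already scheduled for the others; each batch runs in its own mutation.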
