holden
holden•16mo ago

Hi! I'm trying to create a "Delete user" internalMutation I can run from the dashboard by passing in a userId to have it delete docs across multiple tables owned by that user. I tried using the migration helper and calling it on multiple tables from within my mutation, but get this error:
Uncaught Error: This query or mutation function ran multiple paginated queries. Convex only supports a single paginated query in each function.
at async <anonymous> (../../convex/helpers/migrations.ts:78:20)
at async <anonymous> (../../convex/helpers/deleteUser.ts:38:4)
Is there a different recommended way to do a migration affecting multiple tables?
12 Replies
lee
lee•16mo ago
hi! we're working on this behavior so what you attempted might soon be possible. in the meantime i would recommend deleting from each table in separate mutations, and using scheduler.runAfter(0, ...) to kick off these sub-mutations from a single mutation
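For illustration, the fan-out pattern lee describes could be sketched like this. This is a plain-TypeScript simulation, not real Convex code: the in-memory `db`, the `runAfterZero` stand-in, and the `deleteUserFromTable` sub-mutation are all made up to show the shape. In a real app, the entry point would be an internalMutation calling ctx.scheduler.runAfter(0, ...) once per table.

```typescript
// Simulated fan-out: one entry-point "mutation" schedules a separate
// "sub-mutation" per table, so no single function runs more than one
// paginated query. All names and data here are illustrative.

type Doc = { _id: string; owner: string };
const db: Record<string, Doc[]> = {
  datasets: [{ _id: "d1", owner: "u1" }, { _id: "d2", owner: "u2" }],
  preferences: [{ _id: "p1", owner: "u1" }],
};

// Stand-in for ctx.scheduler.runAfter(0, fn, args): defer the work
// instead of doing it inside the current "mutation".
const scheduled: Array<() => void> = [];
function runAfterZero(fn: () => void) {
  scheduled.push(fn);
}

// Each "sub-mutation" deletes one user's docs from a single table.
function deleteUserFromTable(table: string, userId: string) {
  db[table] = db[table].filter((doc) => doc.owner !== userId);
}

// The entry-point "mutation": just schedules one sub-mutation per table.
function deleteUser(userId: string) {
  for (const table of Object.keys(db)) {
    runAfterZero(() => deleteUserFromTable(table, userId));
  }
}

deleteUser("u1");
scheduled.forEach((fn) => fn()); // the real scheduler would do this for us

console.log(db.datasets.length, db.preferences.length); // 1 0
```

The key point is that `deleteUser` itself touches no documents; each table's deletion runs in its own (scheduled) mutation, which is what avoids the "multiple paginated queries" error.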
lee
lee•16mo ago
GitHub
ai-town/convex/testing.ts at 60433ec3b8dc25bd4469e9f387670c37409f80...
A MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. - a16z-infra/ai-town
holden
holdenOP•16mo ago
Thanks for the pointer, makes sense. But it looks like there's no way to pass an arg (like userId) to the migration helper? I was using a closure to do this (code below). I guess I could modify that helper to accept args or write my own pagination logic in the mutations?
/**
 * Delete a user, and all objects they own.
 */
export const deleteUser = internalMutation(
  async (ctx, { email, dryRun }: { email: string; dryRun: boolean }) => {
    // Lookup a user by email (throws if not unique)
    const user = await ctx.db
      .query("users")
      .withIndex("by_email", (q) => q.eq("email", email))
      .unique();

    if (!user) {
      console.log(`User not found: ${email}`);
      return;
    }

    /**
     * Delete all docs in a given table owned by this user.
     */
    const deleteUserDocs = (table: "datasets" | "preferences" | "projects" | "themes") =>
      migration({
        table,
        migrateDoc: async ({ db }, doc) => {
          if (doc.owner === user._id) {
            console.log(`Deleting from ${table}: ${doc._id}`);
            await db.delete(doc._id);
          }
        },
      });

    await deleteUserDocs("datasets")(ctx, { dryRun });
    await deleteUserDocs("preferences")(ctx, { dryRun });
    await deleteUserDocs("projects")(ctx, { dryRun });
    await deleteUserDocs("themes")(ctx, { dryRun });

    console.log("Deleting user");
    await ctx.db.delete(user._id);

    if (dryRun) {
      throw new Error(`Dry Run: exiting`);
    }
  }
);
ian
ian•16mo ago
I would copy or extend the helper. You should also note that just calling the migration helper directly will only run it for one batch. You could modify it to schedule itself for the next batch recursively, or use the runMutation action in the migration helper. An example of the recursive approach is here: https://github.com/a16z-infra/ai-town/blob/660b75ae494ef7e03fe92f1fd595abc24bcaa74a/convex/crons.ts#L68 and it's called from here: https://github.com/a16z-infra/ai-town/blob/660b75ae494ef7e03fe92f1fd595abc24bcaa74a/convex/testing.ts#L236
GitHub
ai-town/convex/testing.ts at 660b75ae494ef7e03fe92f1fd595abc24bcaa7...
A MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. - a16z-infra/ai-town
GitHub
ai-town/convex/crons.ts at 660b75ae494ef7e03fe92f1fd595abc24bcaa74a...
A MIT-licensed, deployable starter kit for building and customizing your own version of AI town - a virtual town where AI characters live, chat and socialize. - a16z-infra/ai-town
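As a rough illustration of the recursive approach ian links to, here is the batching logic simulated in plain TypeScript (no Convex runtime). In real code, `deleteBatch` would be an internalMutation that, instead of recursing directly, calls ctx.scheduler.runAfter(0, ...) with the cursor for the next page; the batch size, cursor scheme, and data are all made up for the sketch.

```typescript
// Simulated recursive batching: each run deletes up to BATCH matching
// docs starting at `cursor`, then recurses (standing in for scheduling
// a follow-up mutation with the next cursor).

type Doc = { _id: number; owner: string };
const table: Doc[] = Array.from({ length: 10 }, (_, i) => ({
  _id: i,
  owner: i % 2 === 0 ? "u1" : "u2",
}));

const BATCH = 3;
let batchesRun = 0;

function deleteBatch(userId: string, cursor: number) {
  batchesRun++;
  // "Paginate": take the next page of docs at or past the cursor.
  const page = table.filter((d) => d._id >= cursor).slice(0, BATCH);
  for (const doc of page) {
    if (doc.owner === userId) {
      table.splice(table.indexOf(doc), 1);
    }
  }
  // A full page means there may be more; a short page means we're done.
  if (page.length === BATCH) {
    const nextCursor = page[page.length - 1]._id + 1;
    deleteBatch(userId, nextCursor); // scheduler.runAfter(0, ...) in real code
  }
}

deleteBatch("u1", 0);
console.log(table.map((d) => d._id)); // [1, 3, 5, 7, 9]
```

Because each invocation handles only one page, each real mutation stays small and runs a single paginated query, while the cursor threads the work across invocations.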
holden
holdenOP•16mo ago
Ok thanks for the pointers, will try that! Running a migration that affects multiple tables seems like a common use case, so anything you have planned (either in the platform or a helper) to make that easier would likely be very useful! Overall liking Convex - ~80% of the time, I feel like it's nicer than using SQL. But there are these 20% cases when I feel like something is easy with SQL and hard with Convex. Hopefully that % goes down over time as the platform matures 🙂
ian
ian•16mo ago
Can I ask a bit more about what you mean w.r.t. multiple tables?
- If you are iterating over one table, and for each document you might be updating other tables, that should already be possible with the migration helpers.
- If you want to run a few migrations at the same time, each iterating over a different table, that's also possible - the above example kicks off a mutation for each table in parallel to delete all items in batches.
Is there a use case I'm missing where you want to fetch the next N documents from table A and the next M documents from table B and do something with both in the same mutation? Or is it annoying to have to run multiple migrations to iterate over multiple tables, and writing code like in ai-town feels too manual? And please keep letting us know about the 20% that we're missing. Some things we may already know about, but it's always good to hear what the roughest edges are.
holden
holdenOP•16mo ago
Sure! What I want here is just to create an internal "admin" action that deletes all records across 4 tables where ownerId = {value}, and then deletes the user record. I imagine I'll have other admin actions like this over time that may modify multiple tables, ideally in one transaction so it either succeeds or fails. It looks like it's actually pretty easy to do in a single mutation if I ignore pagination, so maybe I'll just start with that and not optimize any further until it causes a problem (fine if the action is slow, it's just for me).

I think what feels "simple" to me here is being able to define one function that does some action (like "delete a user"). Once I have to start breaking things up into multiple mutations and thinking about batches or scheduling, I feel like anything is doable but I've fallen out of the "pit of success" 🙂 The main value prop (for me) of a service like Convex or Firebase is that I can spend as much of my time as possible focusing on my UX/frontend, and have the backend "just work" for me. It's when I feel like I'm getting sucked into more complex backend-y work that I occasionally miss SQL (or Firebase, just from more past experience with it).

Since you asked, the other thing I ran into where I really felt this was when I had a slider sending mutations too fast (which made my UI laggy, and ran up my usage a lot even with just me testing). I looked into throttling/debouncing, optimistic updates, single flighting, etc., but it was tricky to figure out. I ended up duplicating local state in zustand and throttling updates to Convex, but duplicating the state felt error prone and I don't feel confident I did it correctly. What I wanted there was something like react-query/useSWR/Replicache/ApolloClient, where someone smarter than me writes that local vs. remote state logic, so I can have my UI write to "local" state and somehow not flood the backend with too many requests.

A more positive example: I loved the usePaginatedQuery hook! Pagination is always annoying, and it was GREAT how simple that was to do with Convex in React! Another example like this that FE devs like me often want but is hard to build yourself is undo/redo. Maybe outside the scope of Convex, but when Liveblocks had hooks that made that trivially easy to implement, I was thrilled! I'll keep sharing feedback, thanks for listening!
ian
ian•16mo ago
Gotcha - that makes sense. And yes, this is a great use case for having multiple paginated queries at once, which we're looking at supporting.

Tactically, I agree with your assessment. It depends how many documents you expect each user to have, but if it's only on the order of ~hundreds, then you could just query them all and delete them all at once in one mutation. That would satisfy your transaction goal and be simpler. You could even query them with .take(1000) to limit it to 1k entries, if you're worried about that. If there are more, it could kick off a follow-up mutation that paginates over the rest of the items (and that mutation could be generic - just pass in a table and a userId, so you could kick off one for each table).

I agree we should have a good answer for making this simple. One constraint we're bumping up against is our limits on how much a transaction should be able to do. It's useful in SQL that you can do very large, slow things in big statements, but it also exposes you to issues like latency, read/write locking, etc. (which admittedly often only show up later at scale). When transactions get big enough, they'd ideally happen either in batches, or in some environment isolated from live traffic. I think the goal is that the vast majority of work should be of a size that fits in a transaction, and the need for batching is the exception.

Mass-updates (insert, replace, delete) are a known rough edge we need to think more about. Ingress tools and migration management are specific strategies we might pursue to handle those in a more targeted way to make the "default" use cases easy.
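The single-mutation approach ian suggests could look roughly like this, again simulated in plain TypeScript. In real Convex code this would be one internalMutation using ctx.db.query(table).withIndex(...).take(1000) per table, so everything happens in one transaction; the table names, data, and LIMIT here are illustrative.

```typescript
// Simulated single-"mutation" delete: gather up to LIMIT owned docs
// per table, delete them, then delete the user record - all in one
// function, which in Convex would mean one atomic transaction.

type Doc = { _id: string; owner: string };
const tables: Record<string, Doc[]> = {
  datasets: [{ _id: "d1", owner: "u1" }],
  projects: [{ _id: "pr1", owner: "u1" }, { _id: "pr2", owner: "u2" }],
};
const users = [{ _id: "u1", email: "a@b.c" }, { _id: "u2", email: "x@y.z" }];

const LIMIT = 1000; // analogous to .take(1000)

function deleteUser(userId: string) {
  for (const name of Object.keys(tables)) {
    // Stand-in for query(...).withIndex("by_owner", ...).take(LIMIT)
    const owned = tables[name]
      .filter((d) => d.owner === userId)
      .slice(0, LIMIT);
    tables[name] = tables[name].filter((d) => !owned.includes(d));
  }
  const idx = users.findIndex((u) => u._id === userId);
  if (idx !== -1) users.splice(idx, 1);
}

deleteUser("u1");
console.log(tables.projects.length, users.length); // 1 1
```

The trade-off is exactly as described in the thread: this is atomic and simple, but only safe while each user owns few enough documents to fit comfortably in one transaction.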
ian
ian•16mo ago
I'm assuming you ran across my (almost a year old) article https://stack.convex.dev/throttling-requests-by-single-flighting trying to make single-flighting easy with a hook, but I agree the ergonomics weren't the best. And ideally this would be in a library, not copy-pasted code. It also didn't do optimistic updates automatically, which might be where you'd still be wanting to duplicate state locally. If you have an API in mind for modifying useQuery or useMutation with options like your favorite of useSWR/react-query/etc., we could start another #support-community thread with it as a feature request and see what others think.
Throttling Requests by Single-Flighting
For write-heavy applications, use single flighting to dynamically throttle requests. See how we implement this with React hooks for Convex.
ian
ian•16mo ago
Thanks! This is great feedback. I've been thinking about what a Local-first library might look like on top of Convex that showcases how to do a more journal-based event log with materialized state, which could make undo / redo easy to implement. It's just hard to do fully generically in a way that's also ergonomic: most of the time you just want to read and write regular tables, but sometimes you want to capture everything as a delta with ways of materializing the current state on-demand.
holden
holdenOP•16mo ago
Sounds good, thanks for your response! Yeah, I read your single-flight article a couple of times. It was informative, but still hard (for me at least) to figure out the best approach. I couldn't use the optimistic updates feature, because I wanted my UI to be responsive, so I needed to duplicate local state somehow, which was the tricky part. Adding a lodash throttle was easy once I had the rest figured out, but writing my own sync logic between local/Convex state felt brittle.

I don't have any strong opinions about an API for this. I've used both react-query (more features) and useSWR (simpler) and both are fine. If you had any sort of local cache you could read/write to and have it sync with the server in the background, it would probably work for me. Or if there were a way to use Convex with one of these libraries, that'd also be fine (but not sure if/how that could work). The two things I want are: 1) instant UI responsiveness, 2) don't flood the server with requests (make it easy to throttle).

And if you come up with anything (library or helper) for undo/redo, I'd definitely be interested! My dream solution is basically hooks like these I can use in components: https://liveblocks.io/docs/api-reference/liveblocks-react#useUndo

Anyway, thanks for the help!
API Reference - @liveblocks/react | Liveblocks documentation
API Reference for the @liveblocks/react package
ian
ian•16mo ago
I reviewed the article and realized the code snippet for optimistic updates seemed backwards: it wouldn't update local state immediately. So I updated it:
const myMutation = useMutation(api.my.mutation);
const tryUpdate = useSingleFlight(myMutation);
const withLocalStore = useCallback((data) => {
  setLocalState(data);
  return tryUpdate(data);
}, [setLocalState, tryUpdate]);
...
With this, you should be able to call it and have it write to local state immediately, while it continuously syncs the most up-to-date version in the background. Is that closer to what you wanted?
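To make the single-flight idea concrete outside of React, here is a rough, framework-free sketch of what a wrapper like the article's useSingleFlight might do: at most one call to the wrapped function is in flight, calls made meanwhile just overwrite the pending value, and the newest value is sent when the current flight lands. This is an illustration of the idea, not the article's exact hook.

```typescript
// Single-flight wrapper: coalesces rapid calls so the underlying
// "mutation" sees only the first value and the latest value.
function singleFlight<T>(fn: (arg: T) => Promise<void>) {
  let inFlight = false;
  let pending: { arg: T } | null = null;

  return async function send(arg: T): Promise<void> {
    if (inFlight) {
      pending = { arg }; // coalesce: newest value wins
      return;
    }
    inFlight = true;
    try {
      await fn(arg);
    } finally {
      inFlight = false;
    }
    if (pending) {
      const next = pending.arg;
      pending = null;
      await send(next); // one follow-up flight with the latest value
    }
  };
}

// Demo: a fake "mutation" that records what actually reached the server.
const sent: number[] = [];
const mutate = (n: number) =>
  new Promise<void>((resolve) => {
    sent.push(n);
    setTimeout(resolve, 0);
  });

const throttled = singleFlight(mutate);
// Fire 5 rapid updates; only the first and the latest reach the server.
await Promise.all([1, 2, 3, 4, 5].map((n) => throttled(n)));
console.log(sent); // [1, 5]
```

Pairing something like this with an immediate local-state write (as in the snippet above) gives both goals from the thread: instant UI responsiveness, without flooding the server.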