Aggregate (Convex Component) fails if multiple documents are added at once
If I add multiple documents at once, I get the following error:
Documents read from or written to the "btreeNode" table changed while this mutation was being run and on every subsequent retry. Another call to this mutation changed the document with ID "j97257gsj6gkr786bghe6nd2zn772qjw". See https://docs.convex.dev/error#1
Adding them one by one does not trigger this error.
Are you correctly awaiting the ‘insert’?
Also, what loop are you using to run through all the inserts?
yes i am. i tried it using insert directly and also using triggers
can you show me the code bit where this is happening?
import { Triggers } from "convex-helpers/server/triggers";
import { DataModel } from "../_generated/dataModel";
import {
  balanceAggregate,
  liquidNetWorthAggregate,
  netWorthAggregate,
  tokenBalanceAggregate,
  tokenNetWorthAggregate,
  usdcBalanceAggregate,
  usdtBalanceAggregate,
} from "../aggregates/wallet";
const triggers = new Triggers<DataModel>();
// Wallet Aggregate Triggers
triggers.register("wallets", balanceAggregate.trigger());
triggers.register("wallets", usdcBalanceAggregate.trigger());
triggers.register("wallets", usdtBalanceAggregate.trigger());
triggers.register("wallets", tokenBalanceAggregate.trigger());
triggers.register("wallets", liquidNetWorthAggregate.trigger());
triggers.register("wallets", tokenNetWorthAggregate.trigger());
triggers.register("wallets", netWorthAggregate.trigger());
export const wrapDB = triggers.wrapDB;
whenever i insert multiple documents into wallets, the triggers fire and that causes the error (it does not matter what the update is)
Can you show the code for adding multiple documents at once, and the code for doing them one at a time? (Is there a Promise.all client-side, or in an action, or in a mutation?) OCC errors are possible when multiple mutations write the same data at the same time. See https://www.convex.dev/components/aggregate#read-dependencies-and-writes
Incidentally, this looks problematic, although probably not causing your problem:
export const wrapDB = triggers.wrapDB;
should be export const wrapDB = (ctx) => triggers.wrapDB(ctx);
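For reference, wrapDB is usually wired into a custom mutation wrapper so the triggers run on every write. A minimal sketch, assuming the standard convex-helpers customFunctions setup and the same relative paths as above:
// Wrap the generated mutation builders so every mutation gets a db that
// fires the registered aggregate triggers on insert/patch/replace/delete.
import {
  mutation as rawMutation,
  internalMutation as rawInternalMutation,
} from "../_generated/server";
import {
  customCtx,
  customMutation,
} from "convex-helpers/server/customFunctions";

export const mutation = customMutation(
  rawMutation,
  customCtx((ctx) => triggers.wrapDB(ctx)),
);
export const internalMutation = customMutation(
  rawInternalMutation,
  customCtx((ctx) => triggers.wrapDB(ctx)),
);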
await sequentialPromiseAll(
  addresses.map(async (address) => {
    const entry = await ctx.runQuery(
      internal.functions.wallet._getByUniqueFields,
      {
        input: {
          address,
          chain: "solana",
        },
      },
    );
    if (!entry) {
      throw new Error("Wallet not found");
    }
    await ctx.runAction(api.functions.wallet.updateWallet, {
      input: {
        address,
      },
    });
  }),
);
it was Promise.all, but Promise.all basically could barely run because everything was failing. i switched to sequential (basically awaiting each one), and it still fails, but less.
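Note that addresses.map(async ...) starts every call up front, so even if sequentialPromiseAll awaits the resulting promises one at a time, the queries and actions are already running concurrently. A strictly sequential version (sketch, same function names as the snippet above) would await each address in a plain for...of loop:
// Strictly sequential: each wallet is looked up and updated before the next
// one starts, so only one write hits the aggregate at a time.
for (const address of addresses) {
  const entry = await ctx.runQuery(
    internal.functions.wallet._getByUniqueFields,
    { input: { address, chain: "solana" } },
  );
  if (!entry) {
    throw new Error("Wallet not found");
  }
  await ctx.runAction(api.functions.wallet.updateWallet, {
    input: { address },
  });
}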
i have tested and confirmed that this only happens if i send multiple transactions at once; if i space them out it works. but for my project i cannot afford to purposely slow things down so much, as i have a high volume of data.
is aggregate just not meant to handle high volume? if so, i would rather accept this early and work around it (just don't use aggregate). i have tried many ways to improve the situation by spacing out the transactions as much as possible, but there seems to be no way to get it to work reliably, especially with a high volume of transactions.
See the description of read and write dependencies. It can handle high volume in certain circumstances, like if you have a lot of data and a high maxNodeSize, or if you use Namespaces, or if you do multiple writes within a single mutation.
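For example, if the wallet totals are only ever needed per chain (or per user), a namespaced aggregate keeps writes in different namespaces from conflicting with each other. A hypothetical sketch of one of the aggregates, assuming the component is mounted as components.aggregate and that wallets have chain and balance fields:
import { TableAggregate } from "@convex-dev/aggregate";
import { components } from "../_generated/api";
import { DataModel } from "../_generated/dataModel";

// Hypothetical: partition the balance aggregate by chain so writes to Solana
// wallets never contend with writes to wallets on other chains.
export const balanceAggregate = new TableAggregate<{
  Namespace: string;
  Key: number;
  DataModel: DataModel;
  TableName: "wallets";
}>(components.aggregate, {
  namespace: (doc) => doc.chain,
  sortKey: (doc) => doc.balance,
});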
The code you pasted looks like it's parallelizing queries and actions. But aggregate writes can only happen in mutations. I recommend you do large batches of aggregate writes in a single mutation, and avoid parallelizing
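In other words, instead of a Promise.all of per-wallet runQuery/runAction calls, collect the updates first and hand the whole batch to one mutation. Rough sketch from the action side (buildWalletUpdate and the _updateMany path are hypothetical):
// Gather all the per-wallet updates (reads / external calls only, no writes),
// then apply them in a single mutation so the aggregate's btreeNode documents
// are only written by one transaction.
const updates = await Promise.all(
  addresses.map((address) => buildWalletUpdate(ctx, address)),
);
await ctx.runMutation(internal.functions.wallet._updateMany, {
  input: updates,
});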
i just converted my code to do all writes in a single mutation, and it still fails.
export const _updateMany = internalMutation({
  args: {
    input: zc(z.array(partialSchemaWithId)),
  },
  handler: async (ctx, args) => {
    const { input } = args;
    const updatedEntries = await Promise.all(
      input.map(async (inputEntry) => {
        const { _id, ...entryUpdate } = inputEntry;
        const existingEntry = await ctx.skipRules.table(TABLE_NAME).getX(_id);
        const updatedEntry = { ...existingEntry, ...entryUpdate };
        if (isEqual({ ...existingEntry }, updatedEntry)) {
          return existingEntry;
        }
        await existingEntry.replace(updatedEntry);
        const entry = await ctx.skipRules.table(TABLE_NAME).getX(_id);
        return entry;
      }),
    );
    return updatedEntries;
  },
});
and you're only calling _updateMany once?
yes
ok wait a moment
i think the new convex function was not deployed properly
let me double check
might have fixed it. i will report back the next time i do a high volume write (all in one transaction)
unfortunately...
Uncaught Error: Uncaught Error: Too many bytes read in a single function execution (limit: 8388608 bytes). Consider using smaller limits in your queries, paginating your queries, or using indexed queries with a selective index range expressions.
i will try to find some reasonable middle ground for batching
Sounds good. You should be able to split it across several mutations, as long as only one runs at a time
(in general, aggregate mutations can run in parallel, but not very many and there might need to be certain tweaks to avoid conflicts)
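Concretely, one way to do that is to chunk the batch and await each mutation before starting the next (sketch; the chunk size is a guess to tune against the 8 MiB read limit, and the _updateMany path is assumed):
// Run the chunks strictly one after another so only one aggregate write is
// in flight at a time and each mutation stays under the read limit.
const CHUNK_SIZE = 100; // hypothetical; tune against the 8 MiB limit
for (let i = 0; i < updates.length; i += CHUNK_SIZE) {
  await ctx.runMutation(internal.functions.wallet._updateMany, {
    input: updates.slice(i, i + CHUNK_SIZE),
  });
}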
i'm not sure it is worth it if i can't send many in parallel reliably, since i can get the aggregate by using a query each time too (as long as it doesn't exceed 16k documents), and the risk that data might be lost because my aggregate failed makes me very cautious about adopting it
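(That query-only fallback would look roughly like this; the balance field name is hypothetical, and it only works while the scan stays within the per-function read limits:)
import { query } from "../_generated/server";

// Hypothetical fallback: recompute the total on demand instead of maintaining
// an aggregate. Reads every wallet, so it only scales up to the read limits.
export const totalBalance = query({
  args: {},
  handler: async (ctx) => {
    const wallets = await ctx.db.query("wallets").collect();
    return wallets.reduce((sum, w) => sum + (w.balance ?? 0), 0);
  },
});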
That's fair. If you want guarantees that data won't be lost, you can use https://www.convex.dev/components/retrier or schedule the mutations directly. Scheduled mutations are guaranteed to run exactly once (but if they have a lot of conflicts, they might take a while)
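Scheduling directly could look like a self-rescheduling mutation: process one chunk, then schedule the next from inside the handler, so the chunks run one at a time and the tail of the batch is never lost. A sketch reusing zc, partialSchemaWithId, and TABLE_NAME from the _updateMany snippet above (the _updateManyChunked name and path are hypothetical):
export const _updateManyChunked = internalMutation({
  args: {
    input: zc(z.array(partialSchemaWithId)),
  },
  handler: async (ctx, { input }) => {
    const CHUNK_SIZE = 100; // hypothetical; tune against the read limit
    const chunk = input.slice(0, CHUNK_SIZE);
    const rest = input.slice(CHUNK_SIZE);
    // Apply this chunk the same way _updateMany does above.
    await Promise.all(
      chunk.map(async ({ _id, ...entryUpdate }) => {
        const existingEntry = await ctx.skipRules.table(TABLE_NAME).getX(_id);
        await existingEntry.replace({ ...existingEntry, ...entryUpdate });
      }),
    );
    if (rest.length > 0) {
      // Scheduled mutations run exactly once, so the rest of the batch is not
      // lost even if it takes a while under contention.
      await ctx.scheduler.runAfter(
        0,
        internal.functions.wallet._updateManyChunked,
        { input: rest },
      );
    }
  },
});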