Chris @ Terrain · 2mo ago

Scheduled jobs with large payloads

I receive data at an endpoint. I need to insert this data quickly and respond with a 200. Later, I'll do some additional processing and then write the original data to storage. That helps me keep large documents out of the DB. The trouble is that Convex has a per-document limit of 1 MB, and my payloads can reach 2-3 MB. An alternative is to write to storage instead of the database, then enqueue an action that retrieves the file from storage and writes a subset of the data to the database. Does this all sound like the correct way to handle things? Is writing to storage fast, or does it depend on the document size? It's important that my API endpoint responds quickly to the caller.
2 Replies
Convex Bot · 2mo ago
Thanks for posting in <#1088161997662724167>. Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
jamwt · 2mo ago
Yeah, you can write to storage pretty quickly; Convex storage is S3. Then yeah, having a background action that processes the stored file and extracts more structured metadata to put in the DB is a good architecture.
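
The flow described above might be sketched roughly like this in Convex. This is a non-authoritative sketch: the route path `/ingest`, the function names `handlePayload` and `saveSummary`, and the `summary` field are all hypothetical names invented for illustration; the Convex calls (`ctx.storage.store`, `ctx.scheduler.runAfter`, `ctx.storage.get`, `ctx.runMutation`) are real APIs, but check the current docs for exact signatures.

```typescript
// convex/http.ts — HTTP endpoint: persist the raw payload to file
// storage (no 1 MB document limit), enqueue background work, and
// return 200 immediately.
import { httpRouter } from "convex/server";
import { httpAction } from "./_generated/server";
import { internal } from "./_generated/api";

const http = httpRouter();

http.route({
  path: "/ingest", // hypothetical path
  method: "POST",
  handler: httpAction(async (ctx, request) => {
    // Write the full 2-3 MB payload straight to storage.
    const blob = await request.blob();
    const storageId = await ctx.storage.store(blob);

    // Schedule processing to run after the response is sent.
    await ctx.scheduler.runAfter(0, internal.process.handlePayload, {
      storageId,
    });
    return new Response(null, { status: 200 });
  }),
});

export default http;
```

```typescript
// convex/process.ts — background action: read the stored file back,
// extract the small structured subset, and write that to the DB.
import { internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

export const handlePayload = internalAction({
  args: { storageId: v.id("_storage") },
  handler: async (ctx, { storageId }) => {
    const blob = await ctx.storage.get(storageId);
    if (blob === null) return; // file was deleted in the meantime

    const data = JSON.parse(await blob.text());

    // Persist only the subset that belongs in the database, keeping a
    // reference back to the full document in storage.
    await ctx.runMutation(internal.data.saveSummary, {
      storageId,
      summary: data.summary, // hypothetical extracted field
    });
  },
});
```

The key property is that the HTTP handler does only one storage write and one scheduler call before responding, so latency to the caller stays low regardless of how long the downstream processing takes.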