Running into "Your request couldn't be completed. Try again later" when loading a ~10 MB PDF
I'm storing PDFs in R2 but then have to load them in Convex for further processing via Gemini. I first have to fetch the bytes from the R2 URL, load them into a PDFDocument via pdf-lib for some pre-analysis, and then pass the result to Gemini. I could circumvent this by creating a Cloudflare Worker that does all of this processing on upload, but that route would need a lot of refactoring, so before I take it: is there a workaround? Am I doing something wrong? FWIW, I'm already using the Node runtime to make sure I have enough RAM. Also, I'm on the Free plan in case my requests are being throttled. Would appreciate help 🙏
8 Replies
Thanks for posting in <#1088161997662724167>.
Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
I'm fine with upgrading as long as that's typical behavior, but I'd like to know first if that's the case.
RAM is the same for Free and Pro: https://docs.convex.dev/production/state/limits#functions
It does sound like whatever processing is being done takes more than what's available in an action; outsourcing to a worker makes sense
I have to check what I'm doing; I can't just eat up 512 MB on an 11 MB PDF. Too bad there are no logs to actually tell what's going on. "Your request couldn't be completed" doesn't really help. Before I bite that bullet I'll run more checks, because I'm pretty sure my requests are being throttled. The same code was working without issues until yesterday; today I kept getting the same error the entire day. It worked a couple of times and that's all. Convex is becoming a black box at this point.
Update: did some benchmarking. I was at 166 MB when the process got killed, i.e. when I got hit with the "Your request couldn't be completed. Try again later." error. I'll consider offloading this to Cloudflare and will mark this as resolved.
The "request couldn't be completed" message is super unhelpful, agree
So you are able to get some logs out before that message, right?
@erquhart yeah, the only logs I could get were the memory footprints I was logging until I hit this error. See here:
Gotcha. I wonder how much of the RAM is actually available vs the total
Also wonder if 166 MB was just the last RAM usage it could report; maybe it started using more in whatever step comes after PDF_LOADED and ran out of memory before it could report again
The step afterwards is to call Gemini with the base64-encoded PDF content (I know, it's crazy). I'll switch to the Gemini File API to upload directly, but I still feel something's odd. Also, I just got hit with a 502.