naveedehmad · 2mo ago

Running into "Your request couldn't be completed. Try again later" when loading a ~10MB PDF

I'm storing PDFs in R2 but then have to load them in Convex for further processing via Gemini. I first have to fetch the bytes from the R2 URL, load them into a PDFDocument via pdf-lib for some pre-analysis, and then pass the PDF to Gemini. I could circumvent this by creating a Cloudflare Worker that does all of this processing on upload, but that route would need a lot of refactoring, so before I take it: is there a workaround? Am I doing something wrong? FWIW, I'm already using the Node runtime to make sure I have enough RAM. Also, I'm on the Free plan in case my requests are being throttled. Would appreciate help 🙏
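Roughly, the action looks something like this (a simplified sketch, not the exact code; the action name, model, and env var names are placeholders):

```ts
// convex/pdf.ts — sketch of the flow: fetch from R2, pre-analyze with pdf-lib, send to Gemini.
"use node";

import { action } from "./_generated/server";
import { v } from "convex/values";
import { PDFDocument } from "pdf-lib";
import { GoogleGenerativeAI } from "@google/generative-ai";

export const analyzePdf = action({
  args: { r2Url: v.string() },
  handler: async (_ctx, { r2Url }) => {
    // 1. Fetch the raw bytes from the R2 URL.
    const res = await fetch(r2Url);
    const bytes = new Uint8Array(await res.arrayBuffer());

    // 2. Pre-analysis with pdf-lib (e.g. page count).
    const pdfDoc = await PDFDocument.load(bytes);
    const totalPages = pdfDoc.getPageCount();

    // 3. Pass the PDF to Gemini as inline base64 data.
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
    const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
    const result = await model.generateContent([
      {
        inlineData: {
          data: Buffer.from(bytes).toString("base64"),
          mimeType: "application/pdf",
        },
      },
      `Summarize this ${totalPages}-page document.`,
    ]);

    return { totalPages, summary: result.response.text() };
  },
});
```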
8 Replies
Convex Bot · 2mo ago
Thanks for posting in <#1088161997662724167>. Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets. - Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.) - Use search.convex.dev to search Docs, Stack, and Discord all at once. - Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI. - Avoid tagging staff unless specifically instructed. Thank you!
naveedehmad (OP) · 2mo ago
I'm fine with upgrading as long as that's typical behavior, but I'd like to know first whether that's actually the case.
erquhart · 2mo ago
RAM is the same for Free and Pro: https://docs.convex.dev/production/state/limits#functions
It does sound like whatever processing is being done needs more than what's available in an action, so outsourcing to a worker makes sense.
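If you go that route, the worker side could look roughly like this (a rough sketch, assuming an R2 bucket binding named PDF_BUCKET and a `?key=` query param; not your actual setup):

```ts
// worker.ts — illustrative Cloudflare Worker that does the pdf-lib pre-analysis
// next to R2 instead of inside a Convex action. Types come from @cloudflare/workers-types.
import { PDFDocument } from "pdf-lib";

export interface Env {
  PDF_BUCKET: R2Bucket; // assumed bucket binding name
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).searchParams.get("key");
    if (!key) return new Response("missing ?key=", { status: 400 });

    // Read the PDF straight from R2 via the bucket binding (no public URL needed).
    const object = await env.PDF_BUCKET.get(key);
    if (!object) return new Response("not found", { status: 404 });

    // Do the memory-heavy pdf-lib work here.
    const pdfDoc = await PDFDocument.load(await object.arrayBuffer());
    return Response.json({ key, totalPages: pdfDoc.getPageCount() });
  },
};
```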
naveedehmad (OP) · 2mo ago
I have to check what I'm doing; I can't just be eating up 512MB on an 11MB PDF. Too bad there are no logs to actually tell what's going on. "Your request couldn't be completed" doesn't really help. Before I bite that bullet I'll run more checks, because I'm pretty sure my requests are being throttled. The same code was working without issues until yesterday; today I kept getting the same error the entire day. It worked a couple of times and that's all. Convex is becoming a black box at this point.

Update: I did some benchmarking. I was at 166MB when the process got killed, i.e. when I got hit with "Your request couldn't be completed. Try again later." I'll consider offloading this to Cloudflare and will mark this as resolved.
erquhart · 2mo ago
The "request couldn't be completed" message is super unhelpful, agree So you are able to get some logs out before that message, right?
naveedehmad (OP) · 2mo ago
@erquhart Yeah, so the only logs I could get were the memory footprints I was logging until I got this error. See here:
:mag: MEMORY [INIT]: {
  "step": "INIT",
  "timestamp": "2025-06-25T17:09:21.273Z",
  "memory": {
    "rss": "72.68 MB",
    "heapUsed": "17.52 MB",
    "heapTotal": "23.03 MB",
    "external": "1.69 MB",
    "arrayBuffers": "0.48 MB"
  },
  "memoryRaw": {
    "rssBytes": 76210176,
    "heapUsedBytes": 18375792,
    "heapTotalBytes": 24145920,
    "externalBytes": 1776193,
    "arrayBuffersBytes": 501914
  }
}
:mag: MEMORY [FILE_URL_FETCHED]: {
  "step": "FILE_URL_FETCHED",
  "timestamp": "2025-06-25T17:09:21.813Z",
  "memory": {
    "rss": "92.41 MB",
    "heapUsed": "18.99 MB",
    "heapTotal": "34.28 MB",
    "external": "3.3 MB",
    "arrayBuffers": "0.5 MB"
  },
  "memoryRaw": {
    "rssBytes": 96903168,
    "heapUsedBytes": 19912864,
    "heapTotalBytes": 35942400,
    "externalBytes": 3462969,
    "arrayBuffersBytes": 526161
  }
}
:mag: MEMORY [PDF_LOADED]: {
  "step": "PDF_LOADED",
  "timestamp": "2025-06-25T17:09:26.553Z",
  "memory": {
    "rss": "166.72 MB",
    "heapUsed": "33.55 MB",
    "heapTotal": "62.89 MB",
    "external": "26.87 MB",
    "arrayBuffers": "24.08 MB"
  },
  "memoryRaw": {
    "rssBytes": 174821376,
    "heapUsedBytes": 35182968,
    "heapTotalBytes": 65949696,
    "externalBytes": 28178337,
    "arrayBuffersBytes": 25249622
  },
  "totalPages": 40,
  "documentInfo": "PDF document loaded into memory"
}
:page_facing_up: PDF Analysis: 40 total pages
Your request couldn't be completed. Try again later.
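For reference, these checkpoints are roughly a thin wrapper around Node's process.memoryUsage(), something like this (simplified sketch; the helper name is made up):

```ts
// Rough reconstruction of the checkpoint helper: it dumps process.memoryUsage()
// in MB and raw bytes at each step, plus any extra fields for that step.
const toMB = (bytes: number) => `${(bytes / 1024 / 1024).toFixed(2)} MB`;

function logMemory(step: string, extra: Record<string, unknown> = {}) {
  const m = process.memoryUsage();
  const payload = {
    step,
    timestamp: new Date().toISOString(),
    memory: {
      rss: toMB(m.rss),
      heapUsed: toMB(m.heapUsed),
      heapTotal: toMB(m.heapTotal),
      external: toMB(m.external),
      arrayBuffers: toMB(m.arrayBuffers),
    },
    memoryRaw: {
      rssBytes: m.rss,
      heapUsedBytes: m.heapUsed,
      heapTotalBytes: m.heapTotal,
      externalBytes: m.external,
      arrayBuffersBytes: m.arrayBuffers,
    },
    ...extra,
  };
  console.log(`🔍 MEMORY [${step}]:`, JSON.stringify(payload, null, 2));
}

// e.g. logMemory("PDF_LOADED", { totalPages, documentInfo: "PDF document loaded into memory" });
```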
erquhart · 2mo ago
Gotcha. I wonder how much of the RAM is actually available vs the total. Also wondering if 166MB was just the last RAM usage it could report; maybe it started using more in whatever step comes after PDF_LOADED and ran out of memory before it could report.
naveedehmad (OP) · 2mo ago
The step afterwards is to call Gemini with the base64-encoded PDF content (I know, it's crazy). I'll switch to the Gemini File API to upload the PDF directly, but I still feel something's odd. Also, I just got hit with a 502.
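For the File API route, I'm thinking of something roughly like this (sketch only; assumes @google/generative-ai, a temp file path, and a placeholder model name):

```ts
// Switch from inlining ~10MB of base64 to uploading via the Gemini File API
// and referencing the file by URI in the generateContent call.
import { writeFile } from "node:fs/promises";
import { GoogleGenerativeAI } from "@google/generative-ai";
import { GoogleAIFileManager } from "@google/generative-ai/server";

async function analyzeViaFileApi(pdfBytes: Uint8Array) {
  const apiKey = process.env.GEMINI_API_KEY!;
  const fileManager = new GoogleAIFileManager(apiKey);

  // The Node SDK uploads from a path, so write the bytes to a temp file first.
  const tmpPath = "/tmp/upload.pdf";
  await writeFile(tmpPath, pdfBytes);
  const upload = await fileManager.uploadFile(tmpPath, {
    mimeType: "application/pdf",
    displayName: "upload.pdf",
  });

  // Reference the uploaded file instead of passing its bytes inline.
  const model = new GoogleGenerativeAI(apiKey).getGenerativeModel({
    model: "gemini-1.5-flash",
  });
  const result = await model.generateContent([
    { fileData: { fileUri: upload.file.uri, mimeType: upload.file.mimeType } },
    "Analyze this PDF.",
  ]);
  return result.response.text();
}
```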
