punn
punn2y ago

Timeout after 30 seconds on mutation

{"code":"RequestTimeout","message":"InternalServerError: Your request timed out."}
{"code":"RequestTimeout","message":"InternalServerError: Your request timed out."}
Getting this timeout well before the expected limit (1 minute?). Is there a way to extend the timeout? Also, this was working for other scheduled actions that ran much longer.
15 Replies
punn
punnOP2y ago
deployment: knowing-emu-505
ian
ian2y ago
Is this from a query or mutation by chance?
punn
punnOP2y ago
A mutation. For this specific error, we were getting timeouts on batches of 8 relatively large objects. It started to work when we dropped the batch size to 5, but that seems very low for a timeout. We were also transiently getting 404s mixed in with the timeouts. Are these 404s due to overloaded servers? When I get them I also get disconnected from the log stream.
ian
ian2y ago
That makes sense then. Mutations and queries are transactions, so they need shorter timeout windows to minimize the window for write conflicts. I'd suggest doing the heavy work in an action, then writing the result via a mutation. Sorry this is inconvenient. Since the code currently runs in a mutation, the new action doesn't need to be a "use node" action, so everything can live in the same file: the action can call the mutation with runMutation, and the mutation can kick off a background action with scheduler.runAfter(0, internal.myModule.myAction, args).
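For example, a minimal sketch of that pattern (module, table, and function names here are placeholders, and doExpensiveWork stands in for whatever the heavy work actually is):
```ts
// convex/myModule.ts (placeholder module name)
import { internalMutation, internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

// Placeholder for the expensive computation that was previously timing out.
async function doExpensiveWork(input: string): Promise<string> {
  return input.toUpperCase();
}

// The mutation stays short: it just schedules the heavy work to run in the background.
export const kickOff = internalMutation({
  args: { docId: v.id("documents"), input: v.string() },
  handler: async (ctx, args) => {
    await ctx.scheduler.runAfter(0, internal.myModule.processDoc, args);
  },
});

// The action does the heavy work outside any transaction, then writes the result
// back via a short mutation.
export const processDoc = internalAction({
  args: { docId: v.id("documents"), input: v.string() },
  handler: async (ctx, args) => {
    const result = await doExpensiveWork(args.input);
    await ctx.runMutation(internal.myModule.saveResult, {
      docId: args.docId,
      result,
    });
  },
});

// Only this small write happens inside a transaction.
export const saveResult = internalMutation({
  args: { docId: v.id("documents"), result: v.string() },
  handler: async (ctx, args) => {
    await ctx.db.patch(args.docId, { result: args.result });
  },
});
```
The mutation returns quickly because the expensive part runs in the scheduled action; only the short saveResult transaction touches the database.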
punn
punnOP2y ago
Got it, thank you for the clarification. A 404 "page not found" indicates the server is down, correct? We're getting 404s a bit more than normal on functions that never threw 404s. Is there a server load issue at the moment?
ian
ian2y ago
Yeah that's correct - 404 seems like the wrong status code in this case. We're currently looking into the errors on your instance and will get back to you.
punn
punnOP2y ago
thank you
Sam J
Sam J2y ago
A couple of clarifying questions: Are you seeing the 404s on the dashboard logs page or somewhere else? Are the 404s also coming from the mutation you mentioned was timing out? Also, is knowing-emu-505 your production or dev deployment? Can you provide your other deployment name?
punn
punnOP2y ago
The 404s are in the dashboard logs, but they only appear after I reconnect to the log stream (I first see the error in Slack, where I've set up logging). knowing-emu-505 is the production deployment. I can provide timestamps for the errors if those are helpful.
Sam J
Sam J2y ago
Sorry for the delay, I'm still getting used to Discord. Timestamps would be helpful. Do you recall the type of function you saw the 404s in? Were they actions?
punn
punnOP2y ago
Some were actions, some were mutations.
Times in PST:
7/12/2023, 5:44:42 PM
7/13/2023, 4:17:35 AM
Sam J
Sam J2y ago
Thanks. I can see that actions were executed at both timestamps and that they ran under the time limits. One possibility is that the 404s were returned from an API called by your action; that's not always clear from our dashboard logs. There are a couple of errors at approximately those times that I'll look into, though they don't quite match the timestamps exactly.
punn
punnOP2y ago
On other occasions of 404s, I was disconnected from the dashboard logs, so I assumed it was due to the servers being overloaded. We generally see propagated errors for APIs called by actions, and we don't expect any 404s from them. We're getting a lot of 404s right now, could someone have a look? It's the same issue as before; after setting a low limit for the mutation batch size, it's fixed.
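(For illustration, a minimal sketch of the batch-size workaround described above, with placeholder module, table, and function names: the action writes objects in small batches so each mutation stays a short transaction.)
```ts
// convex/myModule.ts (placeholder module name)
import { internalMutation, internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

// Small batches keep each mutation/transaction short enough to avoid timeouts.
const BATCH_SIZE = 5;

export const importObjects = internalAction({
  args: { objects: v.array(v.any()) },
  handler: async (ctx, args) => {
    for (let i = 0; i < args.objects.length; i += BATCH_SIZE) {
      const batch = args.objects.slice(i, i + BATCH_SIZE);
      // Each batch is written in its own short mutation.
      await ctx.runMutation(internal.myModule.insertBatch, { batch });
    }
  },
});

export const insertBatch = internalMutation({
  args: { batch: v.array(v.any()) },
  handler: async (ctx, args) => {
    for (const obj of args.batch) {
      await ctx.db.insert("objects", obj);
    }
  },
});
```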
sujayakar
sujayakar2y ago
Hi @punn! Sorry for the 404 issues -- the problem on our side is that a component is running out of memory and restarting, which triggers that window of unavailability. We'll add more telemetry on our side to find the memory issue, but can you describe what the mutations are doing? That'll help us narrow it down more quickly. Feel free to DM me if you'd prefer.
punn
punnOP2y ago
@sujayakar Sent friend req!
