Joe · 2y ago

adjust max timeout on scheduled functions?

I'm pinging an API to get some info, but depending on the subject of the request I send, the amount of information can vary a lot. Most of the time the scheduled function's default max time is fine, but occasionally it is not. Is there a way to adjust the max time before timeout, or should I instead retry on timeout?
6 Replies
jamwt · 2y ago
just to clarify, this is actions hitting the 120s timeout?
Joe (OP) · 2y ago
I think so. Here's what I'm seeing in my log:
Joe (OP) · 2y ago
[image attachment: log screenshot]
Joe (OP) · 2y ago
I think I saw data still being pulled when the timeout occurred, too. I'm querying GitHub for info, so repo size varies a lot, which can change the runtime. It was easy enough to just run it again, but I wasn't sure what would make the most sense here.
jamwt · 2y ago
Makes sense, just wanted to double-check. So, first answer: we're working on supporting longer-running actions. It's not unreasonable to want to run things that take longer than 120s without having to break the work into chunks artificially; we'd like the natural workflow to Just Work in Convex as much as possible.

But before that happens, is it possible to break your work into batches of pulls or something? You can even have two phases of work: (1) generate jobs (per repo?) and (2) schedule actions for each job, or each batch of N jobs. Maybe this is what you're doing, and the issue is that a single repo sometimes takes longer than 120s?

The workflowy answer here would be to (1) generate jobs and record them in a job table with a status like "not done yet", written via a mutation, and then (2) have some actions that pull jobs from this queue, so everything eventually gets done even on transient failures. When a job is complete, its status is changed.

For (2), at some point in the future Convex will have "predicate-dependent actions", almost like a subscription (useQuery) on the client. For now, you can fake this via cron jobs.

Convex is working toward having really great tools for workflows like this, but we're unfortunately not completely done with the primitives, libraries, and docs around how to model these things with excellence and ease.
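The job-table pattern described above can be sketched roughly as follows. In Convex the jobs would live in a real table written by a mutation and drained by cron-scheduled actions; here a plain in-memory array stands in so the control flow is easy to see. All names (`generateJobs`, `processPending`, the `Job` shape) are illustrative, not Convex APIs.

```typescript
type JobStatus = "pending" | "done";

interface Job {
  repo: string;      // one unit of work, e.g. one repo to query
  status: JobStatus;
  attempts: number;
}

// Phase 1: generate jobs and record them as "pending".
function generateJobs(repos: string[]): Job[] {
  return repos.map((repo) => ({ repo, status: "pending", attempts: 0 }));
}

// Phase 2: a cron-driven worker pulls pending jobs. A job that throws
// stays "pending", so the next cron tick retries it; a job is marked
// "done" only after its work actually succeeds.
function processPending(jobs: Job[], doWork: (repo: string) => void): void {
  for (const job of jobs) {
    if (job.status !== "pending") continue;
    job.attempts += 1;
    try {
      doWork(job.repo);     // e.g. fetch one repo's data within the time limit
      job.status = "done";
    } catch {
      // transient failure: leave status "pending" for the next run
    }
  }
}
```

Because status only flips to "done" after success, repeated cron runs converge: transiently failing jobs are simply picked up again until everything completes.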
Joe (OP) · 2y ago
That makes sense. For context, I try to query something like one repo for one type of repo information per call; it's just that some repos can return a lot of information. I have to track some metadata anyway about where I am in the request, so I think I can probably use that to help the function figure out where to restart if it times out. I just haven't run into timeouts much before, so I wasn't sure if there was an easy workaround / normal practice, but I should be okay if the limit can't be adjusted. I can use a cron job as suggested.
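The checkpoint idea described here can be sketched like this: persist a cursor alongside the other request metadata so a re-run resumes where the timed-out run stopped instead of starting over. The names and the page-fetch callback are hypothetical stand-ins, not a real GitHub client or Convex API.

```typescript
interface Checkpoint {
  repo: string;
  nextPage: number;   // first page not yet fetched
  items: string[];    // accumulated results
}

// Fetch pages until done or until the per-run budget is exhausted,
// advancing the checkpoint after every successful page so progress
// is never lost. Returns true when all pages have been fetched.
function fetchWithCheckpoint(
  cp: Checkpoint,
  fetchPage: (repo: string, page: number) => string[] | null, // null = no more pages
  maxPagesPerRun: number
): boolean {
  for (let i = 0; i < maxPagesPerRun; i++) {
    const page = fetchPage(cp.repo, cp.nextPage);
    if (page === null) return true;   // finished
    cp.items.push(...page);
    cp.nextPage += 1;                 // checkpoint advances only after success
  }
  return false;                       // budget spent; schedule another run
}
```

In a scheduled-function setting, a `false` return would mean "reschedule me": each run stays comfortably under the time limit, and the stored `nextPage` carries the work across runs.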
