Hitting 404 errors within actions
@punn has been running into 404 errors within an action that calls out to a mutation. These actions are either HTTP actions triggered by a webhook or scheduled actions (which live within the actions/ folder).
@punn, let me know if I got that right, plus any other details you can share publicly here.
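For context, the pattern in question presumably looks roughly like the sketch below. This is a minimal sketch using current Convex function syntax; the function and table names are hypothetical stand-ins, with only the pmsData.ts filename taken from the thread.

```ts
// convex/pmsData.ts: hedged sketch of an action that calls out to a mutation.
import { internalAction, internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

// Runs from a webhook or on a schedule, then hands the write off to a mutation.
export const handlePmsWebhook = internalAction({
  args: { payload: v.string() },
  handler: async (ctx, args) => {
    // Roughly where the intermittent "404 page not found" errors surfaced.
    await ctx.runMutation(internal.pmsData.saveWebhookEvent, {
      payload: args.payload,
    });
  },
});

export const saveWebhookEvent = internalMutation({
  args: { payload: v.string() },
  handler: async (ctx, args) => {
    await ctx.db.insert("webhookEvents", { payload: args.payload });
  },
});
```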
Yep that's correct. Seems like some sort of server overload when running many mutations within the actions. Doesn't look like they're coming from our external API call
404 huh, that's interesting! Naively I would think that meant "there's no mutation with that name."
Do you see logs in the Convex dashboard when this happens?
This was similar to an issue I had previously, where mutations with large objects caused a 404 and disconnected me from the logs.
But these actions are quite small, so I was surprised to see them 404.
> Doesn't look like they're coming from our external API call
Could you say more? What does this mean, that the action calls are not coming from your API?
Sorry, I mean that there's likely an error within the Convex mutation/action rather than some failure from our external API.
ah got it, thanks
Usually I get disconnected from the log stream whenever the 404s happen, then I refresh to reconnect and see the 404s from where I was disconnected.
edit: we found a different 404 issue, not likely this one. still working on it!
Any luck? If you look at the deployment, it's usually the actions under pmsData.ts with names ending in 'webhook' that are causing 404s.
this instance seems to have gotten a lot slower. Even basic mutations from the client side take much longer than before. Is there an option to upgrade?
@punn are these mutations from the client side made over the WebSocket?
yep
but generally everything is performing slower
queries as well
Getting "Transient error while scheduling a function: 404 page not found" too.
@punn We've got a crack team investigating your instance now. Do keep letting us know what you're seeing.
We'll update you when we make any changes
Thanks @Indy and team. Just getting intermittent server failures/downtime. Gonna try sending everything through the scheduler in the meantime.
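That workaround might look something like the sketch below, assuming the current scheduler API; the function names are hypothetical stand-ins:

```ts
// Hedged sketch: defer the write through the scheduler instead of awaiting
// the mutation inline in the action. Names are hypothetical.
import { internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";

export const handlePmsWebhook = internalAction({
  args: { payload: v.string() },
  handler: async (ctx, args) => {
    // runAfter(0, ...) enqueues the mutation to run as soon as possible,
    // rather than running it inline within this action.
    await ctx.scheduler.runAfter(0, internal.pmsData.saveWebhookEvent, {
      payload: args.payload,
    });
  },
});
```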
I forgot to post an update to this thread. We followed up with punn directly and resolved an issue where schema inference was slowing down for large types, e.g., objects with a large number of keys. We deployed a special-case fix for this instance and will be rolling out a change to speed up schema inference for everyone. That change has been in the works for a while; we just haven't released it yet.
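For readers who hit something similar: declaring an explicit schema keeps large object types pinned down rather than inferred from data. Whether that sidesteps this particular slowdown is an assumption, and the table and fields below are hypothetical:

```ts
// convex/schema.ts: hedged sketch of an explicit schema declaration.
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  webhookEvents: defineTable({
    payload: v.string(),
    // v.any() keeps a field with a large, variable set of keys from
    // expanding into an enormous object type.
    metadata: v.any(),
  }),
});
```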