Latency between Convex runtimes
Is the latency of calls from actions (node runtime) to mutations/queries (convex runtime) negligible?
43 Replies
idk if this question even makes sense, my mental model might be off
If you can, prefer Convex actions to Node.js actions. AWS Lambda (which powers Convex Node.js actions) sometimes has cold starts; Convex mutations, queries, and actions don't.
The network latency is negligible, but AWS Lambda cold starts are not.
If you're running into this, we want to hear about it. It's not impossible to speed up AWS Lambda by spending more money (keeping runtimes warm for you at a cost), but there's no button on the dashboard you can press to enable this.
Ah sorry, I didn't answer your question!
> Is the latency of calls from actions (node runtime) to mutations/queries (convex runtime) negligible?

It's pretty small. Calling from Lambda to Convex is cheap, but it's not negligible if you're doing hundreds of these calls. Instead, write a mutation that does all hundred things.
gotcha, I'm calling LLM flows from this action so I think there's no way for me to speed this up right? (i.e. I need the node environment)
good to know, for now one action will not make more than 5 calls to the db, so I think it should be fine to skip the wrapping of these calls in a single mutation
the only alternative I can think of is to make these calls from the vercel server but then I'll probably have to wrap the mutations. What would you recommend?
What I'm trying to do is to have an agent with tools be able to both query and write to the convex db. I thought the best approach would be to have this agent on convex...
Doing this from a Convex Node.js action sounds great!
> good to know, for now one action will not make more than 5 calls to the db, so I think it should be fine to skip the wrapping of these calls in a single mutation

The other thing to think about here is that if you combine these mutations into one, they'll happen in one transaction: they'll either all run or none of them will. If you run them one at a time, there will be a period of time where one has finished but another hasn't, possibly producing inconsistent database state.

Are you hitting an issue? Is there a reason you want an alternative? Making these calls from a Vercel server is roughly equivalent to making them in a Convex Node.js action; Vercel also runs on Lambda. But the latency could be quite different, depending on where these lambdas are running. If you use Convex ones, we know they'll be in the same AWS region.
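To make the "one transaction" point concrete, here's a rough sketch of folding several writes into a single mutation. The `mutation` wrapper and `ctx.db` shape below are stand-ins I've defined so the snippet runs outside Convex; in a real project you'd import `mutation` from `./_generated/server`, and the `saveAgentSteps` name and `steps` table are illustrative.

```typescript
// Stand-in types so this sketch runs outside Convex (an assumption, not the real API).
type Ctx = {
  db: { insert: (table: string, doc: Record<string, unknown>) => Promise<string> };
};
// Stand-in for Convex's `mutation` from "./_generated/server".
const mutation = <A, R>(handler: (ctx: Ctx, args: A) => Promise<R>) => handler;

export const saveAgentSteps = mutation(
  async (ctx: Ctx, args: { steps: string[] }) => {
    // Inside one mutation, all writes commit atomically: either every
    // insert happens or none do. Five separate ctx.runMutation calls
    // from an action would each commit independently.
    const ids: string[] = [];
    for (const text of args.steps) {
      ids.push(await ctx.db.insert("steps", { text }));
    }
    return ids;
  }
);
```

An action would then call this once via `ctx.runMutation` instead of making five round trips.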
okay good to know. There's just more latency than I was expecting but it could be due to an issue in my code so I'll follow up
It'd be useful if you could find where the latency is by console.logging `Date.now()`, which will work as expected in a Node.js action.
Is this well set up?
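To make that suggestion concrete, here's a small sketch of a timing wrapper; `timed` and its label are mine, not a Convex API. In a Node.js action you'd wrap each call, e.g. `() => ctx.runQuery(...)`.

```typescript
// Minimal timing helper: brackets an awaited call with Date.now()
// and logs how long it took. Works the same inside a Node.js action.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  const result = await fn();
  console.log(`${label} took ${Date.now() - start}ms`);
  return result;
}
```

Wrapping each db call like `await timed("getUser", () => ctx.runQuery(api.users.get, { id }))` makes the slow step show up in the logs.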
the als is to expose the Convex context to the tools
asking cause we're seeing elevated latency with deploying convex functions as noted here <#1271428583377600545>
Deploy time is all about bundle size, the code above looks reasonable. Are you using external packages for your big Node.js dependencies?
Also just to clarify, at the beginning of this thread you were talking about execution latency, and now you're talking deploy latency?
What kinds of times are you seeing?
yep, this is a different type of latency. After integrating with the vercel ai sdk and genkit (from firebase) we've started to notice functions taking significantly longer and how that degrades the DX. We're seeing times of 30s-1m+ but not for every single function update. These are some of the packages that the AI actions are using:
not sure exactly how to measure which of these is causing issues
Are you using external packages for your big Node.js dependencies?
not sure I get what you mean. The packages above are from our package.json so they are external packages
@David Alonso see this link https://docs.convex.dev/functions/bundling#external-packages
ah that's probably it, we weren't using external packages then. I'm a bit confused though: in what cases wouldn't I want to mark all packages as external?
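For reference, the setting that docs page describes lives in convex.json; a minimal example might look like this (the package names here are just illustrative):

```json
{
  "node": {
    "externalPackages": ["@ai-sdk/openai", "langchain"]
  }
}
```

Using `["*"]` instead of a list marks every dependency as external, so it's installed on the server rather than bundled.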
In the example you show
but I thought all external package imports had to be wrapped in require or import or is that not the case?
I'd try marking them all as external. It's possible some won't work, and if they don't you'd have to drill down to find which ones are a problem.
I don't follow the second question.
import SomeModule from "some-module";
looks like an import to me?
ah sorry misunderstood, I'll give this a try and report back!
Okay I guess I'll have to pick..
is there an easy way to see the packages required by the node js environment?
i guess everything imported after "use node" across all files?
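Roughly, yes. One quick way (my suggestion, not an official tool) is that the files themselves declare it, so you can list every file carrying the directive:

```shell
# List every file under convex/ that opts into the Node.js runtime.
# (|| true keeps this safe to run even where convex/ doesn't exist)
grep -rl '"use node"' convex/ 2>/dev/null || true
```

Anything imported (directly or transitively) from those files ends up in the Node.js bundle.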
With this:
I still hit limits...
had to remove quite a lot of packages to get it to work, and it's tricky cause I don't have an easy way to tell how heavy they are
any ideas on how to improve the DX here?
also when it succeeded I had no idea what the actual zipped size was and whether there was room for more packages
we brought function deployment down to 30s with this, but it's still really hindering our development experience. For reference, this is the subset of packages I was able to add to the external list:
Doing npx convex dev with --typecheck=disable does not improve things btw
our project is not that big so I'm surprised we're already hitting these limits unless we're doing something horribly wrong somewhere

my source map explorer.. This is after removing my convex.json and running:
Not sure what's going on so I'd appreciate your help!
but the total bundle size without external deps (no convex.json) is 17MB, yet when I do:
I get the error I shared above:
which seems off to me
Big picture, one thing that's going to help here is local development. This is something we're working on now; the idea is you push this stuff locally for faster iteration.
We do want these deploys to be as fast as possible though, because there's nothing like having code in a prod-like environment for testing.
okay so you'd recommend we run convex locally? And you don't think there's anything fishy going on in our project? Wondering if this could be fixed temporarily by increasing the zipped size limit, but maybe that's hard for you guys
afaik running convex locally comes with no dashboard which would hurt our speed in other ways, so I hope we can find a better solution...🥹
oh that could be something we're working on 🤫
haha happy to hear that, but not sure how far into the future that is so what would you recommend as a temporary solution?
my impression was in the next month or two.. i hope.. Luckily, we aren't focused much on LLMs right now. I was hoping it would be possible to use one LLM SDK so as not to bloat the bundle/Convex actions, since all these inputs and outputs are essentially the same format. But your imports are not giving me any faith that will be in my future. 😅 Maybe some proxy to provide one standard data interface?
@ampp are you trying to run the LLMs in Convex? LLM SDKs like OpenAI's run great on Convex; it's just when you try to do inference with TensorFlow in an AWS Lambda that you run into issues, right?
I'm not having issues.. yet, i don't want to make my bundling time too long right now. Our long term goal is to allow all our users to "bring your own LLM" and input any api key, ideally we would support every LLM. And i need to decide what the most efficient way to do that is.
I'm still not sure what the best short-term course of action is, or how long we should wait to be able to have a good DX when running convex locally, would appreciate some more info @ballingt 🙏
we're spending 10+hrs a day on convex atm building https://fireview.dev so these time savings are a big deal for us
Fireview - The Firestore Console You Deserve
Fireview helps your team manage and visualize your Firestore data with ease.
One thing I'd try is to stop using Node.js actions for everything you can; the "edge runtime" versions of these libraries are usually lighter. Was there a problem using @ai-sdk libraries with the Convex actions runtime?
We have an action item to look into these zipfile sizes: why, when you use `"externalPackages": ["*"]`, the zipfile gets so much larger. Likely it's because external packages are not bundled, so they're much larger.

@David Alonso dm me or create a support ticket if you all want to Zoom about this some time with the team and brainstorm in a faster-iteration forum
we definitely don't want you all to have a painful dev experience. we're obviously kind of trying to do exactly the opposite 🙂
we can also share more details about the timeline for local dev. the internal alpha just landed for us, so our team is starting to play with it
thanks Jamie, just texted you, hopefully we can chat soon!
great to hear!
shoot I wish I'd understood this from the get go. When glancing over the docs, specifically this code snippet:
My understanding was that I needed to include "use node" whenever I was using an NPM package, but now I realize it's only for unsupported NPM packages. If I'd known this I'd just never add "use node" unless Convex complains
Maybe this fixes all our issues for now, since I actually haven't tried if these packages run in Convex' environment
just to have a sense, what rough percent of npm packages are unsupported?
this warning also made me think that most actions had to be run in nodejs envs:
Ah we should remove that! It used to be that if you make a directory called
convex/actions
it would automatically be a "use node" action.
well for compatibility we can't remove it, but I understand why it would be confusing
should i ignore it for now or does it actually prevent code deployment?
or just rename the folder to something else
sounds like it's just a warning?
What percent, let's see. It's anything with a native extension (uses node-gyp or downloads a binary)
seems like a blocker... npx convex dev -v:
Ah yeah, sounds like you'll have to move it. I didn't see "require"
We should build a database of libraries known not to work, but if the npm name of the library has "node" in it that's often a bad sign.
If it works in Vercel edge runtime it probably works in the Convex runtime. It's a different implementation but we've implemented most of the same APIs.
Most libraries should work in the Convex runtime and if it doesn't we'd love to hear about it.
I get a ton of these issues when I comment out "use node", and it's usually dependencies of packages we're using, which makes it hard to find the files that need the directive and the ones that don't...
like here I'm not even sure which of our packages is using agent_base
yeah it requires closer attention to dependencies as you add them, definitely takes some work to go through them all
this is probably for proxy-agent, which is only for Node.js
The AI files typically have these imports, where opentelemetry is used for tracing in Langfuse which i think requires unsupported packages
as you said, has node in the name
Let's start a doc, do all of these require node?
There are three (four?) categories:
1) libraries known to work with the Convex JS runtime
2) libraries known not to work in the Convex JS runtime
3) libraries that don't work in "use node" files either (usually this is anything that doesn't work on AWS Lambda)
4? unknown
"ai" should be fine, we've tested most of that
convex-helpers/server/zod is fine
"@ai-sdk/openai" I hope is fine, curious if you see issues with that
same with the rest of @ai-sdk, since it's known to work in the Vercel edge runtime

Issue is with the telemetry files, which I need for tracing, and the genkit packages
I can get rid of genkit if we use the vercel ai sdk, but if I want to do tracing I’d still need use node right?
still running into this issue btw!
we forgot why we named our actions folder "action" and then we ran into it again, no biggie though but it could cause confusion

what issue, this warning?
yep, but it's an error for me
Ah sorry, yeah, an error. This is still unfortunate; it may be long enough ago that we can change this behavior! I'll file it. It's always risky because someone on an older project could be expecting everything in the actions folder to automatically be a Node action, and we don't want it to be confusing when they upgrade. But it's not hard to fix, so it's probably safe.