sharp in a bun monorepo
I'm trying to use sharp for image processing and I see it in some examples so I assume it's possible to use in a node action, but whenever I try to bundle it using the --os and --arch params for linux arm64, my convex functions fail to deploy. I've updated my convex.json to mark 'sharp' as external. My situation is a bit complicated because of using bun and using a monorepo (so everything is in a node_modules at the root).
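(For reference, marking a package as external goes in the `node.externalPackages` field of convex.json; the sharp entry looks roughly like this:)

```json
{
  "node": {
    "externalPackages": ["sharp"]
  }
}
```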
Anyone have any experience with bun/sharp/convex ?
Thanks for posting in <#1088161997662724167>.
Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
If you have specific library requirements, I'd consider implementing some sort of non-Convex external processing, e.g. via work stealing
So, you have a bun process running somewhere that has a Convex client and asks the server for work to do. As soon as there is work (e.g. an image that needs processing), the bun process does it and sends a mutation to your Convex server with the results
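A minimal sketch of that pull-based worker loop. `claimTask` and `reportResult` here are hypothetical stand-ins for the Convex mutations the worker process would call (e.g. through a Convex client); they're injected as an interface so the loop itself stays generic:

```typescript
// Pull-based work stealing: the worker repeatedly claims pending
// tasks from the server, processes them locally, and reports back.

type Task = { id: string; payload: Uint8Array };
type TaskResult = { id: string; output: Uint8Array };

interface WorkSource {
  // Atomically claim the next pending task, or null if the queue is empty.
  // In practice this would be a Convex mutation.
  claimTask(): Promise<Task | null>;
  // Report the processed result back (another Convex mutation).
  reportResult(result: TaskResult): Promise<void>;
}

// Drain the queue once: claim -> process -> report, until no work is left.
// Returns the number of tasks completed.
async function drainOnce(
  source: WorkSource,
  processTask: (t: Task) => Promise<Uint8Array>,
): Promise<number> {
  let done = 0;
  for (;;) {
    const task = await source.claimTask();
    if (task === null) return done;
    const output = await processTask(task);
    await source.reportResult({ id: task.id, output });
    done++;
  }
}
```

In a real worker you'd wrap `drainOnce` in a subscription or polling loop so it wakes up when new tasks appear; the image-processing step (e.g. a sharp crop) goes inside `processTask`.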
Work Stealing: Load-balancing for compute-heavy tasks
Compare push-based load balancing with pull-based work stealing as scalable strategies for distributing resource-intensive workloads, such as running ...
for your specific deployment issue I can't help, since I'm not great with that myself, so "here's another approach you can take" is the best I can offer, sorry
the good thing with this approach is that you can use whatever dependencies you want without issue
the disadvantage is that you'll have to set up your own infra for that :/
Thanks! I'm trying to get a 600ms cropping job down to single digit ms (which sharp can do) so adding the additional network latency would probably negate at least 200ms of the savings.
on my self-hosted infra I incur around 10ms of latency from work stealing, which is nice
if the work-stealer and the Convex backend are in the same datacenter, you should see <20-40ms of latency in my experience. That said, I get your concern
actually, i wanna bench it now
that's interestingly slower than I expected, especially since the work-stealer and Convex backend are on the same machine, and I use the subscriptions, huh
potentially you can save 20-40ms if you skip the mutation that assigns work to yourself, hm.
i wonder if I can be clever with that and improve that in my queue impl
Then work stealing would consistently add only ~10-30ms of overhead (15ms on average or so), which is better than what I was doing before
tempting
Ah yes, on your own machine, but we're on Convex Cloud