David Alonso · 5mo ago

Running Langfuse for AI tracing inside convex action

The issue I'm having is that I want to run the Vercel AI SDK + Langfuse inside a Convex action. When I visited these docs: https://langfuse.com/docs/integrations/vercel-ai-sdk I leaned towards the Node.js guide, but that comes with some performance losses on the Convex side (by having to use the "use node" directive), so ideally I'd use the Next.js guide, but I'm not sure how to get that to work with Convex. The founder of Langfuse said:
We have tested the Vercel AI SDK integration on Vercel (which defaults to edge functions when using the app router in Next.js afaik), which should be similar to the Cloudflare runtime
cc @jamwt as this relates to a convo we had
Vercel AI SDK - Observability & Analytics - Langfuse
Open source observability for Vercel AI SDK using its native OpenTelemetry support.
11 Replies
David Alonso (OP) · 5mo ago
I feel like understanding this will help me in the future with other packages as well...
ballingt · 5mo ago
Can you say more, what are you stuck on? We've tested parts of vercel-ai-sdk on the Convex runtime, so you shouldn't have to use "use node". Ah I see, maybe @vercel/otel doesn't work in Convex? What's your goal, to get OpenTelemetry working?
David Alonso (OP) · 5mo ago
My goal is to be able to use an LLM SDK like Vercel's plus tracing inside Convex's JS runtime. The founder of Langfuse is asking:
Is there general guidance on how OTel works with convex? Nothing about this should be Langfuse specific
ballingt · 5mo ago
Reading about OTel support in Vercel, it sounds like it's custom: https://vercel.com/docs/observability/otel-overview
Quickstart for using the Vercel OpenTelemetry Collector
Learn how to get started with OTEL on Vercel to send traces from your Serverless or Edge Functions to application performance monitoring (APM) vendors.
ballingt · 5mo ago
@David Alonso let's do a call or something to figure out how we can support what you want here. It sounds like you'd like deep OpenTelemetry support in Convex, which is something we could do, but it's an investment; we'd want to understand what's causing you to want OpenTelemetry in particular. Today this kind of observability comes from Convex log streams. If you want tracing support of some kind, OpenTelemetry or not, then that's a different ask that we can figure out.
But if you need this now, I'd use a Convex Node.js action; it supports OpenTelemetry via a library.
You mentioned the additional latency of using Node.js; let's diagram this out too to see if the data you need could be batched or similar, or hear what latencies you're seeing.
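For reference, a minimal sketch of the Node.js fallback mentioned here, assuming the `langfuse` SDK inside a Convex action file; the file path, argument names, and the `callYourModel` helper are illustrative, not from the thread:

```typescript
// convex/ai.ts -- the "use node" directive opts this file into Convex's Node.js runtime
"use node";

import { action } from "./_generated/server";
import { v } from "convex/values";
import { Langfuse } from "langfuse";

export const generateWithTracing = action({
  args: { prompt: v.string() },
  handler: async (_ctx, { prompt }) => {
    const langfuse = new Langfuse(); // reads LANGFUSE_* credentials from env vars

    // Wrap the LLM call in a Langfuse trace (the call itself is a placeholder)
    const trace = langfuse.trace({ name: "generate", input: prompt });
    const result = await callYourModel(prompt); // hypothetical: your actual SDK call goes here
    trace.update({ output: result });

    // Flush before returning, since the serverless environment may suspend afterwards
    await langfuse.flushAsync();
    return result;
  },
});
```

The explicit `flushAsync()` matters in serverless runtimes: Langfuse batches events in the background, and the process may be frozen as soon as the action returns.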
David Alonso (OP) · 5mo ago
Right now we're doing LLM tracing with the proxy approach (changing the baseURL). The upside of doing it with async methods is:
- trace non-LLM calls and frameworks, e.g. retrieved documents in RAG or API calls
- no uptime impact
- no latency impact
For these async methods it seems like I'd need OTel. Achieving this with Node.js actions kind of cancels out some of the latency gains, since the Convex JS env has no cold starts afaik. So far the baseURL approach works fine, but I was just curious if there was a better way given how ubiquitous LLM tracing is / will become.
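The proxy approach described here amounts to pointing the SDK at a tracing gateway instead of the provider directly. A sketch with the Vercel AI SDK and Helicone's OpenAI gateway; the gateway URL and header name follow Helicone's docs, but treat them as assumptions to verify:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// Route OpenAI traffic through Helicone's proxy so every request is logged there
const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  headers: { "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}` },
});

export async function run(prompt: string): Promise<string> {
  const { text } = await generateText({ model: openai("gpt-4o-mini"), prompt });
  return text;
}
```

This is why the proxy route works from the Convex JS runtime with no extra instrumentation: it is just a different `baseURL`, at the cost of putting the gateway on the request path (the uptime/latency coupling mentioned above).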
ballingt · 5mo ago
Could you back up: what's the proxy approach, and what are async methods here? LLM tracing: is this a specific feature of the Vercel AI SDK? Or just in general, you want tracing for LLM stuff?
David Alonso (OP) · 5mo ago
Tracing for LLM stuff. I'm using the Vercel AI SDK right now, but these tracing libraries support most AI SDKs, e.g.:
- https://docs.helicone.ai/getting-started/integration-method/openllmetry (async)
- https://docs.helicone.ai/getting-started/integration-method/vercelai (proxy)
For Langfuse (another tracing service):
- https://langfuse.com/docs/integrations/litellm/example-proxy-js (proxy)
- https://langfuse.com/docs/integrations/vercel-ai-sdk (async)
Both of the above are YC startups btw. I'm currently not heavily blocked by this, but I suspect other devs building AI apps and using AI SDKs from Convex actions will run into the same.
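For contrast with the proxy route, the async Langfuse integration linked above rides on the Vercel AI SDK's OpenTelemetry support: register a trace exporter once, then opt each call in. A sketch for a Node.js environment (which is why, in Convex today, this path implies a "use node" action); the model choice is illustrative:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseExporter } from "langfuse-vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// Register Langfuse as the OTel trace exporter (once per process)
const sdk = new NodeSDK({ traceExporter: new LangfuseExporter() });
sdk.start();

export async function run(prompt: string): Promise<string> {
  const { text } = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
    // Opt this call into the SDK's built-in OpenTelemetry instrumentation
    experimental_telemetry: { isEnabled: true },
  });
  return text;
}
```

Because spans are exported in the background, the LLM request itself never touches the tracing service, which is the "no uptime impact / no latency impact" property mentioned earlier in the thread.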
ballingt · 5mo ago
Thanks for the background. Are you trying to use/support both of these tracing services, or just one?
David Alonso (OP) · 5mo ago
Whatever is easiest; currently that is Helicone's proxy Vercel AI integration. https://youtu.be/FMZRQhWRWkY?si=LXFBlabMFxMczMLr&t=133 This is what we're doing with it btw; Helicone also offers caching, which was triggered in this demo for instance.
L42y · 3w ago
Any chance we could have native OTel support in the Convex runtime?