How do I use Convex with Vercel's AI SDK?
Given that Convex is an entirely separate backend, how do I do this?
Follow these instructions:
https://sdk.vercel.ai/docs/getting-started/nextjs-app-router#create-a-route-handler
But use an HTTP Action instead of a Next.js POST route:
https://docs.convex.dev/functions/http-actions
Let us know how it goes 🙏
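A minimal sketch of what that swap looks like, assuming a placeholder /api/chat path (the handler body is left as a stub):

```ts
// convex/http.ts — an HTTP action standing in for the Next.js route
// handler from the Vercel AI SDK guide. Clients POST to this path on
// your deployment's .convex.site domain.
import { httpRouter } from "convex/server";
import { httpAction } from "./_generated/server";

const http = httpRouter();

http.route({
  path: "/api/chat", // placeholder path
  method: "POST",
  handler: httpAction(async (ctx, request) => {
    const { messages } = await request.json();
    // ...call your model here and return a (streaming) Response...
    return new Response(JSON.stringify({ received: messages.length }), {
      headers: { "Content-Type": "application/json" },
    });
  }),
});

export default http;
```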
@sshader has been looking into spinning this up, and we might be missing some runtime dependencies right now. Stay tuned for her update.
We're working on getting our HTTP actions to better support dropping in vercel/ai, but in the meantime, it's possible to make your own HTTP action handler that does something similar without using the vercel/ai helpers.
https://github.com/sshader/streaming-chat-gpt/blob/main/convex/http.ts has an example of an HTTP action streaming a response from ChatGPT (heavily based off of https://developers.cloudflare.com/workers/examples/openai-sdk-streaming/). This would let you use the client side parts of vercel/ai without the server side helpers.
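Roughly, that pattern looks like the sketch below — an HTTP action that re-streams OpenAI tokens through a TransformStream, loosely following the repo above (the model name and route path are placeholder choices):

```ts
// convex/http.ts — stream an OpenAI completion out of a Convex HTTP
// action. The OpenAI v4 SDK is fetch-based, so it runs in the Convex
// runtime.
import { httpRouter } from "convex/server";
import { httpAction } from "./_generated/server";
import OpenAI from "openai";

const http = httpRouter();

http.route({
  path: "/chat", // placeholder path
  method: "POST",
  handler: httpAction(async (_ctx, request) => {
    const { messages } = await request.json();
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo", // placeholder model
      messages,
      stream: true,
    });

    // Pipe tokens into the Response body as they arrive.
    const { readable, writable } = new TransformStream();
    const writer = writable.getWriter();
    const encoder = new TextEncoder();
    void (async () => {
      for await (const chunk of completion) {
        const token = chunk.choices[0]?.delta?.content ?? "";
        await writer.write(encoder.encode(token));
      }
      await writer.close();
    })();

    return new Response(readable, {
      headers: { "Content-Type": "text/plain; charset=utf-8" },
    });
  }),
});

export default http;
```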
We also have https://stack.convex.dev/gpt-streaming-with-persistent-reactivity describing how to stream AI responses to multiple users through the database. You should be able to use the server-side vercel/ai helpers in a Node action (https://docs.convex.dev/functions/runtimes#actions) and then read your AI responses from the database using Convex queries.
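A sketch of that pattern, using the OpenAI Node SDK directly in a Node action (the messages table and internal.messages.update mutation are hypothetical names):

```ts
// convex/chat.ts — stream in a Node action and persist partial text,
// so clients "stream" by subscribing to a query instead of HTTP.
"use node";
import { v } from "convex/values";
import { action } from "./_generated/server";
import { internal } from "./_generated/api";
import OpenAI from "openai";

export const chat = action({
  args: { messageId: v.id("messages"), prompt: v.string() },
  handler: async (ctx, { messageId, prompt }) => {
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo", // placeholder model
      messages: [{ role: "user", content: prompt }],
      stream: true,
    });

    let body = "";
    for await (const chunk of completion) {
      body += chunk.choices[0]?.delta?.content ?? "";
      // Persist the partial response; subscribed queries update
      // reactively. In practice you may want to batch writes rather
      // than running a mutation per token.
      await ctx.runMutation(internal.messages.update, { messageId, body });
    }
  },
});
```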
Streaming doesn't work, apparently.
Good to know. Preferably, I don't want to maintain much code for streaming AI responses. Any ETA on when you'll release an update to make Convex work with Vercel's AI SDK?
To be clear, streaming itself does work (and the repo I linked above is an example of this). The streamText helper from vercel/ai does not currently work in the Convex runtime, and I'll update when we add support.
If you are going to support streamText, can you consider supporting streamObject too? I'm trying to parse a streaming JSON response.
Can you explain more about the problems I would run into trying to stream a JSON using the Vercel AI SDK?
1) Why is it important to use an HTTP action instead of a Next.js POST route?
2) Are you saying that the Vercel AI SDK does not work inside of a Convex function?
3) If so, is it possible to just avoid using Convex functions until the stream is complete, and then I can save the JSON via a Convex mutation function?
@Matt Luo Yes, you can use the Next.js server to stream to the client and at the end store the result by calling a Convex mutation.
When we add that support, you won't need the Next.js server at all.
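In the meantime, one way to wire up that interim approach, assuming the ai package's OpenAIStream/StreamingTextResponse helpers and a hypothetical api.messages.save mutation:

```ts
// app/api/chat/route.ts — stream from a Next.js route handler and
// persist the finished text with a Convex mutation once it completes.
import OpenAI from "openai";
import { OpenAIStream, StreamingTextResponse } from "ai";
import { ConvexHttpClient } from "convex/browser";
import { api } from "@/convex/_generated/api";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const convex = new ConvexHttpClient(process.env.NEXT_PUBLIC_CONVEX_URL!);

export async function POST(req: Request) {
  const { messages } = await req.json();
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // placeholder model
    messages,
    stream: true,
  });
  const stream = OpenAIStream(response, {
    // Once the stream finishes, store the full completion in Convex.
    onCompletion: async (completion) => {
      await convex.mutation(api.messages.save, { body: completion });
    },
  });
  return new StreamingTextResponse(stream);
}
```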
What advantage do I get from avoiding the Next.js server for streaming?
Any ETA for this support? Will you ping us when it arrives? Thanks!
Simplicity of a single server and lower write latency.
I see. Then it will be important for my use case that the Convex runtime supports the Vercel AI SDK. Zooming out, I need an easy way to handle the streaming, e.g. an async iterator. Within each loop iteration, I need to do reads and writes to the Convex DB.
@Matt Luo If you're going to write to the db on each loop iteration, can you read from the db instead of using streaming? This is the classic "Convex" way, and it is simpler and more consistent (since all clients see the same response).
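The client side of that pattern is just a reactive query — a sketch, assuming a hypothetical api.messages.get query and a body field:

```tsx
// Message.tsx — no HTTP streaming on the client; useQuery re-renders
// as the server-side action appends chunks to the document.
import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";
import { Id } from "../convex/_generated/dataModel";

export function Message({ messageId }: { messageId: Id<"messages"> }) {
  const message = useQuery(api.messages.get, { messageId });
  return <p>{message?.body ?? "…"}</p>;
}
```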
Thanks, Michal. Yes, I agree that using a Convex query is simpler. Would you say that if I'm going to write to the db on each streaming loop iteration anyway, I may as well use Convex functions, since the database bandwidth cost would be the same?