Starting with HTTP streaming
I have found myself quite confused about Convex environment variables: how they work and how to use them. I have been following along with this Convex tutorial in my app:
https://stack.convex.dev/streaming-http-using-fetch
However, it seems that no matter what I do, I cannot get the environment variable LLM_API_KEY to work as intended. I have added the file llm.ts (provided here: https://gist.github.com/ianmacartney/53dafa51d37469534846105e39d99a25) to my convex functions folder. However, when I use its functions in my app, I get the error "ReferenceError: process is not defined." From the Convex documentation, it seems that only Convex functions (i.e. queries, mutations, and actions) have access to environment variables, so I would need to rewrite the code as an action or create some helper function. But from the code provided in this post, it seems like there must be some easier, more elegant solution that I am just missing. Does anyone know what I am missing here?
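For reference, here is a minimal sketch of the failure mode as I understand it (the function names are just illustrative, not from the gist):

```typescript
// Sketch of the failure mode: `process` exists in Node (and in Convex's
// server runtime) but is not a global in the browser, so client-bundled
// code that reads process.env throws "ReferenceError: process is not defined".
function readKeyUnsafe(): string | undefined {
  // Works server-side, but throws in a browser where `process` is undefined.
  return process.env.LLM_API_KEY;
}

function readKeySafe(): string | undefined {
  // Guarding lets shared code load in both environments, though the key
  // itself should only ever be read (and exist) server-side.
  return typeof process !== "undefined" ? process.env.LLM_API_KEY : undefined;
}
```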
Hey @Alexander, have you gone through the Convex tutorial? https://docs.convex.dev/get-started
That shows how to expose logic to clients via queries, mutations, actions and http actions.
The Stack article you linked might be a bit confusing to newcomers because it doesn't really show how to use the code it mentions from within Convex.
This one might be more what you're looking for: https://stack.convex.dev/ai-chat-with-http-streaming
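The core idea from that article, roughly (a sketch, not the article's exact code): `fetch` exposes the response body as a `ReadableStream`, which you can wrap in an async iterator and consume chunk by chunk:

```typescript
// Sketch: consume a streaming HTTP response body incrementally.
// Any Response body (e.g. from fetch) is a ReadableStream of bytes.
async function* streamText(body: ReadableStream<Uint8Array>) {
  const decoder = new TextDecoder();
  const reader = body.getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true handles multi-byte characters split across chunks.
    yield decoder.decode(value, { stream: true });
  }
}
```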
(Also btw @Alexander you can edit tags in Discord, and don't have to worry about the tags in general on our Discord, we'll take care of them. I deleted your repost)
Thanks for the response! I cloned the demo AI chat you sent but found it to be quite buggy, so I stuck with my previous code, which was working with a hard-coded API key. To secure the API key, I used a query within my component and passed it along to my llm.ts functions as a parameter. It is all working now!
You can try something like:
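For example, a rough sketch of moving the environment-variable read into a Convex-style action (in a real Convex app `action` comes from `./_generated/server`; it is stubbed below so the snippet stands alone, and the argument shape is just illustrative):

```typescript
// Sketch: read LLM_API_KEY inside a server-side action handler instead of
// in client-bundled code, so `process` is always defined where it's used.
// NOTE: this `action` is a minimal stand-in for illustration; a real Convex
// app imports it from "./_generated/server".
type ChatArgs = { prompt: string };
const action = <R>(def: { handler: (ctx: unknown, args: ChatArgs) => Promise<R> }) => def;

export const chat = action({
  handler: async (_ctx, args) => {
    // Environment variables are only available in the server runtime,
    // e.g. after: npx convex env set LLM_API_KEY <key>
    const apiKey = process.env.LLM_API_KEY;
    if (!apiKey) throw new Error("LLM_API_KEY is not set");
    // ...pass apiKey to the llm.ts helpers here instead of hard-coding it...
    return `prompt "${args.prompt}" accepted, key loaded`;
  },
});
```

That way the client calls the action, and the key never leaves the server.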