Spamming function calls when streaming OpenAI responses

Hi, I was looking at the convex-ai-chat repo and found this
https://github.com/get-convex/convex-ai-chat/blob/main/convex/serve.ts#L70
      const stream = await openai.chat.completions.create({
        model: OPENAI_MODEL,
        stream: true,
        messages: [
          {
            role: "system",
            content:
              "Answer the user question based on the provided documents " +
              "or report that the question cannot be answered based on " +
              "these documents. Keep the answer informative but brief, " +
              "do not enumerate all possibilities.",
          },
          ...(relevantDocuments.map(({ text }) => ({
            role: "system",
            content: "Relevant document:\n\n" + text,
          })) as ChatCompletionMessageParam[]),
          ...(messages.map(({ isViewer, text }) => ({
            role: isViewer ? "user" : "assistant",
            content: text,
          })) as ChatCompletionMessageParam[]),
        ],
      });
      let text = "";
      for await (const { choices } of stream) {
        const replyDelta = choices[0].delta.content;
        if (typeof replyDelta === "string" && replyDelta.length > 0) {
          text += replyDelta;
          await ctx.runMutation(internal.serve.updateBotMessage, {
            messageId,
            text,
          });
        }
      }


Isn't a mutation triggered every time a new token gets streamed? That seems like it could be a lot of function calls for a single response.
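If that is the case, I guess one way to cut down the call volume would be to throttle the mutation so it fires at most once per interval, with a final flush at the end of the stream. A rough sketch of what I mean (makeThrottledWriter and the injected clock are my own hypothetical helpers, not from the repo):

```typescript
// Hypothetical throttled writer: calls `flush` with the latest accumulated
// text at most once per `intervalMs`. A clock function is injected so the
// behavior is easy to test; in real code you'd default it to Date.now.
function makeThrottledWriter(
  flush: (text: string) => Promise<void>,
  intervalMs: number,
  now: () => number = Date.now
) {
  let lastFlush = -Infinity; // so the very first write flushes immediately
  let pending = "";
  return {
    // Called once per streamed token with the full text so far.
    async write(text: string) {
      pending = text;
      if (now() - lastFlush >= intervalMs) {
        lastFlush = now();
        await flush(pending);
      }
    },
    // Called after the stream ends, so the final tokens always land.
    async finish() {
      await flush(pending);
    },
  };
}
```

Inside the loop you would then call writer.write(text) instead of running the mutation directly, and writer.finish() after the for-await loop. Since each flush writes the full accumulated text (not a delta), dropping intermediate flushes doesn't lose anything.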