José Alvarez
Convex Community · 16mo ago
6 replies

Advice on Convex & OpenAI Assistants API: Real-Time Reactivity, Data Redundancy, & Collaborative UX

I came across Ian Macartney’s post, "GPT Streaming With Persistent Reactivity," while exploring patterns for using Convex with OpenAI. Since the post is over a year old, I wanted to ask if the team has any new insights, particularly around collaborative user experiences powered by Convex and the OpenAI Assistants API.

While thinking this through, I grew uncomfortable with the idea of maintaining two sources of truth. The Assistants API already stores a lot of information about threads, messages, and tool calls, so it feels redundant to duplicate the same data in Convex when I can fetch it through the API.

However, as Ian mentioned, browser-based HTTP streaming alone is unreliable for real-time reactivity, especially in a collaborative multi-user environment. A real-time database solution like Convex seems essential to achieve the required synchronization.
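To make the "persistent reactivity" idea concrete, here is a minimal sketch, independent of Convex's actual API (the names `Delta` and `applyDeltas` are mine, not from the post): each streamed chunk is appended to a stored message body keyed by message ID, so any client that subscribes later, or reconnects, can rebuild the full text from the database instead of depending on the original HTTP stream.

```typescript
// Hypothetical sketch: accumulate streamed chunks into a persistent store.
// In a real Convex app the Map would be a table and applyDeltas a mutation;
// here a plain Map stands in so the idea is self-contained.
type Delta = { messageId: string; chunk: string };

function applyDeltas(
  store: Map<string, string>,
  deltas: Delta[]
): Map<string, string> {
  for (const { messageId, chunk } of deltas) {
    // Append each chunk to whatever body has been stored so far.
    store.set(messageId, (store.get(messageId) ?? "") + chunk);
  }
  return store;
}
```

Because every client reads the stored body rather than the live stream, a user who opens the thread mid-generation still sees the partial message, which is exactly the collaborative behavior I'm after.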

With the OpenAI Assistants API, it's also frustrating that the message-processing logic used during streaming (including tool calls, as shown in the Quickstart (https://github.com/openai/openai-assistants-quickstart/blob/06fc2d444a5d41b574082080f4c7b2e48156b84f/app/components/chat.tsx#L191) ) can't be reused in later browser sessions, because messages and tool calls must then be fetched from separate OpenAI API endpoints. During the stream they arrive interleaved; afterward they come back split apart. I managed to merge them by matching timestamps, but it feels wrong to maintain two distinct algorithms for reconstructing the same output.
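The timestamp-merge step I described can be sketched as follows. This is a simplified illustration, not the Quickstart's code: the item shapes are hypothetical stand-ins, though the `created_at` Unix-timestamp field does match what the OpenAI API returns on both message and run-step objects.

```typescript
// Sketch: rebuild one chronological transcript from messages and tool-call
// run steps fetched from separate endpoints, ordered by their shared
// created_at Unix timestamps.
type MessageItem = { type: "message"; created_at: number; text: string };
type ToolCallItem = { type: "tool_call"; created_at: number; name: string };
type TranscriptItem = MessageItem | ToolCallItem;

function mergeTranscript(
  messages: MessageItem[],
  toolCalls: ToolCallItem[]
): TranscriptItem[] {
  // Concatenate both lists, then sort ascending by timestamp so the
  // replayed transcript matches the order seen during streaming.
  return [...messages, ...toolCalls].sort(
    (a, b) => a.created_at - b.created_at
  );
}
```

One caveat with this approach: `created_at` has one-second granularity, so items created within the same second can tie, which is part of why matching on timestamps feels fragile compared to a single persisted event log.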

Therefore, I’m convinced that using Convex as a bridge between my client and OpenAI is the right choice for my use case.

(continues below)