Advice on Convex & OpenAI Assistants API: Real-Time Reactivity, Data Redundancy, & Collaborative UX
While thinking this through, I became uncomfortable with the idea of having two sources of truth. The Assistants API already stores a lot of information about threads, messages, and tool calls, so it feels redundant to duplicate that information in Convex when I can fetch it through the API.
However, as Ian mentioned, browser-based HTTP streaming alone is unreliable for real-time reactivity, especially in a collaborative, multi-user environment. A real-time database like Convex seems essential to achieve the required synchronization.
Another annoyance with the OpenAI Assistants API: the logic for processing messages and tool calls during streaming (as shown in the Quickstart (https://github.com/openai/openai-assistants-quickstart/blob/06fc2d444a5d41b574082080f4c7b2e48156b84f/app/components/chat.tsx#L191)) can't be reused in later browser sessions, because tool calls and messages are then fetched from different API endpoints. During the stream they arrive interleaved; when re-fetched afterward, they're separated. I managed to merge them by matching timestamps, but it feels wrong to maintain two distinct algorithms for the same output data.
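For anyone curious what the timestamp-matching workaround looks like, here is a minimal sketch. The item shapes and the `mergeByTimestamp` name are simplified stand-ins I made up for illustration, not the actual OpenAI SDK types:

```typescript
// Hypothetical sketch: after a session ends, messages and tool calls come
// back from separate endpoints, so we interleave them by creation timestamp.
// These shapes are simplified stand-ins, not the real SDK types.
type MessageItem = { kind: "message"; createdAt: number; text: string };
type ToolCallItem = { kind: "tool_call"; createdAt: number; name: string };
type ThreadItem = MessageItem | ToolCallItem;

// Merge two lists, each already sorted by createdAt, into one chronological feed.
function mergeByTimestamp(
  messages: MessageItem[],
  toolCalls: ToolCallItem[],
): ThreadItem[] {
  const out: ThreadItem[] = [];
  let i = 0;
  let j = 0;
  while (i < messages.length && j < toolCalls.length) {
    // On a timestamp tie, surface the tool call first so the message that
    // reports its result follows it, matching the streaming order.
    if (toolCalls[j].createdAt <= messages[i].createdAt) {
      out.push(toolCalls[j++]);
    } else {
      out.push(messages[i++]);
    }
  }
  return out.concat(messages.slice(i), toolCalls.slice(j));
}
```

It works, but it's exactly this second code path that I'd rather not maintain alongside the streaming handler.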
Therefore, I’m convinced that using Convex as a bridge between my client and OpenAI is the right choice for my use case.
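For concreteness, mirroring both messages and tool calls into a single Convex table might look something like the sketch below. The table name, field names, and index are my own assumptions, not anything Convex or OpenAI prescribes:

```typescript
// convex/schema.ts -- hypothetical mirror of OpenAI thread items in Convex.
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  threadItems: defineTable({
    threadId: v.string(), // OpenAI thread id
    openaiId: v.string(), // OpenAI message or tool-call id
    kind: v.union(v.literal("message"), v.literal("tool_call")),
    content: v.string(),
    createdAt: v.number(), // OpenAI created_at, for chronological ordering
  }).index("by_thread", ["threadId", "createdAt"]),
});
```

With a single table like this, one Convex query ordered by that index would give every connected client the same live, chronological feed, regardless of which OpenAI endpoint originally produced each item, which removes the need for the separate merge step.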
(continues below)
