José Alvarez
Convex Community
Created by José Alvarez on 10/18/2024 in #support-community
Can actions actually return a value?
The documentation is not very clear about whether actions can have a return value or not. If they can't, I'd appreciate it if the documentation could acknowledge this clearly. If they can, I'd appreciate a clear example showing how to do it. For example, I would like to have this kind of action:
import OpenAI from "openai";
import { internalAction } from "./_generated/server";

export const createThread = internalAction({
  handler: async () => {
    const apiKey = process.env.OPENAI_API_KEY!;
    const openai = new OpenAI({ apiKey });

    const thread = await openai.beta.threads.create();

    return thread.id; // Also, could I return the thread object?
  },
});
I am not interested in calling this action from the front-end (hence internalAction). In the demos on GitHub (https://github.com/get-convex/convex-demos) I've seen examples of actions being called from the front-end (https://github.com/get-convex/convex-demos/tree/main/vector-search) to get a return value, but no examples of actions returning something in the backend. Indeed, seemingly my only option for running an action in the backend is the scheduler:
const threadId: string = await ctx.scheduler.runAfter(
  0,
  internal.openai.createThread
);
The above schedules the action to run in the future, which is not what I want.
I want to actually await the action and get its return value before executing the rest of the backend code (in this case, a mutation). Is there a way to do this? I also tried to call openai.beta.threads, but the linter complains that this expression is not callable. Originally I tried to just use a normal function, or equivalently, to call openai.beta.threads.create() directly inside my mutation, but I got this:
Uncaught Error: Uncaught Error: Can't use setTimeout in queries and mutations. Please consider using an action. See https://docs.convex.dev/functions/actions for more details.
at fetchWithTimeout [as fetchWithTimeout] (../node_modules/openai/src/core.ts:556:11)
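For reference, the pattern being asked about does exist when the caller is itself an action: an action's ctx exposes runAction (alongside runQuery and runMutation), so one action can await another action's return value directly; mutations don't get runAction, which is why the mutation attempt fails. A minimal sketch, assuming the createThread action above; startConversation is a hypothetical name:

```typescript
import { action } from "./_generated/server";
import { internal } from "./_generated/api";

export const startConversation = action({
  handler: async (ctx) => {
    // Awaits createThread and receives its return value (the thread id).
    const threadId: string = await ctx.runAction(internal.openai.createThread);
    // ...continue with threadId before returning, e.g. persist it
    // via ctx.runMutation(...) if needed.
    return threadId;
  },
});
```

This is a Convex server-side fragment, so it only runs inside a Convex deployment; treat it as a sketch rather than a verified implementation.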
Thank you!
4 replies
Convex Community
Created by José Alvarez on 10/17/2024 in #support-community
Advice on Convex & OpenAI Assistants API: Real-Time Reactivity, Data Redundancy, & Collaborative UX
I came across Ian Macartney’s post, "GPT Streaming With Persistent Reactivity," while exploring patterns for using Convex with OpenAI. Since the post is over a year old, I wanted to ask if the team has any new insights, particularly around collaborative user experiences powered by Convex and the OpenAI Assistants API.

While thinking about it, I felt uncomfortable about the idea of having two sources of truth. The Assistants API already stores a lot of information about threads, messages, and tool calls, and it feels redundant to store the same information in Convex if I can access it through the API. However, as Ian mentioned, browser-based HTTP streaming alone is unreliable for real-time reactivity, especially in a collaborative multi-user environment. A real-time database solution like Convex seems essential to achieve the required synchronization.

With the OpenAI Assistants API, it’s also very annoying that the logic for processing messages (including tool calls) during streaming (as shown in the Quickstart (https://github.com/openai/openai-assistants-quickstart/blob/06fc2d444a5d41b574082080f4c7b2e48156b84f/app/components/chat.tsx#L191)) can’t be reused in later browser sessions, because tool calls and messages come from different OpenAI API endpoints. During the stream, they’re processed together; afterward, they’re separated. I managed to merge them by matching timestamps, but it feels wrong to have two distinct algorithms for handling the same output data.

Therefore, I’m convinced that using Convex as a bridge between my client and OpenAI is the right choice for my use case. (continues below)
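The timestamp-matching merge described above can be sketched generically. A minimal illustration, assuming each message and tool call carries a numeric created_at field (these are hypothetical item shapes for illustration, not the actual OpenAI response types):

```typescript
type Message = { kind: "message"; created_at: number; text: string };
type ToolCall = { kind: "tool_call"; created_at: number; name: string };
type ThreadItem = Message | ToolCall;

// Merge messages and tool calls fetched from separate endpoints into one
// chronological transcript, mirroring the order the live stream produced.
function mergeByTimestamp(
  messages: Message[],
  toolCalls: ToolCall[]
): ThreadItem[] {
  const merged: ThreadItem[] = [...messages, ...toolCalls];
  merged.sort((a, b) => a.created_at - b.created_at);
  return merged;
}
```

The appeal of keeping this in Convex instead is that the merged transcript only has to be computed once, at write time, rather than re-derived in every browser session.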
7 replies
Convex Community
Created by José Alvarez on 9/21/2024 in #support-community
Why is it that file names only appear when uploading through the console?
No description
12 replies