Ali Madooei · 3w ago

Convex Node runtime vs Node

Hi! I have some code that works outside of the Convex runtime. It's short and simple, so I've included it below for reference. When it runs on Convex, I get "Connection lost while action was in flight", and the logs show "Your request couldn't be completed. Try again later." The latter seems like a response from one of the external services I'm using. However, running the same code, with the same API keys, outside of Convex works perfectly fine. Has anyone come across something like this before?
"use node";

import { streamText } from "ai";
import { internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";
import { createMem0 } from "@mem0/vercel-ai-provider";

export const completion = internalAction({
args: {
chatId: v.id("chats"),
prompt: v.string(),
placeholderMessageId: v.id("messages"),
temperature: v.optional(v.number()),
},
handler: async (ctx, args) => {

const mem0 = createMem0({
provider: "openai",
mem0ApiKey: process.env.MEM0_API_KEY as string,
apiKey: process.env.OPENAI_API_KEY as string,
});

const result = streamText({
model: mem0("gpt-4-turbo", {
user_id: args.chatId,
}),
prompt: args.prompt,
temperature: args.temperature || 1,
});

// Update the placeholder message with the full response
// as it comes in from the API
let fullResponse = "";
for await (const delta of result.textStream) {
fullResponse += delta;
await ctx.runMutation(internal.messages.update, {
messageId: args.placeholderMessageId,
content: fullResponse,
});
}
},
});
"use node";

import { streamText } from "ai";
import { internalAction } from "./_generated/server";
import { internal } from "./_generated/api";
import { v } from "convex/values";
import { createMem0 } from "@mem0/vercel-ai-provider";

export const completion = internalAction({
args: {
chatId: v.id("chats"),
prompt: v.string(),
placeholderMessageId: v.id("messages"),
temperature: v.optional(v.number()),
},
handler: async (ctx, args) => {

const mem0 = createMem0({
provider: "openai",
mem0ApiKey: process.env.MEM0_API_KEY as string,
apiKey: process.env.OPENAI_API_KEY as string,
});

const result = streamText({
model: mem0("gpt-4-turbo", {
user_id: args.chatId,
}),
prompt: args.prompt,
temperature: args.temperature || 1,
});

// Update the placeholder message with the full response
// as it comes in from the API
let fullResponse = "";
for await (const delta of result.textStream) {
fullResponse += delta;
await ctx.runMutation(internal.messages.update, {
messageId: args.placeholderMessageId,
content: fullResponse,
});
}
},
});
Convex Bot · 3w ago
Thanks for posting in <#1088161997662724167>. Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
ballingt · 3w ago
@Ali Madooei Do you not get any errors in the dashboard logs? Since you're using "use node", this is running on AWS Lambda. This might work fine in the Convex actions runtime, which is what you'd get if you remove "use node".
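For illustration, here's what that change looks like: the same action without the "use node" directive (a minimal sketch, assuming the ai and @mem0/vercel-ai-provider packages only rely on fetch and other web APIs, which the default runtime provides; Node-only dependencies would still require "use node"):

// Same action without "use node", so it runs in Convex's default actions
// runtime rather than AWS Lambda. Sketch only: the message-update loop is
// dropped here to keep the runtime difference in focus.
import { streamText } from "ai";
import { internalAction } from "./_generated/server";
import { v } from "convex/values";
import { createMem0 } from "@mem0/vercel-ai-provider";

export const completion = internalAction({
  args: {
    chatId: v.id("chats"),
    prompt: v.string(),
  },
  handler: async (_ctx, args) => {
    const mem0 = createMem0({
      provider: "openai",
      mem0ApiKey: process.env.MEM0_API_KEY as string,
      apiKey: process.env.OPENAI_API_KEY as string,
    });

    const result = streamText({
      model: mem0("gpt-4-turbo", { user_id: args.chatId }),
      prompt: args.prompt,
    });

    // Collect the streamed text and return it once the stream completes.
    let text = "";
    for await (const delta of result.textStream) {
      text += delta;
    }
    return text;
  },
});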
