OpenAI Agent and Persistent-text-streaming with tool calls

I'm building a chat application using Convex with streaming responses. I have a streamChat HTTP action that currently works with a simple agent, but I need to integrate a more complex agent (unified_agent) that requires access to tools running in Node.js runtime (specifically Slack SDK tools).

Current Setup:
- streamChat is an HTTP action in Convex
- I have a unified_agent that needs to use Slack tools requiring Node.js runtime
- The agent needs to access conversation history and maintain context
- I'm using the @openai/agents library's run() function

The Problem:
1. Runtime Mismatch: HTTP actions in Convex run in the V8 runtime, but my Slack tools need Node.js runtime
2. Agent Integration: The unified_agent requires Node.js tools, but I'm calling it from an HTTP action
3. Context Preservation: I need to maintain conversation history while allowing the agent to use external tools

Specific Questions:
1. How can I structure this so that the HTTP action can trigger an agent that uses Node.js runtime tools?
2. What's the best pattern for combining V8 and Node.js runtimes in Convex when using agents with external tool dependencies?
3. Should I split this into separate actions/queries, or is there a way to bridge the runtime gap within the agent workflow?
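For context, here is a minimal sketch of the kind of streamChat setup described above, using a Convex HTTP action and the @openai/agents run() function. The agent definition, the non-streaming response, and every name besides streamChat are illustrative assumptions, not the project's actual code.

```ts
// Sketch of a simple-agent streamChat HTTP action (default V8 runtime).
import { httpAction } from "./_generated/server";
import { Agent, run } from "@openai/agents";

// A stand-in for the current "simple agent"; unified_agent and its Slack
// tools are not shown, since those are what need the Node.js runtime.
const simpleAgent = new Agent({
  name: "simple_agent",
  instructions: "Answer the user's question.",
});

export const streamChat = httpAction(async (ctx, request) => {
  const { prompt } = await request.json();

  // Works while the agent only needs V8-safe tools.
  const result = await run(simpleAgent, prompt);

  return new Response(result.finalOutput ?? "", {
    headers: { "Content-Type": "text/plain" },
  });
});
```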
TanishqxSharma (OP) · 3w ago
[attachment, no description]
Eliot Gevers · 3w ago
The AI Agent component (by Convex) is probably the best route for you. It handles saving messages/threads, and you can easily define your Node-environment actions by creating a file like nodeActions.ts and adding "use node" at the top of it. Convex will make sure those actions run in Node, not its custom runtime. The agent component also takes care of streaming the data down.
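For illustration, a Node-runtime action file along the lines Eliot describes might look like this. It is only a sketch: the file name follows the nodeActions.ts suggestion above, while the Slack call, the argument names, and the SLACK_BOT_TOKEN environment variable are assumptions.

```ts
"use node"; // must be the first statement so Convex runs this file in Node.js

// convex/nodeActions.ts — a Node-only action wrapping the Slack SDK.
import { action } from "./_generated/server";
import { v } from "convex/values";
import { WebClient } from "@slack/web-api";

export const postToSlack = action({
  args: { channel: v.string(), text: v.string() },
  handler: async (_ctx, { channel, text }) => {
    // The Slack SDK needs Node.js APIs, which is why this sits behind "use node".
    const slack = new WebClient(process.env.SLACK_BOT_TOKEN);
    await slack.chat.postMessage({ channel, text });
  },
});
```

Actions and HTTP actions defined in default-runtime files can then invoke this with ctx.runAction, which is what lets code running outside Node reach the Slack tooling.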
TanishqxSharma (OP) · 3w ago
I used it; the agent-to-agent handoff and then creating it as a tool ("agentAsATool") is too complicated for my case and the workflows I'm thinking of implementing in the future. Is that going to be an issue if I go ahead with OpenAI agents only?
ian · 3w ago
If you haven't tried it in a while, there are a bunch of new docs that hopefully make it clearer. Agent-to-agent handoff is straightforward when there is a single thread. The agent-as-a-tool pattern is more advanced and only for when you need many steps in a separate thread but only want to record one step in the top-level thread.
TanishqxSharma (OP) · 3w ago
I was just going through the docs and ran the example locally. I'll check whether it covers it or not. We have multiple orchestrations with agents, to and fro. If I can use openai.responses in the persistent-text-streaming component, I won't have to dive much deeper into this.
ian · 3w ago
One hiccup is that you can't do an HTTP stream from a Node action at the moment. It sounds like the Agent component could do with a more flexible API to add streaming data without using the AI SDK. Node actions can call mutations/queries/default actions, and default actions can do the same; you just can't put default actions/mutations/queries in "use node" files.
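Putting those constraints together, one hedged way to bridge the gap is to keep the HTTP response (and any streaming) in a default-runtime HTTP action and push the Slack-specific work into the Node action via ctx.runAction. The route path and the nodeActions.postToSlack reference below are assumptions carried over from the earlier sketch.

```ts
// convex/http.ts (default V8 runtime) — the HTTP response has to live here,
// since HTTP streaming isn't available from "use node" actions.
import { httpRouter } from "convex/server";
import { httpAction } from "./_generated/server";
import { api } from "./_generated/api";

export const streamChat = httpAction(async (ctx, request) => {
  const { channel, prompt } = await request.json();

  // Bridging the runtimes: a default-runtime HTTP action can call a
  // Node.js action, even though streaming from Node isn't possible.
  await ctx.runAction(api.nodeActions.postToSlack, {
    channel,
    text: `Agent is handling: ${prompt}`,
  });

  // Real code would stream the agent's output back instead of a static body.
  return new Response("ok", { headers: { "Content-Type": "text/plain" } });
});

const http = httpRouter();
http.route({ path: "/streamChat", method: "POST", handler: streamChat });
export default http;
```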
