OpenAI Realtime API + Convex - Any solutions?
Hey everyone! 👋
I'm working on real-time speech-to-text with OpenAI's Realtime API (wss://api.openai.com/v1/realtime) in a React Native app with a Convex backend.
Since Convex functions can't hold open an outbound WebSocket connection natively, I'm wondering:
Has anyone found a way to integrate real-time WebSocket APIs with Convex?
Current challenge: I need to stream audio to OpenAI's WebSocket endpoint and get live transcription results back into the Convex database.
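For context, the wire format I'm dealing with is just JSON events over the socket. One workaround I've seen discussed (a sketch, not an official Convex pattern) is a small relay process that holds the OpenAI socket and forwards transcript deltas into Convex via a mutation. These helpers build the client events; the event names come from OpenAI's Realtime API docs, but the surrounding relay architecture is an assumption on my part:

```typescript
// Sketch: builders for the JSON client events the Realtime API expects over
// the WebSocket. Event names ("session.update", "input_audio_buffer.append")
// are from OpenAI's Realtime docs; the relay that would send these and write
// transcripts to Convex is assumed, not shown.

// Configure the session to transcribe incoming audio (pcm16 input assumed).
export function sessionUpdateEvent(): string {
  return JSON.stringify({
    type: "session.update",
    session: {
      input_audio_format: "pcm16",
      input_audio_transcription: { model: "whisper-1" },
    },
  });
}

// Append one chunk of raw PCM audio, base64-encoded as the API requires.
export function audioAppendEvent(pcm: Buffer): string {
  return JSON.stringify({
    type: "input_audio_buffer.append",
    audio: pcm.toString("base64"),
  });
}
```

The relay would send `sessionUpdateEvent()` once after connecting, then `audioAppendEvent(chunk)` for each audio chunk off the mic, and listen for the server's transcription-completed events to push into Convex.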
Questions:
- Any patterns for real-time audio processing with Convex?
- Alternative approaches you've used?
- Should I just fall back to chunked Whisper API calls?
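On that last bullet: if I do fall back to chunked Whisper calls, the audio side reduces to slicing the PCM stream into fixed-length pieces, with a little overlap so words at chunk boundaries aren't lost. A minimal sketch, with chunk and overlap sizes picked purely for illustration:

```typescript
// Sketch: split 16 kHz, 16-bit mono PCM into ~chunkSeconds pieces with a
// small overlap, so each piece can be sent to the non-realtime Whisper
// transcription endpoint. All sizes here are illustrative, not tuned values.
const SAMPLE_RATE = 16_000;
const BYTES_PER_SAMPLE = 2; // pcm16

export function chunkPcm(
  pcm: Buffer,
  chunkSeconds = 5,
  overlapSeconds = 0.5,
): Buffer[] {
  const chunkBytes = chunkSeconds * SAMPLE_RATE * BYTES_PER_SAMPLE;
  const stepBytes =
    chunkBytes - overlapSeconds * SAMPLE_RATE * BYTES_PER_SAMPLE;
  const chunks: Buffer[] = [];
  for (let start = 0; start < pcm.length; start += stepBytes) {
    // subarray clamps to the buffer end, so the tail chunk may be shorter.
    chunks.push(pcm.subarray(start, start + chunkBytes));
    if (start + chunkBytes >= pcm.length) break;
  }
  return chunks;
}
```

Each chunk could then be wrapped in a WAV header and posted to the Whisper endpoint from a Convex Node action, with results written back via a mutation.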
Using React Native + expo-audio. Would love to hear your solutions!
Thanks! 🙏
1 Reply
Thanks for posting in <#1088161997662724167>.
Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!