nahtnam · 3w ago

I assume if my app is AI-heavy, I'd spend a lot of compute GB-hours waiting for responses? Is there some workaround for that?
nahtnam (OP) · 3w ago
To be more specific, it's not a chat where the data gets streamed to the user. More like: a request gets submitted, processed via AI, and stored in the DB. But they are pretty heavy requests; they can take a couple of minutes.
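
A minimal sketch of that submit → process → store pattern as a Convex action. The `api.results.store` mutation name and the OpenAI-style endpoint are assumptions for illustration, not something from this thread:

```ts
// convex/process.ts — minimal sketch, assuming a hypothetical
// `api.results.store` mutation and an OpenAI-style chat endpoint.
import { action } from "./_generated/server";
import { api } from "./_generated/api";
import { v } from "convex/values";

export const process = action({
  args: { prompt: v.string() },
  handler: async (ctx, { prompt }) => {
    // This request can take minutes; the action spends almost all of
    // that time awaiting the network response, not burning CPU.
    const res = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4o",
        messages: [{ role: "user", content: prompt }],
      }),
    });
    const data = await res.json();
    // Store the finished result in the database.
    await ctx.runMutation(api.results.store, {
      prompt,
      output: data.choices[0].message.content,
    });
  },
});
```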
wes · 2w ago
Waiting for an LLM response does not use billable CPU compute time. I might be confusing this with Cloudflare though, so let me double-check.
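
The general idea: Node exposes CPU time separately from wall-clock time, which is roughly how a runtime can meter only actual compute. A rough sketch in plain Node (not Convex's actual metering; the endpoint URL is made up):

```ts
// Rough illustration: CPU time vs. wall-clock time around an awaited
// network call. While the fetch is pending, the function is suspended
// and almost no CPU time accumulates, even if the wall clock runs for
// minutes. The endpoint below is a placeholder.
async function demo() {
  const wallStart = Date.now();
  const cpuStart = process.cpuUsage();

  await fetch("https://example.com/slow-llm-endpoint");

  const wallMs = Date.now() - wallStart;
  const cpu = process.cpuUsage(cpuStart); // diff since cpuStart, in microseconds
  const cpuMs = (cpu.user + cpu.system) / 1000;
  console.log(`wall: ${wallMs} ms, cpu: ${cpuMs.toFixed(1)} ms`);
}
demo();
```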
nahtnam (OP) · 2w ago
Do I have to use Convex's LLM library? How does it even tell that it's not using CPU?
