λx.1 · 2mo ago

Network latency: Pro vs Free

I am based in Germany and am a little lost on something. Initially we used the free plan, and the DX and everything about it was superb. We are now about to release our app. While testing we experienced some sluggishness in network response times on our end and shrugged it off as "free tier for now, we will go Pro anyway." Long story short: we upgraded and hoped to see some improvement. Our expectation was that dev deployments are allotted fewer resources and prod would most likely be fine. Nothing changed, though, not even for the simplest queries, where we retrieve a doc directly by id. I created a simple testbench that calls some of our functions and timed the query/mutation/action calls, and I get the following result:
--- Starting Convex API Testbench ---

1. Checking if user '<redacted>' has 'read' access to project '<redacted>'...
'check_user_access' took 0.7853s
=> Access granted: True

2. Fetching user info for WorkOS ID '<redacted>'...
'get_user_by_workos_id' took 0.7779s
=> User Info: id=<redacted>, email=<redacted>

3. Fetching project details for project ID '<redacted>'...
'get_project_by_project_id' took 0.6383s
=> Project Details: {<redacted>}

4. Updating project '<redacted>'...
'update_project_by_id' took 0.7938s
=> Successfully updated project. New name: New Test Name - <redacted>

5. Fetching Stripe Customer ID for tenant '<redacted>'...
'get_customer_by_tenant' took 0.7705s
=> Customer ID: <redacted>

6. Fetching subscription ID for tenant '<redacted>'...
'get_subscription_by_tenant' took 0.7751s
=> Subscription ID not found (returned None).

7. Getting assets for project ID '<redacted>'...
'get_assets' took 0.7834s
=> Assets: <redacted>

--- Testbench Finished ---
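
For reference, the testbench is essentially a thin timing wrapper around the Python convex client. Here is a minimal sketch of the idea, assuming a placeholder deployment URL and hypothetical function paths derived from the labels in the log above (our real wiring is omitted):

import time

from convex import ConvexClient

client = ConvexClient("https://<deployment>.convex.cloud")  # placeholder URL

def timed(label, call, name, args=None):
    # Wrap a client.query / client.mutation call and print its wall time.
    start = time.perf_counter()
    result = call(name, args or {})
    print(f"'{label}' took {time.perf_counter() - start:.4f}s")
    return result

# Hypothetical function paths matching the labels in the log above.
user = timed("get_user_by_workos_id", client.query,
             "users:getUserByWorkosId", {"workosId": "<redacted>"})
timed("update_project_by_id", client.mutation,
      "projects:updateProjectById",
      {"projectId": "<redacted>", "name": "New Test Name"})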
The associated execution times are shown in the attached screenshot. This is in prod, with Python, and consistent across multiple runs. I also did the same using the Rust ConvexClient and saw better, but still unsatisfying, latency numbers. I will continue in my next message.
λx.1 (OP) · 2mo ago
We initially thought this might be attributable to how the Python Convex client is written (and the Rust one), so my mate spun up a local dev deployment, where everything was instant. That more or less ruled out our suspicion that the client implementations were responsible for the delay. Running npx convex network-test gives me the following results:
✔ Deployment URL: https://<...>.convex.cloud
✔ OK: DNS lookup => 52.44.230.118:ipv4 (6.6ms)
✔ OK: TCP connect (107.6ms)
✔ OK: TCP connect (110.51ms)
✔ OK: HTTP check (213.15ms)
✔ OK: HTTPS check (319.7ms)
✔ OK: WebSocket connection established.
✔ OK: echo 128 B (107.5ms, 1.2 KB/s)
✔ OK: echo 4.0 MB (4.23s, 967.2 KB/s)
✔ Network test passed.
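
As a cross-check against the "HTTPS check" line above, a bare HTTPS round trip can be timed with just the Python stdlib. A small sketch, where the deployment URL is a placeholder and any HTTP status counts, since even an error response completes a full round trip:

import time
import urllib.error
import urllib.request

url = "https://<deployment>.convex.cloud/"  # placeholder deployment URL
start = time.perf_counter()
try:
    urllib.request.urlopen(url, timeout=10).read()
except urllib.error.HTTPError:
    pass  # a non-2xx response still completes a full HTTPS round trip
print(f"HTTPS round trip: {(time.perf_counter() - start) * 1000:.1f} ms")

Note that this single request includes DNS lookup, TCP connect, and the TLS handshake, so it is closest to the aggregate HTTPS check figure rather than to a warm request.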
These numbers are more along the lines of what I'd anticipated for the queries we used. Any ideas or advice? And to get back to the title of this question: is it even fair to expect a latency difference between the Free and Pro plans?
ampp · 2mo ago
So if I'm reading things right, the issue you see is with establishing the first connection, not with the query/mutation time after a WebSocket connection is established? I haven't spent any time with Python.
λx.1 (OP) · 2mo ago
No worries, I set this post to resolved, since it turned out to be related to how Python handled multiprocessing, and I was just way too deep into the night to catch what is, in hindsight, an obvious mistake. We are hitting 100-120 ms response times now, which aligns with the physics of information propagation (: and with our expectations. And yes, ultimately it was the first request issued via the WebSocket; when I wrote the original post I did not account for the fact that the measured delay incorporated WebSocket connection RTT + query RTT + Convex function execution time. All good 💪 Convex rulezz 🚀
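
For anyone landing here later, a hedged sketch of the takeaway rather than the exact bug: keep one long-lived client per process and time a warm call separately, so the WebSocket handshake is not attributed to the first query. The deployment URL and function path below are placeholders:

import time

from convex import ConvexClient

# One long-lived client per process: the connection is established once
# and reused, instead of being paid for on every call.
client = ConvexClient("https://<deployment>.convex.cloud")

def ms(fn):
    # Return the wall time of a call in milliseconds.
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000

q = lambda: client.query("projects:getProjectByProjectId",
                         {"projectId": "<redacted>"})

# Cold call: includes connection setup RTT + query RTT + execution time.
print(f"cold: {ms(q):.0f} ms")
# Warm call: query RTT + execution only -- the ~100-120 ms figure above.
print(f"warm: {ms(q):.0f} ms")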
