Laggy Data View / table
My MVP dashboard (a database product) shows 2.1 GB of memory usage for just 175 columns (10 indexed) and 102 rows, and scrolling through the data feels very laggy and janky.
Even after scrolling to the end, scrolling back up is still laggy!
Or are 175 columns too much?
Gaming laptop: 16 GB RAM, Ryzen 5 8xxx CPU.
I can share my URL with the Convex team if needed.
What Claude 4.5 said: https://claude.ai/share/e39495d8-7dbb-4835-924d-003f3c1c7fce
Screenshot / gif in comment:
React DevTools Highlight updates when components render
Related source code in the Convex repo:
7 Replies
Thanks for posting in <#1088161997662724167>.
Reminder: If you have a Convex Pro account, use the Convex Dashboard to file support tickets.
- Provide context: What are you trying to achieve, what is the end-user interaction, what are you seeing? (full error message, command output, etc.)
- Use search.convex.dev to search Docs, Stack, and Discord all at once.
- Additionally, you can post your questions in the Convex Community's <#1228095053885476985> channel to receive a response from AI.
- Avoid tagging staff unless specifically instructed.
Thank you!
gif: React DevTools Highlight updates when components render (duration: 50 seconds)

Looks like rendering does get slow if you have a lot of columns. We'll look into improving that.
> Or are 175 columns too much?
Yeah, 175 is pretty high IMO. There's gotta be some way to split that data across multiple tables.
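For illustration, here's a minimal sketch of what that kind of split could look like in a Convex schema, using defineSchema/defineTable from convex/server. The table and field names (products, productDetails, attributes) are hypothetical, not from the original project:
```ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  // Hot fields you list and filter on stay as top-level columns.
  products: defineTable({
    name: v.string(),
    price: v.number(),
    // Rarely queried attributes nested into one object instead of
    // 150+ top-level fields, so the data view renders fewer columns.
    attributes: v.optional(v.record(v.string(), v.any())),
  }),
  // Or: split cold fields into a companion table, joined by ID.
  productDetails: defineTable({
    productId: v.id("products"),
    specs: v.any(),
  }).index("by_product", ["productId"]),
});
```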
I will wait.
Is it possible to make a desktop client? (Hard)
Just like Redis Insight: https://redis.io/insight/
Or a Google Sheets extension, so we can sync to a sheet? (Somewhat easy)
I exported with npx convex export --path ~/Downloads, converted the JSONL to CSV, and imported it into Google Sheets.
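For reference, a minimal Node/TypeScript sketch of that JSONL-to-CSV step; the documents.jsonl / documents.csv file names are placeholders for whatever the export actually wrote:
```ts
import { readFileSync, writeFileSync } from "node:fs";

// Parse one JSON document per line (the JSONL format the export produces).
const lines = readFileSync("documents.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim() !== "");
const docs: Record<string, unknown>[] = lines.map((line) => JSON.parse(line));

// The union of all field names becomes the CSV header row, since
// documents in the same table may have different fields.
const headers = [...new Set(docs.flatMap((doc) => Object.keys(doc)))];

// Quote every cell and escape embedded quotes per RFC 4180;
// nested objects are serialized as JSON strings.
const cell = (value: unknown) => {
  const text =
    typeof value === "object" && value !== null
      ? JSON.stringify(value)
      : String(value ?? "");
  return `"${text.replace(/"/g, '""')}"`;
};

const csv = [
  headers.map(cell).join(","),
  ...docs.map((doc) => headers.map((h) => cell(doc[h])).join(",")),
].join("\n");

writeFileSync("documents.csv", csv);
```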
Butter smooth. 🧈
100 rows, 500 MB memory usage / tab size.
Then I copied the 100 rows a few times, up to 8000 rows.
Still smooth.

We've limited the number of columns rendered by default, so it should feel quite a bit smoother now. We probably won't spend too much more time optimizing this use case for now, because it's not too common for folks to have tables with many fields (denormalizing or nesting documents is usually a good approach).
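The dashboard's actual code isn't shown in this thread, but as a rough sketch, "limit the columns rendered by default" usually looks something like this in React/TypeScript; DEFAULT_COLUMN_LIMIT and the DataTable component are hypothetical names, not Convex's real implementation:
```tsx
import { useMemo, useState } from "react";

const DEFAULT_COLUMN_LIMIT = 25; // hypothetical cap, not Convex's real number

type Props = {
  columns: string[];
  rows: Record<string, unknown>[];
};

export function DataTable({ columns, rows }: Props) {
  const [showAll, setShowAll] = useState(false);

  // Render only the first N columns unless the user opts in to all of
  // them, keeping DOM size (and React reconciliation work) bounded.
  const visibleColumns = useMemo(
    () => (showAll ? columns : columns.slice(0, DEFAULT_COLUMN_LIMIT)),
    [columns, showAll],
  );

  return (
    <>
      {!showAll && columns.length > DEFAULT_COLUMN_LIMIT && (
        <button onClick={() => setShowAll(true)}>
          Show all {columns.length} columns
        </button>
      )}
      <table>
        <thead>
          <tr>
            {visibleColumns.map((col) => (
              <th key={col}>{col}</th>
            ))}
          </tr>
        </thead>
        <tbody>
          {rows.map((row, i) => (
            <tr key={i}>
              {visibleColumns.map((col) => (
                <td key={col}>{String(row[col] ?? "")}</td>
              ))}
            </tr>
          ))}
        </tbody>
      </table>
    </>
  );
}
```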