How scalable is Convex really?
E.g. Facebook has 383 million users ...
Searching a table of 383 million user entries must be horrible in terms of performance, so they probably partition their user table and do all kinds of other optimizations.
With Convex, on the other hand, who does that optimization? Is the backend clever enough to just do it all by itself? Are we as developers in charge of making such big tables work (and if so, how)? Or does the Convex team manually optimize projects as soon as they hit a certain size and generate good revenue for the company?
I watched this YouTube video, "How Convex Works" (https://www.youtube.com/watch?v=3d29eKJ2Vws), which explains that Convex stores a lot of metadata and doesn't use plain tables...
So I'm curious: does this overhead slow it down, compared to plain MySQL or PostgreSQL, to the point where it can't handle 383 million users?
Also, I found this interesting limit on the db: "Documents scanned: 32,000 (Documents not returned due to a filter count as scanned)"
Does this mean we can't query tables bigger than 32,000 documents?
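
To make the question concrete, here's a minimal sketch of the two query styles as I understand them from the docs. The `users` table, its `email` field, and the `by_email` index are hypothetical names I made up for illustration (the index would have to be declared in `convex/schema.ts`):

```ts
// convex/users.ts
import { query } from "./_generated/server";
import { v } from "convex/values";

// Full scan: .filter() examines documents one by one, so every document
// it looks at (returned or not) counts toward the 32,000 scanned limit.
export const findByEmailScan = query({
  args: { email: v.string() },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("users")
      .filter((q) => q.eq(q.field("email"), args.email))
      .first();
  },
});

// Indexed lookup: .withIndex() jumps straight to the matching rows, so
// (if I understand correctly) only those rows count as scanned.
export const findByEmailIndexed = query({
  args: { email: v.string() },
  handler: async (ctx, args) => {
    return await ctx.db
      .query("users")
      .withIndex("by_email", (q) => q.eq("email", args.email))
      .first();
  },
});
```

Is that the intended pattern, i.e. the 32,000 limit is only a problem for the first style, and the answer for big tables is "always use an index or pagination"?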

