pagination filtering
The solution to these issues (for us, users), imho, is adding an optional extra filtering ability to the paginate function, so we can do something like this:
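(rough illustration of the idea, a made-up signature rather than the actual Convex API, with hypothetical table and field names:)

```ts
import { paginationOptsValidator } from "convex/server";
import { query } from "./_generated/server";

// Hypothetical, NOT the current Convex API: an optional JS predicate passed to
// .paginate(), applied server-side before the page is returned, so the cursor
// and the page size stay consistent. Table and field names are made up.
export const urgentVitals = query({
  args: { paginationOpts: paginationOptsValidator },
  handler: async (ctx, { paginationOpts }) => {
    return await ctx.db
      .query("vitals")
      .order("desc")
      .paginate(paginationOpts, {
        filter: (doc) => doc.spo2 < 90 && doc.heartRate > 100,
      });
  },
});
```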
In this case, we'll end up with a correct cursor on the client, and mostly stable page size.
Maybe it would be very much like the second option, but done by Convex internally, i'm not sure 🤷‍♂️
wdyt?
you can combine filtering and pagination
and also indexes (including full-text-search indexes)
but it doesn't support arbitrary javascript in the filter. we've been considering adding something like that. thanks for the feedback!
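for example, something along these lines (a minimal sketch with made-up table and field names):

```ts
import { v } from "convex/values";
import { paginationOptsValidator } from "convex/server";
import { query } from "./_generated/server";

// The index narrows the scan, .filter() narrows it further (field comparisons
// only, no arbitrary JS), and .paginate() returns one page plus a cursor.
export const listByChannel = query({
  args: { channelId: v.id("channels"), paginationOpts: paginationOptsValidator },
  handler: async (ctx, { channelId, paginationOpts }) => {
    return await ctx.db
      .query("messages")
      .withIndex("by_channel", (q) => q.eq("channelId", channelId))
      .filter((q) => q.neq(q.field("hidden"), true))
      .paginate(paginationOpts);
  },
});
```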
yes, i know that you can do simple filtering with .paginate()
out of curiosity, what kinds of filters are you doing? we might add features to the existing .filter (because it gives us more information about what fields matter, for subscription purposes), and we'd love to have more information about common use-cases.
Great thoughts, thanks for sharing.
You might be interested in a custom hook on the client side. usePaginatedQuery uses basic queries and stitches pages together. You could make a similar one that over-fetches and returns standard sizes, or transparently continues to fetch when no results come back. It could wrap or replace usePaginatedQuery. If there's a generically useful API for it, I'd happily consider doing a Stack post on it and adding it to convex-helpers even if we don't upstream the options into our official one.
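Very rough sketch of the shape I mean (untested, names made up, not an official helper):

```ts
import { useEffect } from "react";
import { usePaginatedQuery } from "convex/react";

// Wrap usePaginatedQuery, filter the stitched results on the client, and keep
// calling loadMore until we have at least `targetPageSize` items or the query
// is exhausted.
export function useFilteredPaginatedQuery<T>(
  query: any,
  args: any,
  predicate: (item: T) => boolean,
  targetPageSize: number
) {
  const { results, status, loadMore } = usePaginatedQuery(query, args, {
    initialNumItems: targetPageSize,
  });
  const filtered = (results as T[]).filter(predicate);

  useEffect(() => {
    // Over-fetch transparently whenever filtering leaves us short of a page.
    if (filtered.length < targetPageSize && status === "CanLoadMore") {
      loadMore(targetPageSize);
    }
  }, [filtered.length, status, loadMore, targetPageSize]);

  return { results: filtered, status };
}
```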
@Lee basically, everything that requires so-called "computed" field data.
It seems like the source of this search for hacks lies in the lack of triggers in Convex. For me, triggers were/are the best feature of Firestore (or even the whole Firebase).
Imagine a healthcare app with patients submitting their vitals. Agents on the "frontline" should be able to see and react to the most urgent cases, like SpO2 falling below 90% and heart rate > 100 bpm, or something like this.
Ideally, we'd want to trigger an alert in this case. But we'd also like to bump that incident to the top of the agent's dashboard (that's a dynamic priority), and we'd like to filter out "normal" cases, or at least show them somewhere below the "urgent" ones.
And this is just the beginning of the dynamism required for a query like this. Each user might have their own custom "alert" thresholds for these cases... and so on and so on.
this is a very high-level scenario inspired by a real app
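to make it concrete, today something like this ends up as a JS filter after the read, outside of .paginate() (made-up table/field names, just a sketch):

```ts
import { v } from "convex/values";
import { query } from "./_generated/server";

// Per-agent thresholds force an arbitrary JS predicate, which today has to run
// after the DB read, so it can't take part in .paginate().
export const urgentCases = query({
  args: { agentId: v.id("agents") },
  handler: async (ctx, { agentId }) => {
    const agent = await ctx.db.get(agentId);
    const thresholds = agent?.thresholds ?? { spo2: 90, heartRate: 100 };
    const recent = await ctx.db.query("vitals").order("desc").take(200);
    return recent.filter(
      (d) => d.spo2 < thresholds.spo2 && d.heartRate > thresholds.heartRate
    );
  },
});
```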
yes, there are options to solve this one way or another, but that's the opposite of what I think Convex is about
you wrote about this here https://stack.convex.dev/convex-vs-firebase#end-to-end-correctness-philosophy
"We don't just want it to be possible to build a correct app but we want it to be impossible not to build your app correctly. Developers should fall into the pit of success."
and i think that's beautiful
@Dima Utkin triggers on subscriptions are absolutely a missing feature and already on our roadmap, so the good news is that it will happen
(and convex will magically become an awesome workflow orchestration framework)
my guess is that we won't get this done in the next two months but soon after that. your message is a good motivation
@james that's awesome ❤️
In the meantime, if you want to have transactionally consistent "computed" data, here's what I would personally do (probably what you already had in mind but just in case):
1. Figure out what fields you want to query and sort over (e.g. a boolean for SpO2<90%&&HR>100, or a rolling average of XYZ).
2. Add the fields to your schema, with indexes on them so you can efficiently sort them to the top for an agent or filter out noise. They'll start out optional since historical data doesn't exist for them. At this point you can start optimistically using these indexes to filter and sort, for pagination and otherwise.
3. Define a function that takes in the raw data and produces data with the computed fields. Use this function any time you're inserting or updating the data.
4. Write a mutation to compute historical data in batches. You can run this whenever you change your definition of what derived fields you want to exist (see the sketch after this list).
5. Once all historical data has the computed fields, you make them required to avoid accidentally writing incomplete data. At this point the filters and indexes can be trusted to have complete data.
Extra credit: You could write a layer wrapping the DB that enforces this on a table-by-table basis.
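Rough sketch of steps 2-4, with made-up table and field names (a "vitals" table with an isUrgent computed field):

```ts
// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema({
  vitals: defineTable({
    patientId: v.id("patients"),
    spo2: v.number(),
    heartRate: v.number(),
    // Computed field: optional until historical rows are backfilled (step 5).
    isUrgent: v.optional(v.boolean()),
  }).index("by_urgency", ["isUrgent"]),
});
```

```ts
// convex/vitals.ts
import { v } from "convex/values";
import { mutation } from "./_generated/server";

// Step 3: one place that derives the computed fields from the raw data.
function withComputedFields(raw: { spo2: number; heartRate: number }) {
  return { ...raw, isUrgent: raw.spo2 < 90 && raw.heartRate > 100 };
}

export const submitVitals = mutation({
  args: { patientId: v.id("patients"), spo2: v.number(), heartRate: v.number() },
  handler: async (ctx, { patientId, ...raw }) => {
    await ctx.db.insert("vitals", { patientId, ...withComputedFields(raw) });
  },
});

// Step 4: backfill historical rows in batches; rerun until it returns 0, and
// run it again whenever you add or redefine derived fields.
export const backfillComputedFields = mutation({
  args: {},
  handler: async (ctx) => {
    const batch = await ctx.db
      .query("vitals")
      .filter((q) => q.eq(q.field("isUrgent"), undefined))
      .take(100);
    for (const doc of batch) {
      await ctx.db.patch(doc._id, {
        isUrgent: doc.spo2 < 90 && doc.heartRate > 100,
      });
    }
    return batch.length;
  },
});
```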
Thanks @ian for the deep dive!
Yes, you can hack around and make things work with virtually any tool at hand. But in my case, this is just me reflecting on real-world scenarios, trying to give you folks more ideas for Convex 1.0 and beyond
Planning everything upfront works, even if it requires some hacking, but things might fall apart with a new set of requirements... and those are coming... always
And those changes usually require two things: migrations and/or more powerful and dynamic queries.
Anyway! These are just my thoughts to ignite your fight against SQL