Designing Systems for the Worst Case
New Databased Episode -- Five weird tricks for creating robust systems
James and I talk through some of the principles/idioms we've leaned on over the years building things together, including at Convex.
If you want to geek out on software design and all that stuff, give it a listen! If you think we got something wrong, let us know in the 🧵.
https://www.youtube.com/watch?v=RcofAOF0Icc
In this episode of Databased, Jamie Turner and James Cowling explore critical principles of systems design, emphasizing the importance of preparing for worst-case scenarios rather than best-case outcomes. They dive into the concept of congestion collapse, illustrating how systems can fail under pressure and the need for robust designs that maint...
Discuss here! What do you think?
Great episode! Some thoughts:
+ Really enjoyed this one. It hit many of the points and pains that I have personally experienced, so it really resonated with me.
+ You guys did a great job of pre-empting my next question throughout the episode too, for example by giving examples when something wasn't 100% clear.
+ You guys briefly mention that you are both skeptical about Unit Tests; I would love to hear more on this. @jamwt talks about this a bit more at the end, but I would love to hear how this applies particularly to Convex queries and mutations, and what your thoughts are in general too.
+ I loved the discussion on TypeScript and red squiggly lines; it made me wonder if we are heading to a future where we can have "red squiggly lines" for your entire system.
This makes me think about the recent Lex podcast with the Cursor team (https://youtu.be/oFfVt3S51T4?t=3049) where they talk about agents and shadow workspaces (https://www.cursor.com/blog/shadow-workspace). TL;DR: you could have an AI agent continually running in the background investigating potential issues with your system.
+ Throughout the episode I was continually thinking how one could take clips of the points raised and then show how Convex specifically helps with and/or solves the issue raised 🙂
thanks for the feedback!
It was fun hearing about your past experiences
The point about prioritizing fixing corrupt data and legacy workarounds was interesting. Maybe that is even more important in the era of programming with LLMs and remote work
I'm going to take the transcript of this, load it into context with an entire codebase, and ask Claude to give real-world implementation schematics based on the philosophy/principles discussed
Specifically interested to see the suggested verifier workflows to maintain data/systems integrity over time
Excellent show as usual
I'll also scaffold towards it answering how Convex can be uniquely leveraged to maneuver around the discussed issues
> You guys briefly mention that you are both skeptical about Unit Tests

unit tests tend to test the most easily verifiable parts of systems, and empirically they are usually not what catches why things go wrong in practice (the relationships of components in the large)
and if you have some code that feels like a good fit for unit tests, it's probably b/c it's algorithmically interesting in terms of inputs or outputs. if that's the case, I highly suggest randomized property testing instead, using something like QuickCheck https://en.wikipedia.org/wiki/QuickCheck for decent coverage
manually anticipating test cases is really hard; stating output invariants is more effective in practice: state the invariants, then let "the machine" blitz through millions of cases and corner cases to test them
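To make that concrete, here's a minimal sketch of the idea in plain Python with only the standard library (a stand-in for a real tool like QuickCheck or Hypothesis, which do the input generation and shrinking for you). The function and invariants here are hypothetical examples, not anything from the episode:

```python
import random

def dedupe_sorted(xs):
    """Function under test: return the input sorted with duplicates removed."""
    return sorted(set(xs))

def check_invariants(xs, out):
    # Invariant 1: output is sorted
    assert all(a <= b for a, b in zip(out, out[1:]))
    # Invariant 2: output has no duplicates
    assert len(out) == len(set(out))
    # Invariant 3: output has exactly the input's distinct elements
    assert set(out) == set(xs)

# "The machine" blitzes through thousands of random cases,
# including corner cases we might not think to write by hand.
random.seed(0)
for _ in range(10_000):
    n = random.randrange(0, 20)
    xs = [random.randrange(-5, 5) for _ in range(n)]  # small range forces duplicates
    check_invariants(xs, dedupe_sorted(xs))
```

The point is that you never enumerate expected outputs by hand; you only state what must always hold, and the generator does the enumeration.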
if you're in a dynamic language, unit tests are effectively a type system, and that does add some value wrt regressions, admittedly. so most of these assertions assume you're operating in a system that already provides basic type annotations and checking, in which case the value of unit tests is lower
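A tiny hypothetical illustration of that point, in untyped Python (the function and values are made up for the example):

```python
def total_cents(prices):
    # Intended contract: prices is a list of ints (cents); returns an int.
    return sum(prices)

# In a dynamic language, this unit test is largely doing a type system's job:
# it pins down that the function consumes ints and produces an int.
assert total_cents([199, 250]) == 449

# With annotations plus a checker (mypy here, or TypeScript on the frontend),
# a call like total_cents(["199", "250"]) is rejected before any test runs,
# so much of the regression-catching value of the assertion above evaporates.
```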
I see, so you aren't against the idea of testing per se; it's more that you feel there are better ways to do it, e.g. using something like QuickCheck to raise the abstraction level and be more explicit about the invariants you are testing for. Your Haskell background is definitely helping out with this one, I suspect 🙂
yes, and those are mostly only useful for functions which are algorithmically complex
most functions just transform a type to a type as part of a layering thing and so aren't usually that interesting to unit test
in terms of there being a finite number of human hours on your team and there are so many other quality-pursuing efforts you could be doing 🙂
Oh yes, very much agree. The tack I have taken to testing is "do enough to give you confidence".
I do feel that the nature of Convex's effect-free queries and mutations helps minimize many of the common footguns, thus reducing the need for as many tests, much like strong typing does for dynamic languages.