No, I've had great experiences with assertions in code. People have paid my salary because assertions were invalid and caused more problems than they solved. :D
> Failing an assert in production of course sucks and is costly. But what is more costly is letting the bug slip through and cause hard to diagnose bugs, program incorrectness and even (in some cases) silent address space corruption that will then manifest itself in all kinds of weird issues later on during the program run.
The direct counterpoint to this is that:
Any assertion that validates a runtime invariant can (and IMO should) be converted into a test which covers that same invariant, with coverage information proved by tooling.
This is possible unless the underlying design of the system under test is such that it prevents adequate testing, or your approach to testing is lacking. If you have those problems then asserts are a band-aid on broken practices. Moving quality checks to the left (design / compile time, not runtime) is a generally beneficial practice.
Put another way, I've seen many bugs which should have been caught cheaply and early with adequate testing practice, rather than at runtime where they caused system failures. It's a rare bug I see where that isn't the case.
Perhaps there are points where this broad recommendation doesn't apply. Safety engineering might be one of those, but the problem space of selling someone a widget over the internet rarely has that same level of need for runtime invariant testing that sending a rocket to space might.
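As a sketch of the conversion I'm describing (names here are hypothetical, not from the article): instead of asserting an invariant inside the function at runtime, the invariant becomes a unit test that coverage tooling can see.

```rust
// Hypothetical example: a discount must never exceed the price.
// Instead of `assert!(discount <= price)` firing in production,
// the invariant holds by construction and is checked by a test.
fn apply_discount(price: u32, discount: u32) -> u32 {
    price.saturating_sub(discount) // can never underflow below 0
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn discount_never_exceeds_price() {
        // The invariant `result <= price` lives here, not at runtime.
        for (price, discount) in [(100, 30), (100, 100), (100, 150)] {
            assert!(apply_discount(price, discount) <= price);
        }
    }
}
```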
---
On a different side of this, I do think that system-level assertions (i.e. real code paths that result in actions, not `debug_assert!` calls which result in crashing) can belong in systems to check that some process has reached a specific state. I prefer systems designed so that they (provably) never crash.
---
A third side to this is that assertions are code too. They are a place that is rarely, if ever, tested (and is generally impossible to test because they cover invariants). This means they're a risk to your system that you cannot mitigate.
A thought experiment for you: what if LeftPad[1] (instead of being deleted) had added an assertion that the total number of characters was < 10? Its removal caused a bunch of pain for devs, but assuming the assertion change rolled out through dependency chains as normal, it would have broken many runtime systems and been much more costly.
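To make the thought experiment concrete (the real left-pad was JavaScript; this is a hypothetical Rust rendering, and the < 10 invariant is the imagined addition):

```rust
// Hypothetical rendering of the thought experiment: a left-pad helper
// where a new version suddenly asserts the padded width is < 10.
fn left_pad(s: &str, len: usize, ch: char) -> String {
    // The imagined new invariant: every caller padding to a width of
    // 10 or more now crashes at runtime instead of failing at install.
    assert!(len < 10, "total characters must be < 10");
    let mut out = String::new();
    while out.len() + s.len() < len {
        out.push(ch);
    }
    out.push_str(s);
    out
}
// left_pad("7", 3, '0') pads as before, but left_pad("x", 12, ' ')
// would abort any production system that depended on wider padding.
```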
"If you never have an accident you don't need seat belts in the car, and since we test drove the vehicle in the factory parking lot and didn't have an accident we decided not to have the seat belts".
Point being, asserts are the final backstop. Your unit tests don't help you validate any real execution instance or function call that happens right now in production.
You're right in the sense, though, that if you have some functionality that is only ever called in ways that are all known ahead of time, and you can test all the possible code paths and inputs, then you can get away without asserts.
But I find that these scenarios don't manifest that often. Most code executions are impossible to know 100% ahead of time, and your unit tests are only ever testing a subset of all possible inputs and execution flows. Even if you have 100% bulletproof coverage right now, in the future you probably won't, and then you're just one innocent change away from letting bugs slip through in your production runs.
> "If you never have an accident you don't need seat belts in the car, and since we test drove the vehicle in the factory parking lot and didn't have an accident we decided not to have the seat belts".
Would you trust a manufacturer that added seat belts but never tested they worked? That's what a runtime assertion is. If it can never fail unless there's a bug, then it can never be tested...
Assertion failure modes are also problematic. Their entire mechanism is to blow up and stop running the program. Would you trust a car which crashed if a seatbelt was unplugged?
The test of my assertion is: show me some real-ish code where you think runtime assertions are useful (preferably in backend / web code not a kernel or such).
Who says they can't be tested? Of course they can be tested; that's just a question of the testing tools being able to run the process, trigger the assert, and then recognize that the process has in fact asserted and exited. If you think this can't be done then you really need to look for better tools. As an example, if you program in C++ with boost.test you can even test that your code doesn't compile (can be useful for templates occasionally).
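In the article's language (Rust) this capability is built into the standard test harness via `#[should_panic]`; a minimal sketch with hypothetical names:

```rust
fn checked_ratio(num: f64, den: f64) -> f64 {
    // The invariant under test: a zero denominator is a caller bug.
    assert!(den != 0.0, "denominator must be non-zero");
    num / den
}

#[cfg(test)]
mod tests {
    use super::*;

    // The harness runs the code, expects the assert to fire, and the
    // test passes only if the code actually panicked with this message.
    #[test]
    #[should_panic(expected = "denominator must be non-zero")]
    fn asserts_on_zero_denominator() {
        checked_ratio(1.0, 0.0);
    }
}
```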
That being said, I find that at least my asserts are most of the time rather self-explanatory, such as checking against an array/vector size, and they don't really require specific 'testing'.
"The test of my assertion is: show me some real-ish code where you think runtime assertions are useful (preferably in backend / web code not a kernel or such)."
That's simple: in C++ (which I mostly program in), any time letting the code execute would lead to undefined behavior, 10 times out of 10 I prefer a controlled abort (an assert) with a core dump. For example, going out of bounds on an array: what are your options? Pretend nothing is wrong, return a default value, throw an exception? All you can do with any of these options is mask the actual BUG and cause Nth-degree bugs down the road, where the caller does something wrong since the program already went completely off track. When 1+1=2 no longer holds, it's best to stop.
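For comparison, in the article's language (Rust) this controlled abort is essentially what the built-in bounds check does; a sketch of the same idea made explicit (function name is hypothetical):

```rust
fn third_item(v: &[i32]) -> i32 {
    // Rust's built-in bounds check behaves like this assert: an
    // out-of-bounds access aborts with a panic (and a backtrace)
    // rather than reading arbitrary memory as C++ would.
    assert!(v.len() > 2, "caller bug: slice has fewer than 3 items");
    v[2]
}
```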
"Would you trust a car which crashed if a seatbelt was unplugged?"
Assuming that by "crash" you really mean "controlled abort" (which is what an assert is, a controlled abort): yes, I would prefer that my car tell me in some controlled way when the seat belts no longer work rather than silently let me continue.
But that's not the same thing: a seat belt being unplugged isn't a BUG, it's a condition that the car software needs to be able to handle. You might be confused here because many people mix up logical error handling with BUGS.
"Would you trust a manufacturer that added seat belts, but never tested they worked? That's what a runtime assertion is. If it can never fail unless there's a bug, then they can never be tested..."
Yes, who knows; any individual seat belt may malfunction, but the concept is still much better than not having any. We don't say "oh, because any single seat belt might be broken it's pointless to have them at all". The same way, we don't say "oh, because some assert can be wrong (check the wrong thing or the wrong condition, etc.) it's pointless to have/use them at all". That would be just absurd.
I think we're really talking past each other at this point, so I'm probably not going to respond more on this. Maybe in C++, where you don't have better techniques available, assertions *are* the best tool you can reach for for this sort of thing. In many other languages, however, we do have better options. These should be chosen over assertions when possible, as the outcome is significantly better.
> For example going out of bounds on an array, what are your options? Pretend nothing is wrong, return a default value, throw an exception?
The article is talking about assertions in rust. The answer to that question in rust is to use `.get()` which returns `Option<T>`. This moves the condition where the array index is outside the bounds into a structured result rather than causing an application crash. An assertion that crashes the program would be useless there, as the language makes the type of error one that is idiomatically avoided. This (in addition to testing) is part of my point. Dig deep into the implementation of this in the std lib and there's no assertion, just a bounds check which either returns `Some(value)` or `None`.
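A minimal sketch of that idiom (the surrounding function is hypothetical):

```rust
fn describe(items: &[&str], idx: usize) -> String {
    // `.get()` turns the out-of-bounds case into data instead of a
    // crash: `Some(&value)` when idx is in bounds, `None` when not.
    match items.get(idx) {
        Some(item) => format!("found: {}", item),
        None => format!("no item at index {}", idx),
    }
}
```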
The part I'm saying is problematic is not the check part of the assertion, it's the crashing part. Write software that avoids needing to crash by proving that the scenarios where invariants are violated don't exist. When you do that, any assertions you include are code paths that are impossible to ever hit, by definition.
Expanding on the article example: it requires that the `youngest` variable is always >= 0. Just define it as a `u8` and let the compiler be your check. You never need an assertion to test a tautology.
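Sketching that point (the struct around `youngest` is hypothetical, not from the article):

```rust
// With `youngest: u8` no negative value is even representable, so
// `assert!(youngest >= 0)` would be a tautology the compiler warns
// about. The type itself is the check.
struct Family {
    youngest: u8, // age can never go negative by construction
}

fn birthday(f: &mut Family) {
    // saturating_add keeps the value within u8 bounds as well
    f.youngest = f.youngest.saturating_add(1);
}
```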
> Any assertion that validates a runtime invariant can (and IMO should) be converted into a test which covers that same invariant, with coverage information proved by tooling.
The better question is why should you need to (test the invariant)?
If your tests cover all branches of a loop, then it doesn't matter if a loop invariant is violated within the loop. Write tests that exercise the {0, 1, Some, Bounds} branches and verify that the result is correct. Then go on with your life knowing that there's no possible way the code breaks without either failing the tests or changing the branch coverage.
Put another way (taking the max() implementation from the Wikipedia article): if I implement the max function so that in the first loop iteration I add 1 and in the last I subtract 1, then the loop invariant doesn't hold but the result is correct.* The assertion approach fails, but the test approach succeeds because I have tests for {0, 1, Some} iterations.
*: It would be silly to do this, but I'm trying to present a minimal counter-point here. You could extrapolate this type of change to one made for performance purposes, which is less obviously correct and which may cause this type of internal invariant to be false. The argument is the same.
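A sketch of that test approach against a plain max-of-slice (this is my own minimal version, not the Wikipedia article's code):

```rust
// The internal loop invariant ("m is the max of the prefix seen so
// far") is never asserted. The tests below instead exercise the
// {0, 1, Some} iteration branches and check only the final result.
fn max_of(xs: &[i32]) -> Option<i32> {
    let mut it = xs.iter();
    let mut m = *it.next()?; // 0-iteration branch: empty slice -> None
    for &x in it {
        if x > m {
            m = x;
        }
    }
    Some(m)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn covers_zero_one_and_many_iterations() {
        assert_eq!(max_of(&[]), None);           // 0 elements
        assert_eq!(max_of(&[7]), Some(7));       // 1 element
        assert_eq!(max_of(&[3, 9, 2]), Some(9)); // many, max not last
    }
}
```

Any internal rewrite of the loop that keeps these branches passing still produces a correct max, regardless of whether the original invariant survived.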
[1]: https://en.wikipedia.org/wiki/Npm_left-pad_incident