Good post; for what it's worth, Java is slowly and painfully correcting course with features like records and Project Valhalla. As with the other language improvements, though, we will have to live with the tech debt for decades to come…
While C# certainly has tech debt too, the specific feature the author mentions (value types) has been there since version 1. C# pioneered async/await in version 5, and has a plethora of functional features like generics, type inference and pattern matching now. Java is lagging behind like a wounded cow...
Plain copper is already too expensive for power lines, and this alloy is more expensive than copper alone. Transmission and distribution lines are typically made with aluminum conductors layered over a steel core for mechanical strength:
Aluminum is a worse conductor than copper on a volumetric basis but a better conductor on a mass basis, which is important for overhead lines supporting their own weight against gravity. It also costs significantly less than copper.
My point was that maybe the improved mechanical performance would help with their own weight. I concede that a more expensive, heavier overhead line that could hang for longer distances does not sound good unless costs are dominated by tower construction.
That's why TypeScript/Python optional typing hits the best balance for me. Coding in a duck-typed language is generally fine when your test suite is as fast and frequent as a type checker. That also explains why TDD is more popular in, say, Ruby or Python vs. Java.
Speaking of Java, the problem with types is when you try to reify every single problem you encounter in your codebase.
By the way, Python has had structural types since 3.8, and I hope they get more popular in Python code:
https://docs.python.org/3/library/typing.html#typing.Protoco...
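For anyone who hasn't used them: a Protocol is structural, so a class satisfies it just by having the right methods, no inheritance needed (a minimal sketch; `Closeable` and `TempFile` are made-up names for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class TempFile:
    # note: TempFile never mentions Closeable; the match is purely structural
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

def shutdown(resource: Closeable) -> None:
    resource.close()

f = TempFile()
shutdown(f)                      # type-checks because TempFile has close()
print(isinstance(f, Closeable))  # True, thanks to @runtime_checkable
```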
> That also explains why TDD is more popular in say Ruby or Python vs. Java.
I'd say that TDD being more popular in untyped languages speaks against TDD, as it hints that maybe some of its benefits are covered already by a type system.
You did clarify later a bit, but this cannot stand unchallenged. TDD and tests solve different problems than types do, and so are valuable in their own right. Tests assert that no matter what you change, this one fact remains true. Types assert that you are using the right things in your code.
I don't think a lack of types is what's at fault for untyped languages liking TDD (though I miss types a lot). I think it's that there is no way to find out if a function exists until runtime (most of these languages allow self-modifying code of some form, so static analysis can't verify it without solving the halting problem). Though once you know a function exists, the next step of verifying that the right function (or overload, in some languages) exists does need types.
The biggest proponents of TDD I've seen are only capable of writing code that one cannot trust in the absence of tests. Writing tests is good, but striving for 100% coverage contorts code in ways that are detrimental to robustness and correctness.
Types are just auto-verified logic. TDD just tests logic which cannot be expressed in the given type system. In Lean 4 one can type a lot (dependent types to test integration shapes, and proofs are property tests).
It's worth noting that type checking can also verify things that cannot reasonably be verified by tests. Things like exhaustiveness checking ("you handled every possible value of this enum") or, even simpler, "you didn't attempt to access a property that does not exist on this object."
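For instance, here's roughly what exhaustiveness checking looks like in typed Python (a sketch; `assert_never` is `typing.assert_never` on 3.11+, inlined here for older versions):

```python
from enum import Enum
from typing import NoReturn

def assert_never(value: NoReturn) -> NoReturn:
    # available as typing.assert_never on Python 3.11+
    raise AssertionError(f"unhandled value: {value!r}")

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

def temperature(c: Color) -> str:
    if c is Color.RED:
        return "warm"
    if c is Color.GREEN or c is Color.BLUE:
        return "cool"
    # a type checker (mypy/pyright) proves this line is unreachable;
    # add Color.YELLOW without handling it above and the checker
    # reports an error here before any test runs
    assert_never(c)

print(temperature(Color.RED))  # warm
```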
TDD also asserts that if you make a change you don't break anything. Most programs are too complex to keep all behavior in your head so sometimes what looks like an obvious change breaks something you forgot about. Types won't tell you because you adjusted the types, but the test will tell you. (if you have the right tests of functionality - a very hard problem outside the scope of this discussion)
The person you're replying to mentioned Lean4. In such a language, types can definitely assert that a change didn't break anything, in the sense that you can write down the property you want as a type, and if there is an implementation (a proof), your code satisfies that property.
Now, proofs can be often devilishly hard to write whereas tests are easy (because they're just examples), so in practice, types probably won't supplant tests even in dependently typed languages.
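As a tiny illustration (a Lean 4 sketch, not from any codebase discussed here): the property itself is the type, and once the proof checks, it holds for every input, not just the examples a test would pick. `List.reverse_reverse` is a core lemma, so `simp` should close this:

```lean
-- "reversing a list twice returns the original list", stated as a type;
-- the proof covers all lists, unlike an example-based test
theorem reverse_twice (xs : List Nat) : xs.reverse.reverse = xs := by
  simp
```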
Proofs are impossible in many cases because nobody really fully understands the requirements, they are sort of working them out as they go. (and in any case they will change over time). That might be what you meant by devilishly hard to write.
Tests let you instead say "with this setup here is what happens", and then ensure that whatever else you change you don't break that one thing.
To my knowledge nobody has scaled proofs to very large problems. I still think proofs should have a place in better code, but I can't figure out how to prove anything in my real-world code base. (I could probably prove my language's basic containers - something that itself would be valuable!)
> That might be what you meant by devilishly hard to write.
No, that's a separate problem. I agree that we don't always know what we need our code to do very precisely - although I think we could still increase the number of known invariants/properties in many situations - but even when you do, a proof is often hard to write.
Proofs also typically have the problem that they're not refactoring-proof: if you change the implementation of function f (but it still does the same thing), tests won't have to change, but a proof would have to be rewritten (the type would stay the same though).
I am coming from Rust.
I write a lot of narrowing wrappers / typestates / proptests / const and runtime asserts and expects as stand-ins for possible proofs.
I am aiming to do these first.
For big things wiring many parts together, I will use normal tests (given that Lean 4 allows reflecting on the IO graph, I guess I can have some fun here too).
Those are not proofs. If you have a formal proof tool (I'm not aware of one for Rust, but there is a lot of progress in this area), you can feed those in and sometimes the tools can prove things. Though beware, there are limitations to this approach - sometimes they would have to solve the halting problem to say code is correct (though if they say code is wrong, they are right); other times the problem is solvable only on a computer that doesn't exist yet.
Types and const are a form of formal proof - if you don't cast those away (I'm not sure what Rust allows, particularly in unsafe). However, there is a lot more potential in formal proofs. Rust's borrow checker is a formal proof, and there are safe things you might want to do that the borrow checker doesn't allow, because if they were allowed Rust could no longer prove your code memory safe (a trade-off that is probably worth it in most cases).
> (I'm not sure what rust allows, particularly in unsafe).
You can't "cast const away" in Rust, even in unsafe. That is, you can do it in unsafe, but it is always undefined behavior to do so. (I am speaking about &T and &mut T here; the raw pointers *const T and *mut T exist and you're allowed to cast between those, as they have no real aliasing requirements and are really just a lint.)
It's blatantly obvious that some of the benefits of extensive testing are covered by a type system. Even by a mostly useless one like Java's.
If you look at any well tested program in a dynamic language, almost all the tests check the same properties that a type system would also check by default. If you remove those, usually only a few remain that test non-trivial properties.
EDIT: And I just love that in the time I took to write this, somebody wrote a comment about how it isn't so. No, it is still blatantly obvious.
I'd say that if you think tests and types are doing the same thing in the same way, you are badly abusing at least one of them.
One attacks the problem of bugs from the bottom up and the other from the top down. They both have diminishing returns on investment the closer they get to overlapping on covering the same types of bug.
The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.
> The Haskell bros who think tests don't do anything useful because "a good type system covers all bugs" themselves haven't really delivered anything useful.
I'm a Haskell bro and I love testing. You misunderstand me, though. All I say is that maybe _some_ of those tests deliver value by just making sure that code even runs, which is otherwise covered by types.
When I do TDD (virtually every time I write a line of code) each test scenario isn't just a way to verify that the code is working, it's also a specification - often for a previously unconsidered edge case.
Throwing away the test means throwing away that user story and the value that comes with it.
I believe you (other than tests being specifications, they are examples at best). But that doesn't change the fact that TDD looks more adopted in untyped languages, and that deserves an explanation.
Mine is that a lot of potential errors (typos, type mismatches) don't need to be exercised by running code in typed language.
I suppose you could do something like, enumerate every possible combination of inputs and check that some property holds for all of them. Or, maybe you could instead randomly select a number of combinations of inputs and check that a property holds for each of those random combinations, but that wouldn't be guaranteed to find the inputs for which the specification isn't satisfied.
I guess maybe the test could pass mock values to the function under test, such that the function is effectively evaluated symbolically (where any branching that depends on the inputs to the function would have the mocked object specify what the result of the conditional should be, with different tests for different cases)?
Or.. Can you explain how you write tests such that they truly function as specifications?
Good question - and there's been lots of work on this area. See for example property testing and fuzz testing, which can do something similar to what your second paragraph suggests.
You should be able to find a property testing library in your favourite language such as Hypothesis (python), Quickcheck (Haskell), Fastcheck (JS/typescript), etc.
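To make the second paragraph above concrete, here's a minimal hand-rolled sketch of a property test in Python (real libraries like Hypothesis generate smarter inputs and shrink failing cases; `slug` is a made-up function just for illustration):

```python
import random

def slug(s: str) -> str:
    # hypothetical function under test: lowercase, spaces become dashes
    return s.lower().replace(" ", "-")

random.seed(0)  # deterministic for the demo
alphabet = "abc XYZ"
for _ in range(1000):
    s = "".join(random.choice(alphabet) for _ in range(random.randint(0, 20)))
    out = slug(s)
    # properties that must hold for *any* input, not just chosen examples
    assert " " not in out          # no spaces survive
    assert out == out.lower()      # output is lowercase
    assert slug(out) == out        # idempotence
print("1000 random cases passed")
```

Unlike example-based tests, a failure here points at a randomly found counterexample to the stated property rather than a hand-picked case.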
A “specification” which is based on just “the desired behavior in a few specific examples” seems to allow undesired behavior without it being clear whether said behavior is allowed by the spec.
That to me doesn’t seem like what a “specification” is.
Now, maybe having good examples is more important for business use cases than having a specification.
That’s exactly what those tests are for. When you no longer have to worry if you invoked .foo() or .fooTypo(), you eliminated one class of bug. Namely trying to run things that do not exist.
Maybe you meant to invoke .bar(), but at least we know thanks to type checks that the target exists.
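A toy Python example of that class of bug (`Payment`/`submit` are made-up names; a checker like mypy or pyright rejects the typo statically, while the untyped path only fails when the line actually executes):

```python
class Payment:
    def submit(self) -> str:
        return "submitted"

p = Payment()
print(p.submit())  # the method exists: fine at check time and at runtime

# p.sumbit() would be rejected by a type checker before the code ever runs;
# without type checking it only fails when reached:
try:
    p.sumbit()  # type: ignore[attr-defined]
except AttributeError as e:
    print("caught only at runtime:", e)
```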
That seems super dumb indeed. I really wish JSON had an actual decimal type, we already have valid "e" notation (like 10e2), why not have a "d" notation, like 5d3 (meaning 5.3) or 5.3d.
It would be if it did any arithmetic. But maybe it works without any computation. Then the fp arithmetic is not a problem. Or could you provide an example?
Neither 0.1 nor 0.01 is representable exactly in binary (much like 1/3 is not representable exactly in decimal). This means that your cents, if any, are never precise. Not a big deal for SWIFT transfers, where you can safely round to cents. But various commissions, taxes, and fees are often very small fractions, and rounding errors become noticeable with large quantities of small amounts added / subtracted. They may be too small to matter financially, but they are bothersome because your sums do not match exactly where they should, and this affects trust in the system.
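The classic demonstration, in Python (the same IEEE-754 doubles that JSON numbers usually decode to):

```python
from decimal import Decimal

# binary floats round in base 2, so these base-10 fractions are inexact
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# decimal arithmetic rounds in base 10, so cent math stays exact
print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True
```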
IDK about banks; in one billing system where I was involved we used decimals (and clear rules of assignment of the rounding errors), in another, all amounts were in picodollars.
Yeah, they are not, but still, AFAIK when I have 0.1 or 0.01 in some variable and pass it around the system, it doesn't suddenly change to 0.10000000000012 (or whatever). The problem arises when you do arithmetic with it, like 0.1 + 0.2 !== 0.3, which you do not do in this library; you just pass the already-calculated value of how much to send where, and you are done.
Partially I'm just playing devil's advocate here. I know that you have to use decimals for working with prices etc. (maybe they did not use them because it would be a pretty fat dependency), but still, IMHO it's not needed here.
Should be fine up to about 9 quadrillion[1]. Though with arithmetic it can get messy. The example doesn't show it, but I hope dollar amounts are in cents and don't use decimals.
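The ~9 quadrillion figure comes from the 53-bit significand of IEEE-754 doubles (what JS Numbers and Python floats are); a quick sketch:

```python
# integers are exact in a double only up to 2**53
MAX_SAFE = 2**53
print(MAX_SAFE)                                 # 9007199254740992
print(float(MAX_SAFE) == float(MAX_SAFE + 1))   # True: the +1 rounds away

# keeping amounts as integer cents sidesteps fractional rounding entirely
price_cents = 232  # $2.32
```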
Maybe when sending money to Zimbabwe it could be an issue.
Thanks for bringing this up chuzz - we're actually only using base (minor-unit) amounts for the money representation here: no $2.3232321, only (amount: 232).
See the https://dinerojs.com/ package for how money is handled (tl;dr: it's a combination of currency and amount, which helps us get to the root #)
That does not sound reasonable. Why would the shim ever need to be signed with the same key? Why can't a system supporting App Archive make an exception for the shim? If it can't for hardware reasons, why can't they ask the key owner to sign the shim app?
That would degrade security but is not as bad as requiring to give away the private key.
If they implemented this properly there wouldn't be any need for a shim, so talk about ignoring signing requirements is meaningless. Adding a single flag in the installed-application table to not erase the application settings + icon cache would have been enough.
Using a shim application is a giant hack that allows implementing this feature by updating only the Google Play Store, without modifying the base OS and how it handles the application installation and management process. This is only relevant because most Android phone manufacturers are bad at providing Android updates. Some Google engineer probably felt very clever that he came up with a trick which in theory allows adding this feature to older phones that probably won't receive any Android system updates.
Google could have also allowed the application developer to sign the shim, that way getting the benefits of the archiving feature on older phones without forcing the app developer to give up the key.