I mean, yeah, if I'm using a library, then as a user of that library I'd like to be able to handle the error myself. Having the library decide to panic, for example, is the opposite of that.
Not GP, but bump allocation (OCaml's GC allocates by bumping a pointer into the young heap) mitigates this somewhat: list nodes tend to be allocated near each other. It's worse than the guaranteed contiguous access pattern of a vector, but it's not completely scattered either.
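A minimal sketch of the idea in C (illustrative only, not OCaml's actual runtime): allocation is just a pointer bump, so consecutively allocated list nodes end up adjacent in memory.

```c
/* Toy bump allocator: each allocation advances a pointer into a
 * fixed arena (the stand-in "young heap"). No bounds checks; this
 * is a sketch, not a real allocator. */
#include <stddef.h>
#include <stdio.h>

static _Alignas(8) char arena[1 << 20];
static size_t top = 0;

static void *bump_alloc(size_t n) {
    void *p = &arena[top];
    top += (n + 7) & ~(size_t)7;    /* round up to keep 8-byte alignment */
    return p;
}

typedef struct node { int head; struct node *tail; } node;

int main(void) {
    node *list = NULL;
    for (int i = 0; i < 4; i++) {   /* cons four cells in a row */
        node *n = bump_alloc(sizeof *n);
        n->head = i;
        n->tail = list;
        list = n;
    }
    /* Adjacent addresses, 16 bytes apart on a typical 64-bit target. */
    for (node *n = list; n; n = n->tail)
        printf("%d at %p\n", n->head, (void *)n);
    return 0;
}
```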
> Software engineering is a collaborative process, not an adversarial one.
The collaborative process itself is adversarial. Capitulating to others when their contributions go against one's goal compromises that goal. Sometimes compromising to achieve a lesser goal is better than failing to achieve the full goal. But when the stakes are high (and the stakes for Linux are enormous), compromised goals are less appropriate. Linux and Linus are in a position not to have to compromise the goal.
If you've worked in SWE for long enough, you'll run into this kind of socially maladjusted petty tyrant many times in your career. It's fascinating to see so many of these tyrants exposing themselves in this thread, though.
Alternative answer: both versions will be picked up.
It's not always the correct solution, but sometimes it is. If I have a dependency that uses libUtil 2.0 and another that uses libUtil 3.0 but neither exposes types from libUtil externally, or I don't use functions that expose libUtil types, I shouldn't have to care about the conflict.
This points to a software best practice: "Don't leak types from your dependencies." If your package depends on A, never emit one of A's structs.
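A concrete sketch of that practice in C, using POSIX regex as the stand-in dependency: the public header exposes only an opaque type, so regex_t never appears in the API and the dependency can be swapped without touching callers.

```c
/* mylib.h -- the public header: no regex.h types leak out. */
typedef struct my_matcher my_matcher;   /* opaque to callers */
my_matcher *my_matcher_new(const char *pattern);
int         my_matcher_match(const my_matcher *m, const char *s);
void        my_matcher_free(my_matcher *m);

/* mylib.c -- POSIX regex is an implementation detail we could swap. */
#include <regex.h>
#include <stdlib.h>

struct my_matcher { regex_t re; };      /* dependency type stays private */

my_matcher *my_matcher_new(const char *pattern) {
    my_matcher *m = malloc(sizeof *m);
    if (m && regcomp(&m->re, pattern, REG_EXTENDED) != 0) {
        free(m);
        return NULL;
    }
    return m;
}

int my_matcher_match(const my_matcher *m, const char *s) {
    return regexec(&m->re, s, 0, NULL, 0) == 0;
}

void my_matcher_free(my_matcher *m) {
    if (m) { regfree(&m->re); free(m); }
}
```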
Good luck finding a project of any complexity that manages to adhere to that kind of design sensibility religiously.
(I think the only language I've ever used that provided top-level support for recognizing that complexity was SML/NJ, and it's been so long that I don't remember exactly how it was done... Modules could take parameters, so at the top level you could pass each module the submodule it would be using, and only then could the module emit types originating from that submodule, because the passing-in "app code" had visibility into the submodule to comprehend those types. It was... exactly as un-ergonomic as you think. A real nightmare. "Turn your brain around backwards" kind of software architecting.)
I can think of plenty of situations where you really want to use the dependency's types, though. For instance, the dependency provides some sort of data structure, and you have one library that produces said data structure and a separate library that consumes it.
What you're describing with SML functors is essentially dependency injection, I think; it's a good thing to have in the toolbox, but not a universal solution either. (I do like functors for dependency injection, much more than the inscrutable goo it tends to be in OOP languages anyway.)
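For illustration, the nearest C analogue of functor-style injection is passing the "module parameter" in explicitly as a struct of function pointers (all names here are made up):

```c
#include <stdio.h>

/* The "module signature": what the core code needs from its dependency. */
typedef struct {
    void (*log)(const char *msg);
} logger;

/* One possible "module argument". */
static void stderr_log(const char *msg) { fprintf(stderr, "%s\n", msg); }

/* The "functor body": written against the interface, not an implementation. */
static void do_work(const logger *log) {
    log->log("starting work");
    /* ... the actual work ... */
    log->log("done");
}

int main(void) {
    logger l = { stderr_log };  /* "applying the functor" */
    do_work(&l);
    return 0;
}
```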
I can think of those situations too, and in practice this is done all the time (by everyone I know, including me).
In theory... none of us should be doing it. Emitting raw underlying structures from a dependency, coupled with ranged versioning, means part of your API is under-specified: "This function returns a value, the type of which is whatever this third party that we don't directly communicate with says it is." That's hard to code against in the general case (but it works out often enough in the specific case that I think it's safe to do 95-ish percent of the time).
It works just fine in C land because modifying a struct in any way is an ABI-breaking change, so in practice any exported struct type has to be treated as frozen (except across major version upgrades, where compat is explicitly not a goal).
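A hypothetical illustration of why an exported struct is automatically frozen: the caller's compiled code bakes the layout in, so adding a field silently under-sizes every existing caller.

```c
#include <stdio.h>

/* v1.0 public header, as shipped to callers. */
typedef struct { int x, y; } point_v1;      /* 8 bytes */

/* v1.1 quietly adds a field. */
typedef struct { int x, y, z; } point_v2;   /* 12 bytes */

int main(void) {
    /* A caller compiled against v1.0 allocates sizeof(point_v1) bytes
     * everywhere: malloc calls, arrays, stack locals, embedded struct
     * members. A v1.1 library writing ->z would then scribble past the
     * end of every one of those. Hence: exported struct == frozen. */
    printf("v1.0: %zu bytes, v1.1: %zu bytes\n",
           sizeof(point_v1), sizeof(point_v2));
    return 0;
}
```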
Alternatively, it's a pointer to an opaque data structure. But then that fact (that it's a pointer) is frozen.
Either way, you can rely on dependencies not pulling the rug out from under you.
I like this answer. "It works just fine in C land because this is a completely impossible story in C land."
(I remember, ages ago, trying to wrap my head around the Component Object Model. It took me a while to grasp it in the abstract because, I finally realized, it was trying to solve a problem I'd never needed to solve before: ABI compatibility across closed-source binaries built by different compilers and toolchains.)
It's not "all the transitive dependencies". It's only the transitive dependencies you need to explicitly specify a version for because the one that was specified by your direct dependency is not appropriate for X reason.
What if libinsecure 0.2.1 is the version that introduces the vulnerability? Do you still want your application to pick up the update?
I think the better model is that your package manager lets you do exactly what you want -- override libuseful's dependency on libinsecure when building your app.
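Most mainstream package managers do have an escape hatch for this. npm, for instance, accepts an overrides field in package.json (shown here with the hypothetical package names from above):

```json
{
  "overrides": {
    "libinsecure": "0.2.2"
  }
}
```

Yarn (resolutions) and Cargo ([patch]) have similar mechanisms.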
Of course there's no 0-risk version of any of this. But in my experience, bugs tend to get introduced with features, then slowly ironed out over patches and minor versions.
I want no security bugs, but as a heuristic, I'd strongly prefer the latest patch version of all libraries, even without perfect guarantees. Code rots, and most versioning schemes are designed with that in mind.
Except the only reason code "rots" is that the environment keeps changing as people chase the latest shiny thing. Moreover, it rots _faster_ once the assumption that everyone will constantly update gets established, since that assumption can be used to justify shipping non-working garbage on the theory that "we'll fix it in an update".
This may sound judgy, but at heart it's intended to be descriptive: there are two roughly stable states, and both have their problems.
> an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more distinct types of evidence (or factors) to an authentication mechanism.
and concludes with (emphasis mine):
> For the average user, the smartphone has become a single point of failure, where the theft of one device and one piece of knowledge (the passcode) can lead to total financial compromise.
Furthermore, these days I enter the passcode on my phone very rarely (Android requires it after restarting the device or after some amount of time) - normally I use biometric authentication.
The linked WSJ article is a bit hyperbolic, typical journalistic overreach, in calling it an Apple "security vulnerability", which is bullshit IMO. If you watch the interview with the guy in jail, the main way he got people's passcodes was by asking for them: he would tell people he had drugs to sell and wanted to give them his info, so he would get their phone and ask for the code to unlock it.
At least the WSJ report is honest when it says "The biggest loophole: You".
Also, in-person theft is something our civilisation understands and has adapted to, and it does not scale. So it's never going to be a problem the way, say, password re-use is, or many of the other maladies that come from using "passwords" for online security.
Compromising the smartphone can let you get the password too, though, making it effectively one factor. It would be closer to true 2FA if you entered the password on one device and used another (a YubiKey, or a physical TOTP token) as the second factor.
The issue I'm having with this sort of "something you own and something you know/are" two-factor authentication is that it has some potential to cause violence - both can be beaten out of you:
https://www.citizen.co.za/network-news/lnn/article/banking-a...
This is true with 1FA too. 2FA is more effective at stopping the case where you're hacked and you don't even know it because your password was in a leak.
A TAN generator or security key stored in a drawer at home. At least it reduces the opportunities for theft since people don't carry these devices with them all the time as opposed to their phones. Opportunity makes the thief.
Yeah, I often think the issue with cash and crypto is that they can be easily forced away from an individual by any sufficiently armed and unscrupulous party. Money in a financial institution tends to have an upper limit on what could be forced away in a single act, or at least in a single transaction cycle.
Staying anonymous. For every single multimillionaire or billionaire out there flaunting their wealth, there is another who's equally secretive about it. There are many folks with tens of billions in assets who don't make their wealth part of their brand.
Like that guy in Texas whose estate paid billions in tax when he passed away.
If people need "`expect` scripting and a few open source packages [to] automate it to be 1 factor", it is effectively 2 factor for 99.9% of the population.
Also, if someone uses a password manager to store both the password and the OTP credential, that is still an improvement to security: intercepting (e.g. shoulder surfing) or guessing the password is no longer enough; an attacker needs to get into the password manager's vault.
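For concreteness, a minimal sketch of what that vault entry actually is: an RFC 6238 TOTP seed, from which codes are derived (SHA-1, 30-second step, 6 digits, via OpenSSL's one-shot HMAC, which is deprecated in OpenSSL 3 but still available). Anyone holding the seed can mint codes, which is exactly why a stolen vault collapses both factors while a shoulder-surfed password alone is now useless.

```c
/* Minimal RFC 6238 TOTP sketch. Compile with -lcrypto. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

static uint32_t totp(const unsigned char *seed, size_t seed_len, time_t now) {
    uint64_t counter = (uint64_t)now / 30;   /* 30-second time step */
    unsigned char msg[8], digest[20];
    unsigned int digest_len = sizeof digest;
    for (int i = 7; i >= 0; i--) {           /* counter as big-endian bytes */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }
    HMAC(EVP_sha1(), seed, (int)seed_len, msg, sizeof msg, digest, &digest_len);
    int off = digest[19] & 0x0f;             /* RFC 4226 dynamic truncation */
    uint32_t bin = ((uint32_t)(digest[off] & 0x7f) << 24)
                 | ((uint32_t)digest[off + 1] << 16)
                 | ((uint32_t)digest[off + 2] << 8)
                 |  (uint32_t)digest[off + 3];
    return bin % 1000000;                    /* six digits */
}

int main(void) {
    /* The RFC test-vector seed; real seeds arrive base32-encoded. */
    const unsigned char seed[] = "12345678901234567890";
    printf("%06" PRIu32 "\n", totp(seed, sizeof seed - 1, time(NULL)));
    return 0;
}
```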